Some Aspects on Identification, Decay Properties and Nuclear Structure of the Heaviest Nuclei
Synthesis of new elements at the upper border of the chart of nuclei and investigation of their decay properties and nuclear structure has been one of the main research topics in low energy nuclear physics for more than five decades. Main items are the quest for the heaviest nuclei that can exist and the verification of the theoretically predicted spherical proton and neutron shells at Z = 114, 120 or 126 and N = 172 or 184. The scope of the present paper is to illustrate some technical and physical aspects of the investigation of the heaviest nuclei ('superheavy nuclei') and to critically discuss some selected results which, from a strict scientific point of view, are not completely clear so far, partly also making suggestions for alternative interpretations. A complete review of the whole field of superheavy element research, however, is beyond the scope of this paper.
Introduction
First extensions of the nuclear shell model [1,2] into regions far beyond the heaviest known doubly magic nucleus, 208 Pb (Z = 82, N = 126), performed more than fifty years ago, led to the prediction of spherical proton and neutron shells at Z = 114 and N = 184 [3,4]. Nuclei in the vicinity of the crossing of both shells were expected to be extremely stabilized against spontaneous fission by fission barriers up to about 10 MeV. Particularly for the doubly magic nucleus a fission barrier of 9.6 MeV and hence a partial fission half-life of 10^16 years [5], in a preceding study even 2×10^19 years [6], were expected. In an allegorical picture these nuclei were regarded to form an island of stability, separated from the peninsula of known nuclei (the heaviest safely identified element at that time was lawrencium (Z = 103)) by a sea of instability, and soon were denoted as 'superheavy' (see e.g. [7]). The theoretical predictions initiated tremendous efforts on the experimental side to produce these superheavy nuclei and to investigate their decay properties as well as their nuclear and atomic structure and their chemical properties. The major and so far only successful method to synthesize transactinide elements (Z > 103) has been complete fusion reactions. These efforts were accompanied by pioneering technical developments a) of accelerators and ion sources to deliver stable heavy ion beams of high intensity, b) of targets able to withstand the high beam intensities for long irradiation times (> several weeks), c) for fast and efficient separation of products from complete fusion reactions from the primary beam and from products of nuclear reactions other than complete fusion, d) of detector systems to measure the different decay modes (α decay, EC decay, spontaneous fission and accompanying γ radiation and conversion electrons), e) of data analysis techniques, and f) for modelling measured particle spectra by advanced simulations, e.g. GEANT4 [8]. Despite all efforts it took more than thirty years until the first serious results on the production of elements Z ≥ 114 (flerovium) were reported [9,10]. However, these first results could not be reproduced independently and are still ambiguous [11]. Nevertheless, during the past twenty years synthesis of elements Z = 113 to Z = 118 has been reported and their discovery was approved by the International Union of Pure and Applied Chemistry (IUPAC) [12,13,14]. The decay data reported for the isotopes of elements Z ≥ 113 that have been claimed to be identified indicate the existence of a region of shell stabilized nuclei towards N = 184, but the center has not been reached so far. Data on the strength of the possible shells are still scarce. Tremendous efforts have also been undertaken on the theoretical side to make predictions on stability ('shell effects'), fission barriers, Q α values, decay modes, halflives, spin and parity of the ground-state as well as of low lying excited states, etc. For about thirty years the calculations were performed using macroscopic-microscopic approaches based on the nuclear liquid drop model [15] and the Strutinsky shell correction method [16]. Although the predicted shell correction energies ('shell effects') disagreed considerably, the models agreed on Z = 114 and N = 184 as proton and neutron shell closures (see e.g. [17,18]).
The situation changed by the end of the 1990s when for the first time results using self-consistent models like Skyrme-Hartree-Fock-Bogoliubov (SHFB) calculations or relativistic mean-field models (RMF) were published [19,20]. Most of the calculations predict Z = 120 as proton shell closure, while others predict Z = 114 (SkI4) or Z = 126 (SkP, SkM*). Skyrme force based calculations agree on N = 184 as neutron shell closure, while the RMF calculations favour N = 172. As a common feature, all these parametrizations and also the macroscopic-microscopic calculations result in a wide area of high shell effects. That behavior is different from that at known shell closures, e.g. Z = 50, N = 50, 82, where the region of high shell effects (or high 2p- and 2n-separation energies) is strongly localized. It is thus not evident whether the concept of nuclear shells as known from the lighter nuclei is still reasonable in the region of superheavy nuclei. It might be wiser to speak of regions of high shell stabilization instead. On the other hand, it has already been discussed extensively by Bender et al. [21] that the proton number Z and the neutron number N at which the shell closure occurs strongly depend on details of the description of the underlying forces, specifically on the values of the effective masses m* and the strength of the spin-orbit interaction. It has also been emphasized in [21] that the energy gap between the spin-orbit partners 2f 5/2 and 2f 7/2 determines whether the proton shell occurs at Z = 114 or Z = 120. Under these circumstances, predictions of shell closures at different proton (Z) and/or neutron (N) numbers by different models may be regarded rather as a feature of 'fine tuning' of the models than as a principal disagreement. Having this in mind, superheavy elements represent an ideal laboratory for investigation of the nuclear ('strong') force. More detailed knowledge of the properties and structure of superheavy nuclei is thus undoubtedly decisive for a deeper understanding of basic interactions. Therefore investigations of decay properties and structure of superheavy nuclei will in future become even more important than the synthesis of new elements. One has, however, to keep in mind that the theoretically predicted high density of nuclear levels in a narrow energy interval above the ground-state may lead to complex α-decay patterns, while on the other hand often only small numbers of decay events are observed. It is therefore tempting to take the average of the measured decay data, which finally results in assigning the measured data to a single transition in the case of the α-decay energies, or to a single nuclear level in the case of lifetimes. Thus fine structure in the α decay or the existence of isomeric levels might be overlooked. Rather, a critical analysis and assessment of the measured data is required. As already indicated above, the expression 'superheavy nuclei' or 'superheavy elements' had originally been suggested for the nuclei in the vicinity of the crossing of the spherical proton and neutron shells at Z = 114 and N = 184. The establishment of deformed proton and neutron shells at Z = 108 and N = 162 [22,23,24,25] resulted in the existence of a ridge between the 'peninsula' of known nuclei and the 'island of stability'. Thus it became common to denote all purely shell stabilized nuclei as 'superheavy', i.e. nuclei with liquid drop fission barriers lower than the zero-point motion energy (<0.5 MeV).
The region of superheavy nuclei that shall be treated in this review is shown in fig. 1.
Experimental Approach
Complete fusion reactions of suitable projectile and target nuclei have been so far the only successful method to produce nuclei with atomic numbers Z > 103 (see e.g. [26,27]). Separation methods have therefore been developed that take into account the specific features of this type of nuclear reactions. Due to momentum conservation the velocity of the fusion product, in the following denoted as 'compound nucleus' (CN), can be written as v CN = (m p / (m p + m t )) × v p , where m p , m t denote the masses of the projectile and target nucleus, and v p the velocity of the projectile. This simply means that a) fusion products are emitted in beam direction (with an angular distribution around zero degrees determined by particle emission from the highly excited CN and by scattering in the target foil) and b) CN are slower than the projectiles. It seemed therefore straightforward to use the velocity difference for separation of the fusion products from the projectiles and from products of nuclear reactions other than complete fusion. Such a method has the further advantage of being fast, as the separation is performed in-flight without the necessity to stop the products; the separation time is thus determined by the flight time through the separation device. In the region of transactinide nuclei this separation technique was applied for the first time at the velocity filter SHIP at GSI, Darmstadt (Germany) [28] for the investigation of evaporation residue production in the reactions 50 Ti + 207,208 Pb, 209 Bi [29,30] and for the identification of element Z = 107 (bohrium) in the reaction 54 Cr + 209 Bi [31]. Separation times in these cases were in the order of 2 µs.
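As a rough numerical illustration of these kinematic relations, the following minimal Python sketch (an illustration only: it uses integer mass numbers instead of exact atomic masses, non-relativistic kinematics, and 48 Ca + 248 Cm at a hypothetical lab energy of 265 MeV as an example) estimates the CN velocity and kinetic energy:

    # Non-relativistic estimate of compound-nucleus (CN) velocity and energy
    # from momentum conservation: v_CN = m_p / (m_p + m_t) * v_p.
    AMU_MEV = 931.494  # atomic mass unit in MeV/c^2

    def cn_kinematics(a_proj, a_targ, e_lab_mev):
        """Return (v_p/c, v_CN/c, E_CN in MeV) for a complete fusion reaction."""
        m_p = a_proj * AMU_MEV               # projectile mass (mass-number approximation)
        m_t = a_targ * AMU_MEV               # target mass
        v_p = (2.0 * e_lab_mev / m_p) ** 0.5     # projectile velocity in units of c
        v_cn = m_p / (m_p + m_t) * v_p           # momentum conservation
        e_cn = 0.5 * (m_p + m_t) * v_cn ** 2     # CN kinetic energy in MeV
        return v_p, v_cn, e_cn

    # Hypothetical example: 48Ca on 248Cm at E_lab = 265 MeV
    v_p, v_cn, e_cn = cn_kinematics(48, 248, 265.0)
    print(f"v_p = {v_p:.3f} c, v_CN = {v_cn:.3f} c, E_CN = {e_cn:.1f} MeV")
    # The CN is roughly six times slower than the projectile,
    # which is the basis of the velocity-filter separation.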
As an alternative separation technique, gas-filled separators have been developed, using the different magnetic rigidities Bρ of fusion products and projectiles as the basis for separation. Early devices used in the investigation of transfermium nuclei were SASSY [32] and SASSY II [33] at LBNL Berkeley, USA, and HECK [34] at GSI. Due to their simpler conception and more compact construction, which allows for separation times below 1 µs, gas-filled separators have meanwhile become a widespread tool for the investigation of the heaviest nuclei and are operated in many laboratories, e.g. RITU (University of Jyväskylä, Finland) [35], BGS (LBNL Berkeley, USA) [36], DGFRS (JINR, Dubna, Russia) [37], GARIS (RIKEN, Wako, Japan) [38], SHANS (IMP, Lanzhou, China) [39], TASCA (GSI, Darmstadt, Germany) [40]. Separation, however, is only one side of the coin; the fusion products also have to be identified reliably.
Having in mind that the essential decay modes of superheavy nuclei are α decay and spontaneous fission (SF), detection methods suited for these items have been developed. After it was shown that the suppression of the projectile beam by SHIP was high enough to use silicon detectors [41], an advanced detection set-up for the investigation of the heaviest nuclei was built [42]. It consisted of an array of seven position-sensitive silicon detectors ('stop detector'), suited for α spectroscopy and registration of heavy particles (fission products, evaporation residues (ER), scattered projectiles etc.). To obtain a discrimination between particles passing the velocity filter and being stopped in the detector (ER, projectiles, products from few nucleon transfer) and radioactive decays (α decay, SF), and to obtain a separation of ER from scattered projectiles or transfer products, a time-of-flight detector was placed in front of the stop detector [43]. Also the possibility to measure γ rays emitted in coincidence with α particles was considered by placing a Ge-detector behind the stop detector. This kind of detection system has been improved over the years at SHIP [44,45] and was also adopted in modified versions and improved by other research groups in other laboratories; examples are GREAT [46], GABRIELA [47] and TASISpec [48]. The improvements essentially comprise the following items: a) the detector set-ups were upgraded by adding a box-shaped Si detector arrangement, placed upstream and facing the 'stop detector', allowing registration with high efficiency of α particles and fission products escaping the 'stop detector'. This was required as the ranges of α particles and fission fragments in silicon are larger than the range of ER; so about half of the α particles and 30-50 % of the fission fragments will leave the 'stop detector' releasing only part of their kinetic energy in it. b) the 'old' Si detectors, where positions were determined by charge division, were replaced by pixelized detectors allowing for a higher position resolution and thus longer correlation times. c) effort was made to reduce the detector noise to gain access to low energy particles (E < 500 keV) like conversion electrons (CE). d) digital electronics was introduced to enable dead-time-free data acquisition and to gain access to short-lived activities with halflives below some microseconds. e) detector geometry and mechanical components were optimized to use several Ge detectors and to minimize scattering and absorption of γ rays in the detector frames in order to increase the efficiency for γ ray detection.
As it is not the scope of this work to present experimental techniques and set-ups in detail, we refer for this item to a recent review paper [49]. Another technical aspect concerns the targets. As production cross-sections for the heaviest elements are small, the highest available beam currents have to be used. Consequently a technology was needed to avoid destruction of the targets, which led to the development of rotating target wheels [50]; the performance of the wheels and the target quality were continuously improved.
As an alternative method to produce superheavy nuclei (SHN), recently the idea of using multinucleon transfer reactions was resumed, see e.g. [51,52]. Indeed, intensive studies of those reactions with respect to SHN production, e.g. 238 U + 238 U or 238 U + 248 Cm, had been performed at the UNILAC at GSI already about forty years ago. A summary of these efforts is given in [53]. The heaviest nuclides that could be identified in these experiments were isotopes of mendelevium (Z = 101). A drawback of these studies, however, was the use of radiochemical methods, which restricted isotope identification to nuclides with 'long' halflives, T 1/2 >> 1 min, giving no access to short-lived nuclei in the region Z ≤ 102. To proceed in studying those reactions new types of separators are required, taking into account not only short halflives down to the µs range, but also the broad angular distribution of the reaction products. A more detailed discussion of this feature, however, is beyond the scope of this review.
Data Selection
Within the commonly used techniques, particles passing the in-flight separator are implanted into a silicon-detector set-up. As the separation of the ER from 'unwanted' particles (scattered projectiles, scattered target nuclei, products from few nucleon transfer etc.) is not clean, there will always be a cocktail of particles registered, forming a background covering the energy range of the α decays of the nuclei to be investigated, and usually also the energy range of the spontaneous fission products. In cases of small production cross sections, typically <1 µb, α decays of the ER are usually not visible in the particle spectra. Further cleaning procedures are required. An often applied procedure is the use of transmission detectors in front of the 'stop detector' and requiring an anticoincidence between events registered in the stop detector and the transmission detector. In practice the efficiency of the latter is never exactly 100 %, therefore there will still be a residual background in the spectra. In cases of a pulsed beam, one can restrict the analysis to the time intervals between the pulses. An example is shown in fig. 2, where the α spectrum taken in an irradiation of 209 Bi with 54 Cr (271 MeV) at SHIP is presented [54]. The black line represents the particle spectrum taken in anti-coincidence with the time-of-flight (TOF) detectors; clearly products ( 211m,212m Po, 214,214m Fr, 215 Ra) stemming from few nucleon transfer reactions are visible, while the ER, 261 Bh, and its daughter product 257,257m Db are buried under the background. In cases of pulsed beams a further purification is achieved by requiring a 'beam-off' condition. Thus the α decays of 261 Bh and 257,257m Db become visible (red line in fig. 2). Such a restriction, however, is not desirable in many cases as it restricts identification to nuclei having lifetimes in the order of the pulse length or longer. A possible way out is the use of genetic correlations between registered events; these may be correlations of the type ER - α, α - α, ER - SF, α - SF etc.
(Fig. 3: a) α spectrum measured in the irradiation of 209 Bi with 50 Ti at SHIP [60] between the beam bursts; the region below 7.6 MeV has been downscaled by a factor of five for better presentation; b) spectrum of the first α decays following the α decay of 258 Db within 250 s; c) same as b), but requiring that both α decays occur in the same detector strip; d) spectrum of correlated (daughter) α events, requiring a maximum position difference of ±0.3 mm and a time difference ∆t < 250 s.)
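The cleaning steps described above amount to simple boolean gates on the recorded event stream. The following Python fragment is only an illustrative sketch; the event fields, the beam pulse length and the toy values are assumptions, not the actual SHIP data format:

    # Illustrative event filtering: anti-coincidence with the TOF detector
    # plus a 'beam-off' gate for a pulsed beam.
    events = [
        # dicts with energy (MeV), a TOF flag (particle passed the TOF detector),
        # and the time within the beam pulse cycle (ms) -- hypothetical values
        {"energy": 9.12, "tof": False, "t_in_cycle_ms": 7.3},
        {"energy": 7.45, "tof": True,  "t_in_cycle_ms": 2.1},
        {"energy": 8.93, "tof": False, "t_in_cycle_ms": 3.4},
    ]

    BEAM_ON_MS = 5.0   # assumed length of the beam pulse within each cycle

    def is_decay_candidate(ev, require_beam_off=True):
        """Keep events without a TOF signal (radioactive decay in the detector,
        not an implanted particle); optionally require the beam-off period."""
        if ev["tof"]:
            return False                       # particle passed the separator -> implantation
        if require_beam_off and ev["t_in_cycle_ms"] < BEAM_ON_MS:
            return False                       # registered during the beam pulse
        return True

    alpha_like = [ev for ev in events if is_decay_candidate(ev)]
    print(len(alpha_like), "decay candidate(s)")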
Genetic Correlations
Establishing genetic relationships between mother and daughter α decays is presently a standard method to identify unknown isotopes or to assign individual decay energies to a certain nucleus. Originally it was developed for the He-jet technique, where the reaction products are stopped and transported to the detection system. As the reaction products were deposited on the surface of the detector, depending on the direction of emission of the α particle either the α particle was registered in the detector while the residual nucleus was kicked off the detector by the recoil of the emitted α particle, or the residual nucleus was shallowly implanted into the detector while the α particle was emitted in the opposite direction and did not hit the detector. To establish correlations, sophisticated detector arrangements were required (see e.g. [55]). The technique of stopping the reaction products in silicon surface barrier detectors after in-flight separation from the projectile beam simplified the procedures considerably [41,56]. Due to implantation into the detector to a depth of ≈(5-10) µm, the residual nucleus was not kicked out of the detector by the recoil of the emitted α particle and therefore the decays of the implanted nucleus and of all daughter products occurred in the same detector; so it was sufficient to establish a chronological relationship between α events measured within the same detector [56]. The applicability of this method was limited by the decay rate in the detector, as the time sequence of decays became accidental if the search time for correlations exceeded the average time distance between two decays. The application of this technique was improved by using position sensitive silicon detectors [42,57]. These detectors deliver the position of implantation as an additional parameter. The position resolution is typically around 300 µm (FWHM), while the range of α particles of (5-10) MeV is (40-80) µm [58] and the dislocation of the residual nucleus due to the α recoil is <1 µm. Thus all subsequent decays of a nucleus will occur at the same position (within the detector resolution). The probability to observe random correlations is reduced significantly by this procedure. In these set-ups position signals were produced by charge division between an upper (top) and a lower (bottom) detector electrode (see e.g. [59] for an advanced version of such a detector set-up). In modern set-ups (see e.g. [48]) these position sensitive detectors have been replaced by pixeled detectors having vertical strips (a typical width is 1 mm) on one side and horizontal strips of the same width on the other side. The position is then given by the coordinates representing the numbers of the horizontal and vertical strips. One advantage of the pixeled detectors is a somewhat better position resolution; taking strip widths of 1 mm each, one obtains a pixel size of 1 mm²; for the SHIP-type detector [59] (5 mm wide strips, position resolution 0.3 mm (FWHM)), taking in the analysis three times the FWHM, one obtains an effective pixel size of 4.5 mm² (3 × 0.3 × 5 mm²). More important, however, is the fact that the position resolution for a pixeled detector is given solely by the strip numbers and is thus independent of the energy deposit of the particle and of the range of the particle (as long as it does not exceed the strip width).
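The position-and-time correlation search can be pictured schematically as follows. This is only an illustrative sketch: the event structure, the window sizes (chosen to mimic the ±0.3 mm / 250 s windows quoted for the SHIP-type detector) and the toy data are assumptions, not an actual analysis code:

    # Schematic genetic-correlation search: ER -> alpha1 -> alpha2 chains
    # within a position window and a maximum time difference.
    from dataclasses import dataclass

    @dataclass
    class Event:
        t: float        # time stamp in seconds
        x: float        # position along the strip in mm
        strip: int      # detector strip number
        kind: str       # "ER", "alpha", "sf"
        energy: float   # MeV

    POS_WINDOW_MM = 0.3     # maximum position difference
    T_WINDOW_S = 250.0      # maximum time difference between subsequent decays

    def find_chains(events):
        """Return lists [ER, alpha1, alpha2] correlated in strip, position and time."""
        chains = []
        ers = [e for e in events if e.kind == "ER"]
        alphas = sorted((e for e in events if e.kind == "alpha"), key=lambda e: e.t)
        for er in ers:
            first_candidates = [a for a in alphas
                                if a.strip == er.strip
                                and abs(a.x - er.x) <= POS_WINDOW_MM
                                and 0 < a.t - er.t <= T_WINDOW_S]
            for first in first_candidates:
                second_candidates = [a for a in alphas
                                     if a.strip == first.strip
                                     and abs(a.x - first.x) <= POS_WINDOW_MM
                                     and 0 < a.t - first.t <= T_WINDOW_S]
                for second in second_candidates:
                    chains.append([er, first, second])
        return chains

    # Hypothetical toy data: one genuine chain plus an uncorrelated alpha elsewhere
    toy = [Event(0.0, 12.40, 3, "ER", 18.0),
           Event(4.2, 12.55, 3, "alpha", 9.17),
           Event(31.8, 12.48, 3, "alpha", 8.46),
           Event(10.0, 30.10, 5, "alpha", 7.69)]
    print(len(find_chains(toy)), "chain(s) found")  # -> 1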
In position sensitive detectors, low energy particles (α particles escaping the detector, conversion electrons) deliver small signals, often influenced by the detector noise and by nonlinearities of the amplifiers and ADCs used, which significantly lowers the position resolution. In many cases the signals are missing altogether, as they are lower than the detection threshold. Another drawback is that at electron energies of around 300 keV the range in silicon becomes ≈300 µm and thus reaches the detector resolution, which then requires enlarging the position window for the correlation search. But also one drawback of the pixeled detectors should be at least mentioned: due to the small widths of the strips (typically 1 mm), already for a notable fraction of the implanted particles the energy signal is split between two strips, making sophisticated data analysis algorithms necessary to reconstruct the energy of the particles. The energy split between two strips also introduces some ambiguities in the determination of the position.
An illustrative example of the benefit of including the position in the correlation search is given in fig. 3. Here the α spectrum obtained in an irradiation of 209 Bi with 50 Ti at SHIP [60] (using the same set-up as in [59]) between the beam bursts is shown in fig. 3a. Besides 258 Db (9.0-9.4 MeV), produced in the reaction 209 Bi( 50 Ti,n) 258 Db, and its decay products 254 Lr (8.35-8.50 MeV) and 254 No (8.1 MeV, EC decay daughter of 254 Lr), also α lines from 212g,212m At (7.68, 7.90 MeV) and 211 Po (7.45 MeV) are present; these activities were produced by few nucleon transfer reactions; in addition the α line of 215 Po (6.78 MeV) is visible, stemming from α decay of 223 Ra (T 1/2 = 11.43 d) produced in a preceding experiment. In fig. 3b the spectrum of the first α particles following an α decay of 258 Db (the energy region is marked by the red lines in fig. 3a) within 250 s is shown; requiring in addition that both α decays occur in the same detector strip results in fig. 3c; the result of the position correlation analysis is finally shown in fig. 3d. Here, in addition, the occurrence of both α events within a position difference of ±0.3 mm is required. The background of α decays is completely gone, and also details in the energy distribution of the α events become visible; the α events at (7.7-7.9) MeV stem from the decay of 250 Md (α-decay daughter of 254 Lr) and those at 7.45 MeV are here from 250 Fm (α-decay daughter of 254 No, EC-decay daughter of 250 Md). The events at (8.7-8.8) MeV are from the decay of 253g,253m Lr, the α decay daughters of 257g,257m Db, which was produced to a small extent in the reaction 209 Bi( 50 Ti,2n) 257 Db.
Summing of α-particle and recoil energies
Implantation of ER into a silicon detector has consequences for measuring the energies of α particles. One item concerns summing of the α particle energy and the energy transferred by the α particle to the residual nucleus, which will in the following be denoted as recoil energy E rec . The total decay energy Q α (for a ground-state to ground-state transition) is given by Q α = (m mother − m daughter − m α )c². This energy splits into two components, E α = Q α × m daughter /(m daughter + m α ) and E rec = Q α × m α /(m daughter + m α ). Here m mother , m daughter denote the masses of the mother and daughter nucleus ‡ , and E α the kinetic energy of the α particle. ‡ strictly speaking, the atomic mass, not the mass of a bare nucleus. Evidently the recoil energy E rec = (m α /m daughter ) × E α depends strongly on the mass of the daughter nucleus and the kinetic energy of the α particle. This behavior is shown in fig. 4, where E rec is plotted versus E α for some isotopes. The black squares represent the results for 210,211,212 Po and 216,217 Th, which are often produced in reactions using lead or bismuth targets by nucleon transfer or in so-called 'calibration reactions' (reactions used to check the performance of the experimental set-up), the red dots are results for 'neutron deficient' isotopes in the range Z = (98-110), and the blue triangles, finally, are results for neutron-rich SHN produced so far in irradiations of actinide targets with 48 Ca. Evidently the recoil energies for the polonium and thorium isotopes are 15-30 keV higher than for the Z = (98-110) isotopes, while the differences between the latter and the 'neutron rich SHN' are typically in the order of 10 keV; specifically striking is the difference of ∆E rec = 65 keV between 212 Po and 294 Og, both having nearly the same α-decay energy. In practice, however, the differences are less severe: the measured energy of the α particle is not simply the sum of both contributions, as due to the high ionisation density of the heavy recoil nucleus part of the created charge carriers will recombine and thus only a fraction of them will contribute to the height of the detector signal, hence E α (measured) = E α + a×E rec with a < 1 giving the fraction of the contribution of the recoil energy, which can be considered to be in the order of a ≈ 0.3 [61]. One should, however, keep in mind that this analysis was performed for nuclei around A = 150. As the ionization density increases for heavier nuclei (larger Z), the recombination might be larger for SHN, thus a < 0.3. Nevertheless, different recoil contributions should be considered when calibrations are performed. Further discussion of this item is found in [59].
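A short numerical illustration of this mass dependence (a sketch only; the α energies are approximate literature values chosen for illustration, the 11.65 MeV line of 212 Po being the isomeric decay, and integer mass numbers are used instead of exact masses):

    # Recoil energy E_rec = (m_alpha / m_daughter) * E_alpha and the fraction
    # a * E_rec that adds to the measured alpha signal (a ~ 0.3, see text).
    def recoil_energy_kev(e_alpha_mev, a_daughter):
        return e_alpha_mev * 4.0 / a_daughter * 1000.0   # mass-number approximation

    cases = {
        "212Po isomeric line (E_alpha ~ 11.65 MeV, daughter A = 208)": (11.65, 208),
        "294Og               (E_alpha ~ 11.66 MeV, daughter A = 290)": (11.66, 290),
    }
    for label, (e_a, a_d) in cases.items():
        e_rec = recoil_energy_kev(e_a, a_d)
        print(f"{label}: E_rec = {e_rec:.0f} keV, "
              f"registered contribution ~ {0.3 * e_rec:.0f} keV (a = 0.3)")
    # The ~60 keV difference in E_rec corresponds to the Delta E_rec ~ 65 keV
    # quoted in the text for nuclei with nearly equal alpha energies.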
Summing of α particle and conversion electron (CE) energies
One more problem is connected with energy summing of α particles and conversion electrons (CE) in cases where excited levels are populated which decay towards the ground state by internal conversion, leading to a shift of the measured α energies towards higher values [62]. An illustrative example is shown in fig. 5, where the decay of 255 Rf is presented. The decay scheme is shown in fig. 5a; α decay populates the 9/2 − [734] level in 251 No, which then decays by γ emission either into the 7/2 + [624] ground-state (E γ = 203.6 keV) or into the 9/2 + state (E γ = 143.3 keV) (fig. 5b) [63]. The M1 transition 9/2 + → 7/2 + is highly converted. In fig. 5c we present the energy distributions of α particles in coincidence either with the E γ = 203.6 keV line (black line) or with the E γ = 143.3 keV line (red line). We observe a shift in the α energies by ∆E = 38 keV, which is even larger than the CE energy (31 keV) [64], indicating that not only the CE contribute to the energy shift but also the energy released during deexcitation of the atomic shell (e.g. Auger electrons).
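A rough energy accounting can make the size of the observed shift plausible. In the sketch below the L-shell binding energy of roughly 29 keV in nobelium is an assumed round number, and the level spacing is taken as the difference of the two γ energies; the sketch merely decomposes the measured 38 keV shift into the CE energy and a remainder attributed to atomic relaxation:

    # Rough accounting for the measured alpha-energy shift of 255Rf decays
    # in coincidence with the 143.3 keV gamma line (see text and fig. 5).
    e_gamma_to_gs   = 203.6   # keV, 9/2- -> 7/2+ ground state
    e_gamma_to_9_2p = 143.3   # keV, 9/2- -> 9/2+ state
    e_level = e_gamma_to_gs - e_gamma_to_9_2p      # 9/2+ level energy ~ 60.3 keV

    e_binding_L = 29.0        # keV, assumed L-shell binding energy in No
    e_ce = e_level - e_binding_L                   # conversion-electron energy ~ 31 keV

    shift_measured = 38.0     # keV, observed alpha-energy shift
    e_atomic_registered = shift_measured - e_ce    # part of the atomic relaxation energy
    print(f"level energy      : {e_level:.1f} keV")
    print(f"CE energy         : {e_ce:.1f} keV")
    print(f"relaxation summed : {e_atomic_registered:.1f} keV of ~{e_binding_L:.0f} keV")
    # The shift exceeds the CE energy because part of the ~29 keV released in the
    # atomic de-excitation (Auger electrons, low-energy X-rays) is absorbed as well.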
α particles escaping the detector
As the implantation depth in the detector is typically ≤10 µm and thus considerably smaller than the range of α particles in silicon (>50 µm at E > 8 MeV), only part of them (50-60 %) will be registered with their full energy. So if one observes, on the basis of a small number of events, besides a 'bulk' at a mean energy E also α particles with energies E − ∆E, it is a priori not possible to state whether these events represent decays into higher lying daughter levels or whether they are just α particles of energy E escaping the detector and depositing only an energy E − ∆E. However, some arguments can be given on the basis of the probability to observe the latter events. As an illustrative example the α spectrum of 253 No [65] is given in fig. 6.
Here the α decays in coincidence with the 279.5 keV γ line are shown, which represents the transition of the 9/2 − [734] level in 249 Fm, populated by the α decay, into the 7/2 + [624] ground-state. In that case one obtains a clean α spectrum of a single transition, not disturbed by energy summing with CE. Besides the 'peak' at E α = 8005 keV, a bulk of events at E < 2 MeV is visible. About 55 % of the α particles are registered in the peak, about 32 % are found at E < 2 MeV; the rest (13 %) is distributed over the energy range in between. In the insert the energy range E = (7.2-8.2) MeV is expanded. It is clearly seen that the number of α particles in the range taken here, somewhat arbitrarily, as extending down to E mean − 570 keV is small. The 'peak' is here defined as the energy region E > 7935 keV, as at this energy the number of events (per 5 keV) has dropped to 5 % of the number in the peak maximum (region 1). The number of events in the energy interval (7835, 7935) keV (region 2) is about 1.2 % of that in the 'peak' (region 1), while the number of events in the energy interval (7435, 7835) keV (region 3) is about 0.8 %. These small numbers indicate that, at low numbers of observed total decays, it is quite unlikely that events with energies some hundred keV lower than the 'bulk' energy represent α particles from the 'bulk' leaving the detector after a nearly complete energy loss. They rather stem from decays into excited daughter levels (but possibly influenced by energy summing with CE) § .
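To turn this argument into numbers, one can ask how likely it is to see even one such degraded event when only a handful of decays has been recorded. The sketch below uses the escape fractions quoted above for the 253 No example (≈1.2 % and ≈0.8 % per energy window, relative to the full-energy peak) together with a hypothetical sample of ten observed full-energy decays:

    # Probability of observing at least one 'escape' event a few hundred keV
    # below the full-energy peak, given the per-decay fractions from the
    # 253No example and a small, hypothetical number of observed decays.
    def p_at_least_one(fraction_per_peak_count, n_peak_events):
        p_none = (1.0 - fraction_per_peak_count) ** n_peak_events
        return 1.0 - p_none

    n_observed = 10                      # hypothetical number of full-energy decays
    for label, frac in [("region 2 (7835-7935 keV)", 0.012),
                        ("region 3 (7435-7835 keV)", 0.008)]:
        print(f"{label}: P(>=1 escape event among {n_observed} decays) "
              f"= {p_at_least_one(frac, n_observed):.2f}")
    # With only ~10 decays the chance of a degraded escape event in either window
    # stays around ten percent, so low-energy lines seen repeatedly are more likely
    # to be genuine transitions into excited daughter levels.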
Compatibility of α energy measurements in the region of SHN
As the numbers of decays observed in individual experiments are usually quite small, it seems of high interest to merge data from different experiments to enhance statistics and possibly extract details on the decay properties, e.g. fine structure in the α decay. One drawback concerning this item is possible energy summing between α particles and CE as discussed above; another problem is the compatibility of the decay energies measured in the different experiments and thus a consequence of calibration. This is not necessarily a trivial problem, as shown in fig. 7 for the α decay of 272 Bh, a member of the decay chain of 288 Mc. This decay chain has been investigated so far at three different separators, DGFRS at FLNR, Dubna, Russia [66], TASCA at GSI, Darmstadt, Germany [67], and BGS at LBNL, Berkeley, USA [68]. The energy distributions of the odd-odd nuclei occurring in the decay chain of 288 Mc are in general quite broad, indicating decays into different daughter levels accompanied by energy summing of α particles and CE. Solely for 272 Bh a 'quite narrow' line is observed. The results of the different experiments are compared in fig. 7. To avoid ambiguities due to the worse energy resolution of 'stop + box' events, we restricted the comparison to events with full energy release in the 'stop' detector. Evidently there are large discrepancies in the α energies: the DGFRS experiment [66] delivers a mean value E α (DGFRS) = 9.022 ± 0.012 MeV, the TASCA experiment [67] E α (TASCA) = 9.063 ± 0.014 MeV, and the BGS experiment E α (BGS) = 9.098 ± 0.022 MeV, hence differences E α (TASCA) − E α (DGFRS) = 41 keV, E α (BGS) − E α (TASCA) = 35 keV, E α (BGS) − E α (DGFRS) = 76 keV, which are by far larger than the calibration uncertainties in the range of 10-20 keV that might usually be expected. That is a very unsatisfying situation.
(Fig. 7: α-energy distributions measured for 272 Bh: a) DGFRS experiment [66]; b) TASCA experiment [67]; c) BGS experiment [68].)

5. Discovery of elements Z = 107 (bohrium) to Z = 112 (copernicium) and their approval by the IUPAC

The elements Z = 107 to Z = 112 were first synthesized at the velocity filter SHIP, GSI, in the period 1981-1996. The corresponding isotopes were identified after implantation into arrangements of silicon detectors by registering their α decay chains. Identification was based on decay properties (α energies, halflives) of at least one member of the decay chain that had been either known from literature or had been synthesized and investigated at SHIP in preceding experiments. The latter is the main difference to elements Z ≥ 114, where the decay chains are not connected to the region of known and safely identified isotopes. Nevertheless, the elements Z = 107 to Z = 112 in some cases already illustrate the difficulties in unambiguously identifying an isotope on the basis of only a few observed decays, and also the problems faced by the evaluators in charge of approving the discovery of a new element. In order not to go beyond the scope of this paper, in the following only the reports of the Transfermium Working Group of the IUPAC and IUPAP (TWG) (for elements bohrium to meitnerium) or the IUPAC/IUPAP Joint Working Party (JWP) (for elements darmstadtium to copernicium) concerning the GSI new element claims are considered. Other claims on the discovery of one or more of these elements are not discussed.
Element 107 (Bohrium)
The first isotope of element 107, 262 Bh, was synthesized in 1981 in the reaction 209 Bi( 54 Cr,n) 262 Bh [31]. Altogether six decay chains were observed at that time. Prior to the approval of the discovery by the IUPAC two more experiments were performed. The complete results are reported in [69]: two states of 262 Bh decaying by α emission, 262g Bh (T 1/2 = 102 ± 26 ms (15 decays)) and 262m Bh (T 1/2 = 8.0 ± 2.1 ms (14 decays)), as well as the neighbouring isotope 261 Bh (10 decays) were observed. Thus approval of the discovery of element 107 was based on 'safe ground', and it was stated by the TWG [70]: 'This work ([31]) is considered sufficiently convincing and was confirmed in 1989 [69].'
Element 108 (Hassium)
Compared to bohrium, the data for hassium on which the discovery was approved were scarce. In the first experiment performed in 1984 [71] three decay chains of 265 Hs were observed in an irradiation of 208 Pb with 58 Fe. In two cases a full energy event of 265 Hs was followed by an escape event of 261 Sg, while in one case an escape event of 265 Hs was followed by a full energy event of 261 Sg. The α particle from the granddaughter 257 Rf was measured in all three cases with full energy. In a follow-up experiment only one decay chain of the neighbouring isotope 264 Hs was observed in an irradiation of 207 Pb with 58 Fe [72]. The chain consisted of two escape events followed by an SF event, which was attributed to 256 Rf on the basis of the decay time. Nevertheless the discovery of element 108 was approved on the basis of these data and it was stated by the TWG [70]: 'The Darmstadt work in itself is sufficiently convincing to be accepted as a discovery.'
Element 109 (Meitnerium)
The discovery of element 109 was connected with more severe problems. In the first experiment at SHIP, performed in summer 1982, only one decay chain, shown in fig. 8, was observed [73] in an irradiation of 209 Bi with 58 Fe.
It started with an α event with full energy, followed by an escape event, and was terminated by an SF event. The latter was attributed to 258 Rf produced by EC decay of 258 Db. A thorough investigation of the data showed that the probability for the event sequence to be random was <10^−18 [74]. Among all possible 'starting points' (energetically possible evaporation residues) 266 Mt was the most likely one [74]. In a second experiment performed early in 1988 (January 31 to February 13) two more decay chains, also shown in fig. 8, were observed [75]; chain number 2 consisted of four α events, two with full energy and two escape events, attributed to 266 Mt (first chain member) and 258 Db (third chain member); the two full energy events were attributed to 262 Bh (second chain member) and 254 No (fourth chain member), which was interpreted to be formed by EC decay of 254 Lr. The third chain consisted of two α decays, which were assigned to 262 Bh and 258 Db on the basis of the measured energies, while the α particle from the decay of 266 Mt was not observed. The non-registration of 266 Mt could have different reasons: a) 262 Bh was produced directly via the reaction 209 Bi( 58 Fe,αn) 262 Bh. This possibility was excluded as this reaction channel was assumed to be considerably weaker than the 1n deexcitation channel. And indeed in a later experiment, performed after the approval of element 109 by the IUPAC, twelve more decay chains of 266 Mt were observed, but no signature of an αn deexcitation channel was found [76]. b) 266 Mt has a short-lived isomer decaying in-flight during separation. This interpretation seemed unlikely, as in the case of α emission the recoil of the α particle would have kicked the residual nucleus out of its trajectory, so it would not have reached the detector placed in the focal plane of SHIP; similarly, in the case of decay by internal transitions one could expect that the emission of Auger electrons following internal conversion would have changed the charge state of the atom, so it would also have been kicked out of its trajectory. c) a short-lived isomer may have decayed within 20 µs after implantation of the ER, i.e. during the dead time of the data acquisition system, and thus not have been recorded. Also for this interpretation no arguments were found in the later experiment [76]. d) the α particle from the decay of 266 Mt escaped with an energy loss <670 keV, which was the lower detection limit in this experiment. This was seen as the most reasonable case. To summarize: the three chains presented strong evidence for having produced an isotope of element 109, but it still may be discussed whether the presented data really provided an unambiguous proof. However, the TWG did not share those concerns, leading to the assessment 'The result is convincing even though originally only one event was observed' and to the conclusion 'The Darmstadt work [73] gives confidence that element 109 has been observed' [70].
Element 110 (Darmstadtium)
After a couple of years of technical development, including the installation of a new low energy RFQ-IH acceleration structure coupled to an ECR ion source at the UNILAC and the construction of a new detector set-up, experiments on the synthesis of new elements were resumed in 1994. The focal plane detector was surrounded by a 'box' formed of six silicon detectors, allowing the full energy of α particles escaping the 'stop' detector to be measured with a probability of about 80 % as the sum E = ∆E(stop) + E residual (box). The first isotope of element 110, 269 Ds, was synthesized in 1994 in the reaction 208 Pb( 62 Ni,n) 269 Ds [44]; four decay chains were observed. In three of the four chains α decays with full energy were observed down to 257 Rf or 253 No, respectively. For the fourth decay chain of 269 Ds only an energy loss signal was measured, while α decays of 265 Hs and 261 Sg were registered with full energy. Further members of the decay chain ( 257 Rf, 253 No, etc.) were not recorded. In a later re-analysis of the data this chain could not be reproduced any more, similarly to the case of decay chains reported from irradiations of 208 Pb with 86 Kr at the BGS, Berkeley, which were interpreted to start from an isotope of element 118 [77,78]. This deficiency, however, did not concern the discovery of element 110 and it was stated by the JWP of the IUPAC/IUPAP: 'Element 110 has been discovered by this collaboration' [79].
Element 111 (Roentgenium)
The first experiment on the synthesis of element 111 was performed in continuation of the element 110 discovery experiment. In an irradiation of 209 Bi with 64 Ni three α decay chains were observed. They were assigned to 272 Rg, produced in the reaction 209 Bi( 64 Ni,n) 272 Rg [80]. Two of these chains ended with the α decay of 260 Db; for the first member, 272 Rg, only a ∆E signal was registered. In the third chain α decay was observed down to 256 Lr and all α particles from the decay chain members ( 272 Rg, 268 Mt, 264 Bh, 260 Db, 256 Lr) were registered with 'full' energy. It should be noted that 268 Mt and 264 Bh had also not been observed before. The JWP members, however, were quite cautious in that case [79]. It was remarked that the α energy of 264 Bh in chain 1 was quite different from the values in chains 2 and 3 and that the α energy E = 9.146 MeV of 260 Db in chain 2 was in fact in-line with the literature value (given e.g. in [81]), but quite different from the value E = 9.200 MeV in chain 3. Further it was noted that the time difference ∆t( 260 Db - 256 Lr) = 66.3 s was considerably longer than the known half-life of 256 Lr (T 1/2 = 28±3 s [81]). So it was stated (JWP assessment): 'The results of this study are definitely of high quality but there is insufficient internal redundancy to warrant certitude at this stage. Confirmation by further results is needed to assign priority of discovery to this collaboration' [79]. In a further experiment at SHIP three more decay chains were observed which confirmed the previous results [82], leading to the JWP statement: 'Priority of discovery of element 111 by Hofmann et al. collaboration in [80] has been confirmed owing to the additional convincing observations in [82]' [83]. For completeness it should be noted that the SHIP results were later confirmed in experiments performed at the GARIS separator at RIKEN, Wako (Japan), where the same reaction was used [84], and at the BGS separator at LBNL Berkeley (USA), where, however, a different reaction, 208 Pb( 65 Cu,n) 272 Rg, was applied [85].
Element 112 (Copernicium)
Concerning the discovery of element 112 the situation was even more complicated. In a first irradiation of 208 Pb with 70 Zn, performed at SHIP early in 1996, two decay chains interpreted to start from 277 Cn were reported [86]. In chain 1 α decays down to 261 Rf, in chain 2 α decays down to 257 No were observed. Both chains showed severe differences in the chain members α(1) and α(2). (In the following α(n) in chains 1 and 2 will be denoted as α(n1) and α(n2), respectively.) The α energies for 277 Cn differed by 0.22 MeV, while the 'lifetimes' τ (time differences between ER implantation and α decay) were comparable: E α11 = 11.65 MeV, τ α11 = 400 µs and E α12 = 11.45 MeV, τ α12 = 280 µs. For α(2) ( 273 Ds) the discrepancies were more severe: E α21 = 9.73 MeV, τ α21 = 170 ms and E α22 = 11.08 MeV, τ α22 = 110 µs. It thus seemed likely that the α decays of 273 Ds (and thus also of 277 Cn) occurred from different levels. This was commented on in the JWP report [79] as 'Redundancy is arguably and unfortunately confounded by the effects of isomerism. The two observed alphas from 277 112 involve different states and lead to yet two other very different decay branches in 273 110. (...) The first two alpha in the chains show no redundancy.' It was further remarked that the energy of 261 Rf in chain 2 (E = 8.52 MeV) differed by 0.24 MeV from the literature value [81]. Indeed it was later shown by other research groups [87,88,89] that two long-lived states decaying by α emission exist in 261 Rf, with one state having a decay energy and half-life of E α = 8.51±0.06 MeV and T 1/2 = 2.6 +0.7 −0.5 s [89], in-line with the data from chain 2 (E α52 = 8.52 MeV, τ α52 = 4.7 s). But this feature was not known when the JWP report [79] was written. Consequently it was stated 'The results of this study are of characteristically high quality, but there is insufficient internal redundancy to warrant conviction at this stage. Confirmation by further experiments is needed to assign priority of discovery to this collaboration.' One further experiment was performed at SHIP in spring 2000, where one more decay chain was observed, which resembled chain 2 but was terminated by a fission event [82]. The latter was remarkable, as the fission branch of 261 Rf was estimated at that time as b sf < 0.1. But also here later experiments [87,88,89] established a high fission branch for the 2.6 s activity, with the most recent value b sf = 0.82±0.09 [89]. Then, during the preparation of the manuscript [82], a 'disaster' happened: in a re-analysis of the data from 1996 chain 1 could not be reproduced, similarly to the case of one chain in the element 110 synthesis experiment (see above) and of decay chains reported from irradiations of 208 Pb with 86 Kr at the BGS, Berkeley, which were interpreted to start from an isotope of element 118 [77,78]. It was shown that this chain had been created spuriously [82]. At least this finding could explain the inconsistencies concerning the data for 277 Cn and 273 Ds in chains 1 and 2. On this basis the JWP concluded [83]: 'In summary, though there are only two chains, and neither is completely characterized on its own merit. Supportive, independent results on intermediates remain less than completely compelling at that stage.'
In the following years two more experiments at SHIP using the reaction 70 Zn + 208 Pb were performed without observing any further chain [90]; however, decay studies of 269 Hs and 265 Sg specifically confirmed the data for 261 Rf [87], while the decay chains of 277 Cn were reproduced in an irradiation of 208 Pb with 70 Zn at the GARIS separator at RIKEN, Wako (Japan) [91,92]. On this basis the JWP concluded in their report from 2009 [93]: 'The 1996 collaboration of Hofmann et al. [86] combined with the 2002 collaboration of Hofmann et al. [82] are accepted as the first evidence for synthesis of element with atomic number 112 being supported by subsequent measurements of Morita [91,92] and by assignment of decay properties of likely hassium intermediates [87,94,95] in the decay chain of 277 112'.
6. Some critical assessments of decay chains starting from elements Z ≥ 112 and discussion of decay data of the chain members

The experiments on the synthesis of the new elements with Z = 113 to Z = 118 reflect the extreme difficulties connected with the identification of new elements on the basis of observing their decay when only very few nuclei are produced and the decay chains end in a region where no isotopes had been identified so far or their decay properties are only scarcely known. Nevertheless, the discovery of elements Z = 113 to Z = 118 has been approved by IUPAC and discovery priority was settled [12,13,14], and names have also been proposed and accepted.
(Fig. 9: a) decay data reported for 293 Lv and its daughter products [27]; b) decay chain as assigned in [96]; c) decay chain as assigned in [59]; d) decay chain as assigned in [97]; e) decay data reported for 291 Lv and its daughter products [27].)
Ambiguities in the assignment of decay chains - case of 293 Lv - 291 Lv
As already briefly mentioned in sect. 3, the continuous implantation of nuclei, the overlap of low energy particles passing the separator with the α-decay energies of the expected nuclei, and efficiencies lower than 100 % of the detectors used for anti-coincidence to discriminate between 'implantation of nuclei' and 'decays in the detector' introduce a background problem. It may be severe if only very few decay chains are observed, since at a larger number of events single chains containing a member that does not fit the rest of the data can easily be removed.
An example illustrating these difficulties is given in fig. 9. The decay chain was observed at SHIP, GSI, in an irradiation of 248 Cm with 48 Ca at a bombarding energy E lab = 265.4 MeV [59]. A first analysis of the data resulted in an implantation of an ER followed by three α decays; the chain was terminated by a spontaneous fission event [96], as shown in fig. 9b. It was tentatively assigned to the decay of 293 Lv. After some further analysis, one more α decay (an event that occurred during the beam-on period), placed at the position of 285 Cn, was included in the chain, which was still tentatively assigned to the decay of 293 Lv [59], as shown in fig. 9c. However, except for 293 Lv, the agreement of the decay properties (α energies, lifetimes) of the chain members with the literature data [27], shown in fig. 9a, was rather bad. Therefore in a more recent publication [97] the assignment was revised, including a low energy signal of 0.244 MeV, registered during the beam-off period but without a position signal, in the chain at the place of 283 Cn. The chain is now assigned to 291 Lv, as shown in fig. 9d. The difference of 240 keV in the α decay energies can in principle be explained by decays into different daughter levels. As in [27] only three decays are reported, it might be that the decay with the lower energy simply was not observed in the experiments from which the data in [27] were obtained. Such an explanation is in principle reasonable. For E α = 10.74 MeV one obtains a theoretical α decay half-life of T α = 32 ms using the formula suggested by Poenaru [98] with the parameter modification suggested in [99], which has been proven to reproduce α decay halflives in the region of the heaviest nuclei very well [100]. The value is indeed in good agreement with the reported half-life of T α = 19 +17 −6 ms [27]. For E α = 10.50 MeV one obtains T α = 139 ms. This means that one expects some 20 % intensity for an α transition with an energy lower by about 250 keV, provided that the α decay hindrance factors are comparable for both transitions. More severe, however, seems the lifetime of 279 Ds, which is a factor of twenty longer than the reported half-life of T α = 0.21±0.04 s. The probability to observe an event after twenty halflives is only ≈10^−6. To conclude: it is certainly alluring to assign this chain to 291 Lv; the assignment, however, is not unambiguous. As long as it is not confirmed by further data, it should be taken with caution.
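To illustrate the kind of half-life estimate entering this argument, the following sketch applies the Viola-Seaborg systematics with the parameter set of Sobiczewski, Patyk and Cwiok, rather than the Poenaru formula quoted in the text, and gives the unhindered (even-even baseline) estimate; it reproduces the quoted values only approximately:

    # Alpha-decay half-life estimate from the Viola-Seaborg systematics,
    # log10(T1/2 / s) = (a*Z + b)/sqrt(Q_alpha) + c*Z + d,
    # with the Sobiczewski-Patyk-Cwiok parameters (unhindered estimate).
    A_VS, B_VS, C_VS, D_VS = 1.66175, -8.5166, -0.20228, -33.9069

    def q_alpha_mev(e_alpha_mev, a_parent):
        """Approximate Q_alpha from the alpha kinetic energy (recoil correction,
        mass-number approximation, no electron-screening correction)."""
        return e_alpha_mev * a_parent / (a_parent - 4.0)

    def t_half_seconds(z_parent, q_alpha):
        log10_t = (A_VS * z_parent + B_VS) / q_alpha ** 0.5 + C_VS * z_parent + D_VS
        return 10.0 ** log10_t

    # 291Lv (Z = 116): the two alpha energies discussed in the text
    for e_alpha in (10.74, 10.50):
        q = q_alpha_mev(e_alpha, 291)
        print(f"E_alpha = {e_alpha:.2f} MeV -> T_alpha ~ {t_half_seconds(116, q)*1e3:.0f} ms")
    # -> roughly 30 ms and 130 ms, of the same order as the 32 ms and 139 ms
    #    obtained with the Poenaru formula in the text.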
Ambiguities in the assignment of decay chains - case of 289,288 Fl
The observation of a decay chain, registered in an irradiation of 244 Pu with 48 Ca at E lab = 236 MeV at the Dubna Gas-Filled Recoil Separator (DGFRS) and assigned to start from 289 Fl, was reported by Oganessian et al. [10]. The data presented in [10] are shown in fig. 10a. In a follow-up experiment two more decay chains with different decay characteristics were observed and attributed to the neighbouring isotope 288 Fl [101], while in an irradiation of 248 Cm with 48 Ca at E lab = 240 MeV one decay chain, shown in fig. 10c, was registered [102]. The decay properties of members 2, 3 and 4 of the latter chain were consistent with those for 288 Fl, 284 Cn, and 280 Ds. Consequently the chain (fig. 10c) was interpreted as starting from an element 116 isotope decaying into 288 Fl. However, results from later irradiations of 244 Pu with 48 Ca were interpreted in a different way [103]. The activity previously attributed to 288 Fl was now assigned to 289 Fl, while no further events having the characteristics of the chain originally attributed to 289 Fl [10] were observed. It was now considered as a possible candidate for 290 Fl [104]. But this chain was not mentioned later as a decay chain stemming from a flerovium isotope [105]. However, a new activity, consisting of an α decay of E α = 9.95±0.08 MeV and T 1/2 = 0.63 +0.27 −0.14 s followed by a fission activity, was now attributed to 288 Fl; a comparison of the 'old' and 'new' decay data is shown in fig. 11. The 'new' results are taken from the recent review [27], the 'old' results for 288 Fl are the mean values of the three decays reported in [101,102] as evaluated by the author. It should be noticed for completeness that Kaji et al. [107] also observed a chain consisting of three α particles terminated by a fission event. The chain was not regarded as unambiguous, and so α 3 and the SF event were only tentatively assigned to 284 Cn (α 3 ) and 280 Ds (SF). In a more recent decay study of 288 Fl using the production reaction 244 Pu( 48 Ca,4n) 288 Fl a small α decay branch (b α ≈ 0.02) and spontaneous fission of 280 Ds were confirmed [108].
Ambiguities in the assignment of decay chains - case of 287 Fl - 283 Cn
A couple of weeks after the submission of [10] (received March 9, 1999) another paper was submitted by Yu.Ts. Oganessian et al. reporting on the synthesis of a flerovium isotope with mass number A = 287 [9] (received April 19, 1999). The experiment had been performed at the energy filter VASSILISSA at FLNR-JINR Dubna, and the decay chains (shown in fig. 12a) were observed in bombardments of 242 Pu with 48 Ca at E lab = 230-235 MeV. Two chains consisting of an α decay (in one case only an 'escape' α particle was registered) followed by spontaneous fission were observed. Although the lifetimes of the SF events were longer than those of two SF events correlated to ER observed in a preceding irradiation of 238 U with 48 Ca at VASSILISSA [109] (fig. 12b), they were attributed to the same isotope, 283 Cn, and the α decays were attributed to 287 Fl. In a later irradiation of 238 U with 48 Ca at the same set-up two more SF events attributed to 283 Cn were observed [110]. In irradiations of 242 Pu and 238 U with 48 Ca performed at the DGFRS [111], however, a different decay pattern was observed for 283 Cn: in part of the chains α decay down to 275 Hs was observed and the chain was terminated by SF of 271 Sg; in one chain observed in the irradiation of 238 U also α decay of 271 Sg was observed and the chain was terminated by SF of 267 Rf. The previously observed chains at VASSILISSA were suspected to represent a less probable decay mode [104], but were not listed any more in later publications (see e.g. [105]). The 'DGFRS results' were in-line with the data for 283 Cn and 287 Fl later obtained in the reaction 238 U( 48 Ca,3n) 283 Cn investigated at SHIP, GSI Darmstadt [112] and at GARIS II, RIKEN, Wako [113], as well as in the reaction 242 Pu( 48 Ca,3n) 287 Fl investigated at the BGS, LBNL Berkeley [114], while the 'VASSILISSA events' were not observed. It should be noted, however, that due to the more sensitive detector system used in [112] compared to that used in [111], in cases where the α decay of 283 Cn was denoted as 'missing' in [111], since fission directly followed the α decay of 287 Fl, the α decay of 283 Cn probably was not missing, but fission occurred from 283 Cn [112]. The discrepancy between the 'DGFRS results' and the 'VASSILISSA results' could not be clarified so far, but it should be noted that the latter were not considered any more in later reviews of SHE synthesis experiments at FLNR-JINR Dubna [27,105]. However, the 'VASSILISSA results' were again discussed in the context of a series of events, registered in an irradiation of 248 Cm with 54 Cr at SHIP, which was regarded as a signature for a decay chain starting from an isotope of element 120 [97]. It should be noted that a critical re-inspection of this sequence of events showed that it does not fulfil the physics criteria for a 'real' decay chain and that the probability for it to be a real chain is p << 0.01 [115]. Nevertheless the discussion in [97] can be regarded as an illustrative example of trying to match doubtful data from different experiments. Therefore it will be treated here in some more detail.
The data are shown in table 1 (comparison of the 'event sequence' from the 54 Cr + 248 Cm irradiation at SHIP [97] with the VASSILISSA results for 287 Fl and 283 Cn; data taken from [97]). Evidently the events α 4 and 'SF' would represent 287 Fl and 283 Cn if the chain starts at 299 120. α 4 is recorded as an α particle escaping the detector, releasing only an energy loss of ∆E = 0.353 MeV in it. Using the measured lifetime (20 s) and a hindrance factor HF = 104, as derived from the full energy α event (10.29 MeV) attributed to 287 Fl in [9], the authors calculated a full α decay energy for α 4 of E = 10.14 +0.09 −0.27 MeV. Using the same procedure they obtained a full α energy E = 10.19 +0.10 −0.28 MeV for the E = 2.31 MeV 'escape' event in [9]. The time differences ∆T(α 3 - α 4 ) and ∆T(α 4 - SF) resulted in lifetimes τ(α 4 ) = 20 +89 −9 s (T 1/2 = 14 +62 −6 s) and τ(SF) = 12 +56 −5 min (T 1/2 = 500 +233 −208 s) [97] and thus were in-line with the 'VASSILISSA data' for 287 Fl (E α = 10.29±0.02 MeV, T 1/2 = 5.5 +9.9 −2.1 s) and 283 Cn (T 1/2 = 308 +212 −89 s). This finding was seen as a 'mutual support' of the data, strengthening the (tentative) assignments of the chains in [9,97], although the authors of [97] could not give a reasonable explanation why these data were only seen in the VASSILISSA experiment and could not be reproduced in other laboratories. As it was shown in [115] that the decay chain in [97] represents just a sequence of background events, it becomes clear that blinkered data analysis may lead to correlations between background events even if they are obtained in different experiments.
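The back-calculation of a 'full' α energy from a measured lifetime can be sketched by inverting a half-life systematics. The fragment below inverts the Viola-Seaborg relation used in the earlier sketch (not the Poenaru formula actually employed in [97]), so it illustrates the procedure but does not reproduce the published numbers exactly; the hindrance factor is taken over from the text as an input assumption:

    # Invert the Viola-Seaborg relation to estimate Q_alpha (and E_alpha)
    # from a measured half-life and an assumed hindrance factor.
    from math import log10

    A_VS, B_VS, C_VS, D_VS = 1.66175, -8.5166, -0.20228, -33.9069

    def q_alpha_from_halflife(z_parent, t_half_s):
        """Solve log10(T) = (a*Z+b)/sqrt(Q) + c*Z + d for Q (in MeV)."""
        denom = log10(t_half_s) - C_VS * z_parent - D_VS
        return ((A_VS * z_parent + B_VS) / denom) ** 2

    tau_measured = 20.0            # s, measured lifetime of the escape event 'alpha4'
    t_half = tau_measured * 0.693  # convert mean lifetime to half-life
    hindrance = 104.0              # hindrance factor assumed in [97]

    q = q_alpha_from_halflife(114, t_half / hindrance)   # 287Fl has Z = 114
    e_alpha = q * (287 - 4) / 287                        # recoil correction
    print(f"estimated E_alpha ~ {e_alpha:.2f} MeV")      # ~10.3 MeV with these inputs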
Alpha decay chain of N-Z = 59 nuclei
Within the so far assigned superheavy nuclei, the decay properties of the N-Z = 59 nuclei are of specific importance and interest, as the acceptance of the discovery of elements Z = 117 (tennessine) and Z = 115 (moscovium) is based on them. The heaviest known nucleus of that chain is 293 Ts; its assignment was criticized and a different interpretation of the results was suggested [124]. This issue will be illuminated in the following discussion. A final solution of the problem, however, cannot be presented on the basis of the available data. The so far published decay data for the members of the N-Z = 59 chain [27] are listed in table 3. To obtain more detailed information on the decay properties of the isotopes assigned to the N-Z = 59 chain, a closer inspection of the data listed in table 3 was performed; specifically, the results from the 'entry' into the chain at 293 Ts (reaction 48 Ca + 249 Bk) and from the 'entry' into the chain at 289 Mc (reaction 48 Ca + 243 Am) were compared. The resulting α spectra are shown in fig. 13. Before starting a detailed discussion it seems, however, necessary to stress some items that could cause confusion. First, as discussed in sect. 4.4, the (individual) α energies measured in the experiments performed at the different laboratories (DGFRS Dubna, TASCA Darmstadt, BGS Berkeley) vary considerably, possibly due to the calibration procedures applied. Differences of ∆E = 76 keV were found between the DGFRS and BGS results, and of ∆E = 41 keV between the DGFRS and TASCA results for 272 Bh; as ≈90 % of the data in table 3 are from DGFRS or TASCA, an uncertainty of ≈50 keV in the absolute values may be considered. As will be shown, the energy differences between the different production mechanisms are much larger and thus cannot be attributed to calibration discrepancies. Part of the α energies were obtained as sum events from the 'stop' and 'box' detectors, thus suffering from worse accuracy due to the worse energy resolution. A few decays (three events) from the DGFRS experiments were registered as 'box-only' events (i.e. the α particle escaped the 'stop' detector with an energy loss below the registration threshold). That means the measured energies are too low by a few hundred keV. Due to the low number of these events this feature also cannot be the reason for the differences in the energy distributions. Under these circumstances we can tentatively distinguish the following decay chains. This at first glance somewhat puzzling decay pattern can be qualitatively explained by the existence of low lying, long-lived isomeric states in 289 Mc, 285 Nh and 281 Rg decaying by α emission or spontaneous fission. The existence of such states is due to Nilsson states with low and high spins lying closely together at low excitation energies; the decay of such states by internal transitions is hindered by the large spin differences, thus the lifetimes become long and α decay can compete with internal transitions. That is a well known phenomenon in the transfermium region. In direct production both states are usually populated; in production by α decay the population of the states depends on the decay of the mother nucleus. If there are two long-lived isomeric states in the mother nucleus, two long-lived states in the daughter nucleus may also be populated, see e.g. the decay 257 Db → 253 Lr [125]; if there is only one α emitting state populated by the deexcitation process, two cases are possible: either only one state in the daughter nucleus is populated, as e.g.
in the decay 261 Sg → 257 Rf [126], or both long-lived states in the daughter nucleus may be populated, as known for 261 Bh → 257 Db [54]. Under these circumstances the puzzling behavior can be understood in the following way: decay of 293 Ts (...). In addition there might be other contributions, e.g. the chain marked as D4 in Table 3, which does not fit to the other ones. Also the very short chains in Table 3 consisting of α → SF seemingly may have a different origin. The halflives of the α events, T 1/2 (α) = 0.069 +0.069 −0.23 s, and of the fission events, T 1/2 (SF) = 0.3 +0.30 −0.1 s, are lower than the values for 289 Mc(2) and 285 Nh(2), but considering the large uncertainties they are not in disagreement. So they could indicate a fission branch of 285 Nh in the order of b SF ≈ 0.4. We remark here also the short half-life of T 1/2 = 1.8 +1.8 −0.6 s of the α events assigned to 281 Rg, which is considerably shorter than the half-life of the fission events. Despite this fact we tentatively assign them to 281 Rg(1). The joint analysis of the data presented for the decay chains interpreted to start either from 293 Ts or from 289 Mc seems to shed some light on the 'puzzling' decay data reported so far and suggests a solution. It should be noted, however, that the conclusions drawn here must be confirmed by more sensitive measurements before they can be finally accepted. (In the data tables, 'esc' denotes that the α particle escaped the 'stop' detector and only an energy loss signal was recorded [135].)
Discovery of element 113 -Alpha decay chain of 278 Nh
The first report on the discovery of element 113 was published by Oganessian et al. [127] in 2004. In an irradiation of 243 Am with 48 Ca performed at the DGFRS three decay chains were observed which were interpreted to start from 288 115; the isotope 284 113 of the new element 113 was thus produced as the α decay descendant of 288 115. The data published in 2004 were confirmed at DGFRS [66], at TASCA [67] and also later - after giving credit to the discovery of element 113 - at the BGS [68]. Nevertheless the fourth IUPAC/IUPAP Joint Working Party (JWP) did not accept these results as the discovery of element 113, as they concluded that the discovery profiles were not fulfilled: 'The 2013 Oganessian collaboration [66] ...'.
Fig. 18: α energies attributed to 266 Bh: a) from the decay chains attributed to 278 113 [135], b) production via 249 Bk( 22 Ne,5n) 266 Bh [136], c) production via 248 Cm( 23 Na,5n) 266 Bh [137,135], α energies from triple α correlations α 1 ( 266 Bh) - α 2 ( 262 Db) - α 3 ( 258 Lr), d) production via 248 Cm( 23 Na,5n) 266 Bh [137,135], α energies from double α correlations α 1 ( 266 Bh) - α 2 ( 262 Db or 258 Lr), e) production via 248 Cm( 23 Na,5n) 266 Bh [137,135], α energies from α - SF correlations, f) production via 243 Am( 26 Mg,3n) 266 Bh [138], g) production via 248 Cm( 23 Na,5n) 266 Bh, α( 266 Bh) - SF( 262 Db) correlations [144], h) production via 248 Cm( 23 Na,5n) 266 Bh, triple correlations α 1 ( 266 Bh) - α 2 ( 262 Db) - α 3 ( 258 Lr) [144].
Considering the steep decrease of the cross-sections by a factor of three to four per element, it did not seem straightforward to assume that this type of reaction would be the silver bullet to the SHE region (see fig. 16). Nevertheless, after the successful synthesis of element 112 in bombardments of 208 Pb with 70 Zn [86] it seemed straightforward to attempt to synthesize element 113 in the reaction 209 Bi( 70 Zn,n) 278 113. Being optimistic and assuming a drop in the cross section not larger than a factor of five, as observed for the step from element 110 to element 111 (see fig. 16), a cross section of some hundred femtobarn could be expected. First attempts were undertaken at SHIP, GSI, Darmstadt, Germany in 1998 [129] and in 2003 [130]. No α decay chains that could be attributed to start from an element 113 isotope were observed. Merging the projectile doses collected in both experiments an upper production cross section limit σ ≤ 160 fb was obtained [131]. More intensively this reaction was studied at the GARIS separator, RIKEN, Wako-shi, Japan. Over a period of nine years (from 2003 to 2012), with a complete irradiation time of 575 days, altogether three decay chains interpreted to start from 278 113 were observed [132,133,134,135]. The collected beam dose was 1.35×10 20 70 Zn ions, the formation cross-section was σ = 22 +20 −13 fb [134]. The chains are shown in fig. 17 and the data are presented in table 5. Chain 1 and chain 2 consist of four α particles and are terminated by a fission event, while chain 3 consists of six α decays. Already at first glance the large differences in the α energies of members assigned to the same isotope are striking, especially for the events α 2 ( 274 Rg) in chains 2 and 3 with ∆E = 0.68 MeV and α 4 ( 266 Bh) in chains 1 and 2 with ∆E = 0.69 MeV.
Although it is known that α-decay energies can vary in a wide range for odd-odd nuclei in the region of the heaviest nuclei, as was shown, e.g., for 266 Mt, where α energies were found to vary in the range E α = (10.456 - 11.739) MeV [76], the assignment of such different energies to the decay of the same isotope or the same nuclear level can be debated (see e.g. [11]). Specifically concerning the latter case it is known that in odd-odd (and also in odd-mass) nuclei often low lying isomeric states exist, which decay by α emission with energies and halflives similar to those of the ground state (see e.g. [60] for the cases of 258 Db, 254 Lr, and 250 Md). In the present case large α energy differences of ∆E > 0.1 MeV are evident for the corresponding members in all chains, as shown in table 6.
These differences are of specific importance for 266 Bh, which acts as an anchor point for identification of the chains. The observation of α decay of this isotope has been reported by several authors who produced it in different reactions: a) Wilk et al. [136] used the reaction 249 Bk( 22 Ne,5n) 266 Bh. They observed one event with an α energy of E α = 9.29 MeV. b) Morita et al. [135,137] used the reaction 248 Cm( 23 Na,5n) 266 Bh; they observed in total 32 decay chains, 20 of which were attributed (partly tentatively) to the decay of 266 Bh; four decay chains consisted of three α particles, assigned as decays α 1 ( 266 Bh) - α 2 ( 262 Db) - α 3 ( 258 Lr); four decay chains consisted of two α particles, interpreted as decays α 1 ( 266 Bh) - α 2 ( 262 Db or 258 Lr); twelve decay chains consisted of an α particle followed by a fission event, interpreted as α( 266 Bh) - SF( 262 Db) (possibly SF from 262 Rf, produced by EC decay of 262 Db); in the case of four α energies of E α < 9 MeV, the assignment was marked as 'tentative'. c) Qin et al. [138] used the reaction 243 Am( 26 Mg,3n) 266 Bh. They observed four decay chains which they assigned to start from 266 Bh. Evidently there is no real agreement for the α energies of 266 Bh; two of the three energies of 266 Bh from the 278 113 decay chains (fig. 18a) are outside the range of energies observed in direct production, which is specifically critical for chain 3, as it is not terminated by fission, but the α decay is followed by two more α events attributed to 262 Db and 258 Lr and thus is the anchor point for identification of the chain. Some agreement is obtained for the events from direct production followed by fission [137,135] (fig. 18e), the α energy in chain 1 (fig. 18a) (also followed by fission) and the results from Qin et al. [138] (fig. 18f), where two groups at (9.05 - 9.1) MeV and (8.9 - 9.0) MeV are visible. Note that in [137,135] the events at E α < 9.0 MeV followed by fission are only tentatively assigned to 266 Bh, while in [138] all 266 Bh α decays are followed by α decays.
Unclear is the situation of the events followed by α decays. As seen in figs. 18c, 18d, and 18f, there are already in the results from [137,135] discrepancies in the 266 Bh energies from triple correlations (fig. 18c) and double correlations (fig. 18d). In the triple correlations there is one event at E = 8.82 MeV, three more are in the interval E = (9.08 - 9.2) MeV, while for the double correlations all four events are in the range E = (9.14 - 9.23) MeV; tentatively merging the 266 Bh α energies from events followed by α decay we find six of eight events (75 per cent) in the range E = (9.14 - 9.23) MeV, while only one of twelve events followed by fission is observed in that region. In this energy range none of the events observed by Qin et al. [138] is found, which are all below 9.14 MeV; also none of the events from the decay of 278 113, and also not the event reported by Wilk et al. [136] (fig. 18b), is found there. To conclude, the α decay energies of 266 Bh reported from the different production reactions as well as from the different decay modes of the daughter products (α decay or SF/EC) vary considerably, so there is no real experimental basis to use 266 Bh as an anchor point for identification of the chain assumed to start at 278 113. Discrepancies are also found for the halflives. From the 278 113 decay chains a half-life of T 1/2 = 2.2 +2.9 −0.8 s is obtained for 266 Bh [135], while Qin et al. [138] give a value T 1/2 = 0.66 +0.59 −0.26 s. The discrepancy is already slightly outside the 1σ confidence interval. No half-life value is given from the direct production of Morita et al. [135]. The disagreement in the decay properties of 266 Bh reported by different authors renders the interpretation of the α decay chain (chain 3) quite difficult. It is therefore of importance to check the following α decays assigned to 262 Db and 258 Lr, respectively, as they may help to clarify the situation. In order to do so, it is required to review the reported decay properties of these isotopes and to compare the results with the data in chain 3. It should also be remarked here that the differences of the α energies attributed to 266 Bh followed by α decays or by SF in [135] indicate that the assignment of these events to the same isotope is not straightforward, at least not the assignment to the decay of the same nuclear level. In a previous data compilation [81] three α lines of E α1 = 8.45±0.02 MeV (i = 0.75), E α2 = 8.53±0.02 MeV (i = 0.16), E α3 = 8.67±0.02 MeV (i = 0.09) and a half-life of T 1/2 = 34±4 s are reported for 262 Db. More recent data were obtained from decay studies of 266 Bh [135,136,137,138] or from direct production via the reaction 248 Cm( 19 F,5n) 262 Db [139,140]. The results of the different studies are compared in fig. 19. The energy of the one event from the 278 113 decay chain 3 is shown in fig. 19a. The most extensive recent data for 262 Db were collected by Haba et al. [140]. They observed two groups of α-decay energies, one at E α = (8.40-8.55) MeV (in the following also denoted as 'low energy component') and another one at E α = (8.60-8.80) MeV (in the following also denoted as 'high energy component') (fig. 19g). Mean α energy values and intensities are E α = 8.46±0.04 MeV (i rel = 0.70±0.05) and E α = 8.68±0.03 MeV (i rel = 0.30±0.05). In [140] only one common half-life of T 1/2 = 33.8 +4.4 −3.5 s is given for both groups.
A re-analysis of the data, however, indicates different halflives: T 1/2 = 39 +6 −5 s for E α = (8.40 - 8.55) MeV and T 1/2 = 24 +6 −4 s for E α = (8.60 - 8.80) MeV. A similar behavior is reported by Dressler et al. [139]: two events at E α = (8.40 - 8.55) MeV and one event at E α = (8.60 - 8.80) MeV (see fig. 19f). Qin et al. [138] observed three events at E α = (8.40 - 8.55) MeV and one event at E α = 8.604 MeV, outside the bulk of the high energy group reported in [140] (see fig. 19e). A similar behavior is seen for the double correlations ( 262 Db - 258 Lr), with missing 266 Bh, from the reaction 23 Na + 248 Cm measured by Morita et al. [135] (see fig. 19d). Three of four events are located in the range of the low energy component, while for the triple correlations all four events are in the high energy group (see fig. 19c). This behavior seems somewhat strange as there is no physical reason why the α decay energies of 262 Db should be different for the cases where the preceding 266 Bh α decay is recorded or not recorded. It rather could mean that the triple ( 266 Bh → 262 Db → 258 Lr) and the double correlations ( 262 Db → 258 Lr) of [135] do not represent the same activities. The α decay energy of the one event observed by Wilk et al. [136] belongs to the low energy group (see fig. 19b). The one event from the decay chain attributed to start from 278 113 does not really fit to one of the groups. The energy is definitely lower than the mean value of the high energy group; an agreement with that group can only be postulated considering the large uncertainty (±60 keV) of its energy value (see fig. 19a). Halflives are T 1/2 = 44 +60 −16 s for the E α = (8.40 - 8.55) MeV component in [135], T 1/2 = 16 +7 −4 s for the E α = (8.55 - 8.80) MeV component, in agreement with the values of Haba et al. [140], and T 1/2 = 52 +21 −12 s for the SF activity, which is rather in agreement with that of the low energy component.
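Half-life values such as those quoted here are typically extracted from only a handful of measured correlation times. A minimal sketch of the usual maximum-likelihood procedure is given below; the decay times are invented numbers used only to show the mechanics, and the simple 1/√n log-normal error factor is an approximation to the exact small-sample limits of the Schmidt prescription commonly used in this field.

```python
import math

def half_life_from_events(decay_times_s):
    """Maximum-likelihood half-life from individual decay times (exponential decay).

    Returns (T_1/2, lower 1-sigma bound, upper 1-sigma bound). The bounds use the
    log-normal approximation with error factor exp(1/sqrt(n)); for very few events
    the exact asymmetric limits should be used instead.
    """
    n = len(decay_times_s)
    tau = sum(decay_times_s) / n          # MLE of the mean lifetime
    t_half = math.log(2.0) * tau
    error_factor = math.exp(1.0 / math.sqrt(n))
    return t_half, t_half / error_factor, t_half * error_factor

# hypothetical correlation times in seconds, for illustration only
times = [12.0, 31.0, 55.0, 20.0, 44.0]
t_half, low, high = half_life_from_events(times)
print(f"T1/2 = {t_half:.1f} s (1-sigma range {low:.1f} - {high:.1f} s)")
```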
To summarize: The assignment of the event α 5 in chain 3 in [135] to 262 Db is not unambiguos on the basis of its energy, in addition also its 'lifetime' τ = t α5 -t α4 = 126 s is about five times of the half-life of the high energy component of 262 Db observed in [140]. One should keep in mind, that the probability to observe a decay at times longer than five halflives is p <0.03.
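This probability argument can be made explicit; for an exponential decay the chance to observe an individual lifetime longer than t 0 is

P(t > t 0 ) = e^(−λ t 0 ) = 2^(−t 0 /T 1/2 ) ,

so with t 0 = 126 s and T 1/2 ≈ 24 s (the re-analysed value for the high energy component given above) one obtains P ≈ 2^(−5.25) ≈ 0.026, i.e. just below the 3 % quoted here.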
Observation of 258 Lr was first reported by Eskola et al. [141] and later by Bemis et al. [142]. The reported α energies and intensities slightly disagree [143]. The data are given in table 6; the energies given in [142] are 30-50 keV lower than those reported in [141]. More recent data were obtained from decay studies of 266 Bh and 262 Db [135,136,137,138,139,140]. The results are compared in fig. 20. The quality of the data is lower than that for 262 Db, but not less confusing. Haba et al. obtained a broad energy distribution in the range E α = (8.50 - 8.75) MeV with the bulk at E α = (8.60 - 8.65) MeV, having a mean value E α = 8.62±0.02 MeV (see fig. 20g), for which a half-life of T 1/2 = 3.54 +0.46 −0.36 s is given. Dressler et al. [139] observed all events at E α ≤ 8.60 MeV (see fig. 20f), but obtained a similar half-life of T 1/2 = 3.10 +3.1 −1.0 s. One event each within the energy range of the Haba data was observed by Qin et al. [138] (fig. 20e) and Wilk et al. [136] (fig. 20b). Contrary to the energies of 266 Bh and 262 Db, the α energies for 258 Lr from the 266 Bh decay study of Morita et al. [135] are quite in agreement for the triple (fig. 20c) and double (fig. 20d) correlations: a bulk of five events at a mean energy of E α = 8.70±0.01 MeV, two further events at a mean energy E α = 8.59±0.02 MeV, and a single one at E α = 8. ... The α decay chains reported in [135,136,138,144] are shown; none of the fourteen chains agrees with any other one. This feature may indicate the complicated α decay pattern of these isotopes, but it makes the assignment to the same isotope speculative. In other words: the 'subchain' 266 Bh α → 262 Db α → 258 Lr α → of the decay chain interpreted to start from 278 113 does not agree with any other so far observed α decay chain interpreted to start from 266 Bh. The essential item, however, is that this triple correlation was regarded as the key point for the first identification of element 113, to approve the discovery of this element and give credit to the discoverers. But this decision is based rather on weak probability considerations than on firm experimental facts. The only solid pillar is the agreement with the decay properties reported for 258 Lr, which might be regarded as rather weak. In other words, the assignment of the three decay chains to the decay of 278 Nh is probable, but not firm. It should be recalled that in the case of element 111 the JWP report from 2001 stated: 'The results of this study are definitely of high quality but there is insufficient internal redundancy to warrant certitude at this stage. Confirmation by further results is needed to assign priority of discovery to this collaboration' [79]. So it seems strange that in a similar situation, as evidently here, such concerns were not expressed.
A new decay study of 266 Bh was reported recently by Haba et al. [144], using the same production reaction as in [135]. Alpha decays were observed correlated to fission events, assigned to the decay of 262 Db, and α decay chains 266 Bh α → 262 Db α → or 266 Bh α → 262 Db α → 258 Lr α →. The α spectrum of decays followed by fission is shown in fig. 18g, that of events followed by α decays of 262 Db in fig. 18h. Evidently, in correlation to fission events a concentration of events ('peak') is observed at E α = 8.85 MeV, not observed in [135], while in the range E α = 8.9-9.0 MeV, where in [137] a peak-like structure was observed, Haba et al. registered only a broad distribution. Also in correlation to α decays only a broad distribution without indication of a peak-like concentration in the range E α = 8.8-9.4 MeV is observed. However, two α decays at E ≈ 9.4 MeV were now reported, close to the α energy of 266 Bh in the 278 Nh decay chain 3 [135]. A remarkable result of Haba et al. [144] is the half-life of 266 Bh; values of T 1/2 = 12.8 +5.2 −2.9 s are obtained for events correlated to SF, and T 1/2 = 7.0 +3.0 −1.6 s for events correlated to α decays. These values are compatible within their uncertainties, and a common half-life of T 1/2 = 10.0 +2.6 −1.7 s is given in [144], despite the discrepancies in the α energies. That value is, however, significantly larger than the results from previous studies [135,138]. For 262 Db a similar α decay energy distribution is observed in [144] as in [140], as seen in figs. 19g and 19h, but again not in agreement with the results from [137]. Interestingly, for the 266 Bh decays followed by 262 Db α decays in the higher energy group a) the 266 Bh α energies lie within the 'bulk' of the α energy distribution of 266 Bh and b) one obtains a half-life of T 1/2 = 3.4 +4.7 −1.3 s, lower than the value of 10 s extracted from all events. This can be regarded as a hint for the existence of two long-lived states in 266 Bh decaying by α emission, resulting in two essential decay branches 266 Bh(1) → 262 Db(1) and 266 Bh(2) → 262 Db(2). The α spectrum for 258 Lr measured in [144] is shown in fig. 20h. Essentially it is in line with the one obtained in [140]. To summarize: the new decay study of 266 Bh delivers results not really in agreement with those from previous studies concerning the decay energies and delivers a considerably longer half-life for that isotope. So the results do not remove the concerns on the decay chains interpreted to start from 278 Nh. But it delivers some interesting features: the α decays in the range (8.38-8.55) MeV and the SF events following α decay of 266 Bh seemingly are due to the decay of the same state in 262 Db; the fission activity, however, may be due to 262 Rf produced by EC of 262 Db. The α decays of 262 Db of E = (8.55-8.80) MeV possibly are from the decay of a second long-lived level. There is also strong evidence that this level is populated essentially by α decay of a long-lived level in 266 Bh, different from that populating the level in 262 Db decaying by α particles in the range (8.38-8.55) MeV. Further studies are required to clarify this undoubtedly interesting feature. Discussing the items above one has of course to emphasize the different experimental techniques used, which may influence the measured energies. An important feature are the different detector resolutions, which determine the widths of the distributions. So comparison of energies might be somewhat 'dangerous'. Another item is energy summing between α particles and CE. In the experiments of Wilk et al. [136] and Morita et al. [135], the reaction products were implanted into the detector after in-flight separation. Qin et al.
[138], Dressler et al. [139] and Haba et al. [140,144] collected the reaction products on the detector surface or on a thin foil between two detectors. The latter procedure reduces the efficiency for energy summing of α particles and CE considerably. This could be the reason for the 'shift' of the small 'bulk' of the 266 Bh α energy distribution from ≈8.85 MeV [144] in fig. 18g to ≈8.95 MeV [135] in fig. 18e. This interpretation might be speculative, but it clearly shows that such effects render consistency checks of the data more difficult if different experimental techniques are applied.
(Exemplified) Cross-checks with Nuclear Structure Theory
In the following some discussion on selected decay and nuclear structure properties will be presented.
Alpha-decay energies / Q-alpha values; even Z, odd-A, odd-odd nuclei
Alpha-decay energies provide some basic information about nuclear stability and properties. Discussing these properties one strictly has to distinguish two cases: a) α decay of even-even nuclei, and b) α decay of isotopes with odd proton and/or odd neutron numbers. In even-even nuclei α transitions occur with highest intensities between the I π = 0 + ground-states of mother and daughter isotopes. Still, in the region of strongly deformed heaviest nuclei (Z ≥ 90) notable population with relative intensities of (10-30) % is observed for transitions into the I π = 2 + level of the ground-state rotational band [81], while band members of higher spins (4 + , 6 + etc.) are populated only weakly, with relative intensities of <1 %. Under these circumstances the α line of highest intensity represents the Q-value of the transition and is thus a measure for the mass difference of the mother and the daughter nucleus. It should be kept in mind, however, that only in cases where the mass of the daughter nucleus is known can the Q-value be used to calculate the mass of the mother nucleus, and only in those cases can α-decay energies be used to 'directly' test nuclear mass predictions. Nevertheless, already the mass differences, i.e. the Q-values, can be used for qualitative assessments of those models. Particularly, as the crossing of nucleon shells is accompanied by a strong local decrease of the Q α -values, the existence, and to some extent also the strength
of such shells can be verified by analyzing systematics of Q α -values. That feature is displayed in fig. 22, where experimental Q α values for the known isotopes of even-Z elements Z ≥ 104 are compared with the results of two (widely used) mass predictions based on the macroscopic-microscopic approach, the one reported by R. Smolanczuk and A. Sobiczewski [17] (fig. 22a), and the one reported by P. Möller et al. [18] (fig. 22b); in the case of isotopes with odd neutron numbers, the Q α -value was calculated from the highest reported decay energy. The neutron shells at N = 152 and N = 162, indicated by the black dashed lines, are experimentally and theoretically verified by the local minima in the Q α -values. But significant differences in the theoretical predictions are indicated: those of [17] reproduce the experimental data in general quite fairly, while the agreement of those from [18] is significantly worse.
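For clarity, the kinematic relations behind this statement are the standard ones (written here explicitly; they are not specific to any of the cited models):

Q α = [ M(Z, A) − M(Z−2, A−4) − M( 4 He) ] c 2 ,    E α = Q α · (A−4)/A ,

so a measured ground-state to ground-state α energy, together with a known daughter mass, fixes the mass of the mother nucleus; for A ≈ 290 the recoil correction Q α − E α amounts to only about 1.4 % of Q α .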
Alpha decay between states of different spins is hindered. Quantitatively the hindrance can be expressed by a hindrance factor HF, defined as HF = T α (exp) / T α (theo), where T α (exp) denotes the experimental partial α-decay half-life and T α (theo) the theoretical one. To calculate the latter a couple of (mostly empirical) relations are available. In the following we will use the one proposed by D.N. Poenaru [98] with the parameter modification suggested by Rurarz [99]. This formula has been proven to reproduce experimental partial α-decay halflives of even-even nuclei in the region of superheavy nuclei within a factor of two [100]. A semi-empirical relation for the hindrance due to angular momentum change was given in 1959 by J.O. Rasmussen [145]. The change of the transition probability P 0 (no angular momentum change) through the barrier, which can be equated with the inverse hindrance factor, was given as

P L / P 0 = exp[ −2.027 · L(L+1) · Z^(−1/2) · A^(−1/6) ] ,

with L denoting the change of the angular momentum and Z and A the atomic and mass numbers of the daughter nucleus. In the range of actinide nuclei where data are available (Z ≈ 90 - 102) one expects hindrance factors HF ≈ (1.6 - 1.7) for ∆L = 2 and HF ≈ (5 - 6) for ∆L = 4, with a slight decrease at increasing A and Z. The experimental hindrance factors for α decay into the I π = 2 + and I π = 4 + levels for the known cases in the actinide region Z ≥ 90 are shown in fig. 23 (Fig. 23: Hindrance factors for decays into I π = 2 + and I π = 4 + daughter levels of even-even actinide isotopes Z ≥ 90; alpha-decay data are taken from [81]). They exhibit a completely different behavior: for the ∆L = 2 transitions the experimental hindrance factors are comparable, but increase with increasing A and Z. For the ∆L = 4 transitions the hindrance factors are considerably larger and a maximum is indicated for curium isotopes in the mass range A = (240 - 246). Interestingly this behavior can be related to the ground-state deformation, as shown in figs. 24 and 25.
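As a numerical illustration of the Rasmussen relation quoted above (the daughter nucleus chosen here is an arbitrary example): for a daughter with Z = 96 and A = 244 one obtains HF(∆L = 2) = exp[2.027 · 6 · 96^(−1/2) · 244^(−1/6)] ≈ 1.6 and HF(∆L = 4) = exp[2.027 · 20 · 96^(−1/2) · 244^(−1/6)] ≈ 5.2, i.e. just the ranges quoted above for the actinide region.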
In fig. 24 the hindrance factors for the I π = 0 + → I π = 2 + transitions are plotted as a function of the quadrupole deformation parameter β 2 (taken from [18]). Evidently a strong increase of the hindrance factor with increasing quadrupole deformation is observed. In fig. 25 the hindrance factors for the I π = 0 + → I π = 4 + transitions are plotted as a function of the hexadecapole deformation parameter β 4 (taken from [18]). Here, a maximum at a deformation parameter β 4 ≈ 0.08 is indicated. This suggests a strong dependence of the hindrance factor on nuclear deformation, and measuring transitions into rotational members of the ground-state band may already deliver valuable information about the ground-state deformation of the considered nuclei. Some attempts to calculate the transition probability into the ground-state rotational band members have been undertaken by V. Yu. Denisov and A.A. Khudenko [146] as well as by H. Hassanabadi and S.S. Hosseini [147].
In both papers the α halflives were calculated in the 'standard' way as T 1/2 = ln2 / (ν P), with ν denoting the frequency of assaults on the barrier and P the penetration probability through the potential barrier, calculated using the semiclassical WKB method. In [147] the α-nucleus potential was parameterized as a polynomial of third order for r ≤ C t and as the sum of Coulomb V C , nuclear V N and centrifugal V l = ℏ²l(l+1)/(2µr²) potentials for larger distances. For V C and V l 'standard expressions' were used, for V N the 'proximity potential' of Blocki et al. [148]. C t is the touching configuration of daughter nucleus (d) and α particle (α), C t = C d + C α , with C d and C α denoting the Süssmann central radii (see [147] for details). V.Yu. Denisov and A.A. Khudenko [146] use the 'unified model for α decay and α capture' (UMADAC). Their potential represents the sum of a 'standard' centrifugal potential V l (see above), a Coulomb potential V C including quadrupole (β 2 ) and hexadecapole (β 4 ) deformations, and a nuclear potential V N of Woods-Saxon type (see [146] for further details). Their results for the californium and fermium isotopes are compared with the experimental values in fig. 26. Obviously the calculations of Denisov and Khudenko do not reproduce the experimental data well for both californium and fermium isotopes; the calculated (relative) intensities for the 0 + → 0 + transitions (fig. 26c) are too low and hence too high values for the 0 + → 2 + transitions (fig. 26c) and the 0 + → 4 + transitions (fig. 26a) are obtained. The latter are even roughly an order of magnitude higher than the experimental data for the respective transition. Quite fair agreement with the experimental data is evident for the calculations of Hassanabadi and Hosseini (blue lines and symbols).
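For completeness, the WKB penetration probability entering this expression is commonly written as (a standard semiclassical formula, not quoted verbatim from [146,147]):

P = exp[ −(2/ℏ) ∫ from R in to R out √( 2µ (V(r) − Q α ) ) dr ] ,

with R in and R out the inner and outer classical turning points, V(R in ) = V(R out ) = Q α , and µ the reduced mass of the α-particle-daughter system.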
In odd-mass nuclei the situation is completely different, as the ground-states of mother and daughter nuclei usually differ in spin and often also in parity. So ground-state to ground-state α decays are usually hindered. Hindrance factors significantly depend on the spin difference, as well as on a possible parity change and/or a spin flip. For odd-mass nuclei an empirical classification of the hindrance factors into five groups has been established (see e.g. [149]). Hindrance factors HF < 4 characterize transitions between the same Nilsson levels in mother and daughter nuclei and are denoted as 'favoured transitions'. Hindrance factors HF = (4 - 10) indicate a favourable overlap between the initial and final nuclear state, while values HF = (10 - 100) point to an unfavourable overlap, but still parallel spin projections of the initial and final state. Factors HF = (100 - 1000) indicate a parity change and still parallel spin projections, while HF > 1000 means a parity change and a spin flip. Thus hindrance factors already point to differences in the initial and final states, but on their own do not allow for spin and parity assignments. It is, however, known that in even-Z odd-mass nuclei the nuclear structure and thus the α decay patterns are similar along the isotone lines (see e.g. [150]), while in odd-Z odd-mass nuclei this feature is evident along the isotope lines (see e.g. [151]). So, in certain cases, based on empirical relationships tentative spin and parity assignments can be established, as done e.g. in suggesting an α decay pattern for 255 No by P. Eskola et al. [152], which later was confirmed by α-γ spectroscopy measurements [153,154]. Another feature in the case of odd-mass nuclei is the fact that competition between structural hindrance and Q-value hindrance may lead to complex α-decay patterns. Nilsson levels identical to the ground-state of the mother nucleus may be excited states located at several hundred keV in the daughter nuclei; e.g., in a recent decay study of 259 Sg it was shown that the 11/2 − [725] Nilsson level, assigned to the ground-state in this isotope, is located at E * ≈ 600 keV in the daughter nucleus 255 Rf [155]. Therefore the advantage of a low hindrance factor may be cancelled by a lower barrier transmission probability due to a significantly lower Q-value compared to the ground-state to ground-state transition. Consequently α transitions with moderate hindrance factors into lower lying levels may have similar or even higher intensities than the favoured transition, as is the case in the above mentioned examples 255 No and 259 Sg [163].
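The empirical classification quoted above can be encoded in a small helper for quick orientation; the group boundaries are simply those given in the text (after [149]):

```python
def alpha_hindrance_class(hf):
    """Empirical classification of alpha-decay hindrance factors for odd-mass nuclei,
    using the group boundaries quoted in the text (after ref. [149])."""
    if hf < 4:
        return "favoured transition (same Nilsson level in mother and daughter)"
    if hf < 10:
        return "favourable overlap of initial and final state"
    if hf < 100:
        return "unfavourable overlap, parallel spin projections"
    if hf < 1000:
        return "parity change, parallel spin projections"
    return "parity change and spin flip"

print(alpha_hindrance_class(30))   # -> "unfavourable overlap, parallel spin projections"
```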
A drawback of many recent α decay studies of odd-mass nuclei in the transfermium region was the fact that the ground-state to ground-state transition could not be clearly identified and thus the 'total' Q α value could not be established. Another difficulty in these studies was the existence of isomeric states in several nuclei, also decaying by α emission and having halflives similar to those of the ground-state, as in the cases of 251 No [63] or 257 Rf [156]; in early studies ground-state decay and isomeric decay could not be disentangled. Enhanced experimental techniques, applying also α-γ spectroscopy, have widely overcome that problem in the transfermium region. An illustrative example is the N-Z = 49 line, where, based on the directly measured mass of 253 No [157] and decay data of 253 No [65], 257 Rf, 261 Sg [126], 265 Hs [158] and 269 Ds [44], experimental masses could be determined up to 269 Ds and could serve for a test of theoretical predictions [18,159] and empirical evaluations [160], as shown in fig. 27. The masses predicted by Möller et al. [18] agree with the experimental values within ≈0.5 MeV up to Z = 106, while towards Z = 110 ( 269 Ds) deviations rapidly increase up to nearly 2 MeV. A similar behavior was observed for the even-even nuclei of the N-Z = 50 line [100], which was interpreted as a possible signature for a lower shell effect at N = 162.
Q α values as signatures for nuclear shells
Historically, evidence for nuclear shells was first found from the existence of specifically stable nuclei at certain proton and neutron numbers (Z, N = 2, 8, 20, 28, 50, 82 and N = 126), which were denoted as 'magic'. Experimental signatures were, e.g., strong kinks in the 2p- or 2n-binding energies at the magic numbers and, on the basis of enhanced nuclear decay data, also local minima in the Q α values. The existence of nuclear shells was theoretically explained by the nuclear shell model [1,2], which showed large energy gaps in the single particle levels at the 'magic' numbers, which were equated with 'shell closures'. This item was the basis for the prediction of 'superheavy' elements around Z = 114 and N = 184 when the nuclear shell model was extended into the region of unknown nuclei far above Z = 82 and N = 126 [3,4]. As the shell gap is related to a higher density of single particle levels below the Fermi level, compared to the nuclear average (expected e.g. from a Fermi gas model), and a lower density above the Fermi level, the large energy gaps at the magic numbers go hand in hand with large shell correction energies, leading to the irregularities in the 2p-, 2n-separation energies and in the Q α values.
In between Z = 82, N = 126 and Z = 114, N = 184 a wide region of strongly deformed nuclei exists. Calculations (see e.g. [161]) resulted in large shell gaps at N = 152 and Z = 100. Later theoretical studies in addition showed a region of large shell correction energies in between N = 152 and N = 184 [22,23,24,25], the center of which is presently set at N = 162 and Z = 108. While the (deformed) nuclear shell at N = 152 is well established on the basis of the Q α -values, as seen from fig. 22, and there is, despite scarce data, strong evidence for a shell at N = 162, the quest for a shell closure at Z = 100 is still open. It was pointed out by Greenlees et al. [162] that their results on the nuclear structure investigation of 250 Fm are in line with a shell gap at Z = 100, but 2p-separation energies and Q α -values do not support a shell closure. The item is shown in fig. 28. On the left hand side Q α values and 2p-binding energies (S 2p ) are plotted for three isotone chains (N = 124, 126, 128) around Z = 82. In all three cases a strong increase in the Q α values and a strong decrease in the S 2p values is observed from Z = 82 to Z = 84. On the right hand side Q α values and S 2p values are plotted around Z = 100 for N = 150, 152, and 154. Here just a straight increase for both is observed from Z = 94 to Z = 106. Alternatively the so-called 'shell-gap parameter', defined as the difference of the 2n (δ 2n ) or 2p (δ 2p ) binding energies, is used to characterize nuclear shells. In the present case the resulting values show that not only is the location of the 'shell' not unambiguous, but also that the 'strength' is much lower than at Z = 82. So it does not seem justified to speak of a proton shell (or a 'magic' number) at Z = 100 as claimed recently [49].
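The shell-gap parameters mentioned here are commonly defined via the two-nucleon separation energies (standard definitions, stated for clarity):

δ 2n (Z, N) = S 2n (Z, N) − S 2n (Z, N+2) ,    δ 2p (Z, N) = S 2p (Z, N) − S 2p (Z+2, N) ,

so that a pronounced maximum of δ 2n (δ 2p ) at a given neutron (proton) number signals a shell closure.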
Electron Capture decays
The analysis of α decay chains from SHN produced in reactions of 48 Ca with actinide targets so far acted on the assumption that the chains consist of a sequence of α decays and are finally terminated by spontaneous fission [27]. The possibility that one of the chain members could undergo EC decay was not considered. Indeed, EC decay of superheavy nuclei has been only little investigated so far. Mainly this is due to the technical difficulties of detecting EC decay at the very low production rates of the isotopes. Consequently, only very recently has EC decay been investigated successfully in the transactinide region, for the cases of 257 Rf [164] and 258 Db [100]. Two ways of identifying EC decay turned out to be successful: a) measuring delayed coincidences between K X-rays and α decay or spontaneous fission of the EC daughter, and b) measuring delayed coincidences between implanted nuclei and conversion electrons (CE) from the decay of excited states populated by the EC, or delayed coincidences between CE and decays (α decay or spontaneous fission) of the EC daughter. The latter cases, however, require the population of excited levels decaying by internal conversion, which is not necessarily the case. Evidence for EC occurring within the decay chains of SHE is given by the termination of the decay chains of odd-odd nuclei by spontaneous fission. Since spontaneous fission of odd-odd nuclei is strongly hindered, it can be assumed that it may not be the odd-odd nucleus that undergoes fission, but the even-even daughter nucleus, produced by EC decay [165]. The situation is, however, quite complicated. To illustrate this, we compare in fig. 29 the experimental (EC/β + ) halflives of lawrencium and dubnium isotopes with recently calculated [166] EC halflives.
In general the agreement between experimental and calculated values is better for the dubnium isotopes, specifically for A ≤ 262, than for the lawrencium isotopes. Evidently, however, the disagreement increases when approaching the line of beta stability. The experimental EC halflives are up to several orders of magnitude higher than the theoretical values, which may lead to the assumption that direct spontaneous fission of the odd-odd nuclei is indeed observed. So there is a lot of room for speculation. In this context the difficulties in drawing also some 'empirical' conclusions shall be briefly discussed. So far, only in two cases, 260 Md and 262 Db, has observation of spontaneous fission of an odd-odd isotope been reported; 262 Db seems, however, a less certain case (see discussion in [167]). In table 10 'fission' halflives of some selected odd-mass and odd-odd nuclei in the range Z = 101 - 105 are compared (Tab. 10: the 'hindrance factor' HF given there means the ratio of the fission halflives of the odd-odd nucleus and its neighbouring odd-mass nuclei, see text). A more reliable hindrance factor would be the
ratio of the experimental fission half-life and an 'unhindered' fission half-life, defined as the geometric mean of those of the neighbouring even-even isotopes (see [167] for a more detailed discussion); but to estimate such hindrance factors reliably, the spontaneous fission halflives of the surrounding even-even nuclei have to be known.
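Written out for an odd-N nucleus with even Z (so that its neighbouring isotopes are even-even), this reads (notation introduced here only for illustration):

T sf^unh (Z, N) = [ T sf (Z, N−1) · T sf (Z, N+1) ]^(1/2) ,    HF sf = T sf^exp (Z, N) / T sf^unh (Z, N) .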
In the region of 266,268,270 Db only for one even-even isotope, 266 Sg, is the fission half-life known, T sf = 58 s, while a theoretical value of T sf = 0.35 s was reported [168], which is lower by a factor of ≈165. Under these circumstances it does not make much sense to use theoretical values to estimate hindrance factors for spontaneous fission. Therefore a final decision whether the terminating odd-odd nuclei may fission directly must be left to future experiments. Techniques to identify EC decay have been presented at the beginning of this section. But it has to be kept in mind that the identification mentioned was performed for isotopes with production cross sections of some nanobarn, while in the considered SHE region production rates are roughly three orders of magnitude lower. So technical efforts to increase production rates and detection efficiencies are required to perform successful experiments in that direction. From the physics side such experiments may cause big problems, as seen from fig. 30.
Fig. 35: a) experimental low lying Nilsson levels in odd-mass einsteinium isotopes (data taken from [151]); b) results of HFB-SLy4 calculations for odd-mass einsteinium isotopes (data taken from [172]); c) results of macroscopic-microscopic calculations for odd-mass einsteinium isotopes [174]; d) (upper panel) energy differences between the 7/2 − [514] and 7/2 + [633] Nilsson levels, (lower panel) energy difference between the 7/2 + [633] bandhead and the 9/2 + rotational band member; e) (upper panel) quadrupole deformation parameters β 2 for odd-mass einsteinium isotopes [18], (lower panel) hexadecapole deformation parameters β 4 for odd-mass einsteinium isotopes [18].
Spontaneous fission
Spontaneous fission is believed to finally terminate the chart of nuclei towards increasing proton numbers Z. The strong shell stabilization of nuclei in the vicinity of the spherical proton and neutron shells Z = 114 or Z = 120 and N = 172 or N = 184 leads also to high fission barriers and thus long fission halflives. Qualitatively these expectations are in line with the experimental results. For all nuclei Z > 114 so far only α decay was observed, while for Z = 112 - 114 spontaneous fission was reported only for five nuclei, 284,286 Fl and 282,283,284 Cn. The spontaneous fission halflives of the even-even nuclei 284,286 Fl, 282,284 Cn, 280 Ds agree within two orders of magnitude, those for 284,286 Fl even within one order of magnitude, with the predictions of R. Smolanczuk et al. [168], whose calculations also quite fairly reproduce the halflives of the even-even isotopes of rutherfordium (Z = 104), seaborgium (Z = 106), and hassium (Z = 108). These results indicate that the expected high stabilization against spontaneous fission in the vicinity of the spherical proton and neutron shells is indeed present. For a further discussion of these items we refer to the review paper [167].
Systematics in nuclear structure -odd-mass einsteinium isotopes
Detailed information on the nuclear structure of the heaviest nuclei provides a wide field for testing nuclear models with respect to their predictive power. Presently the situation, however, is not very satisfying, for at least three major reasons: a) 'detailed' decay studies using α-γ spectroscopy are essentially only possible for nuclei with Z ≤ 107 due to low production rates; b) for many isotopes only very few Nilsson levels have been identified, while the assignment is partly only tentative; c) agreement between experimental data and results from theoretical calculations is in general rather poor. In [150] experimental data are compared with results from theoretical calculations for N = 151 and N = 153 isotones of even-Z elements in the range Z = 94-106. Agreement in the excitation energies of the Nilsson levels is often not better than a few hundred keV, and also the experimentally established ordering of the levels is often not reproduced by the calculations. Thus, for example, the existence of the low lying 5/2 + [622] isomers in the N = 151 isotones is not predicted by the calculations. These deficiencies, on the other hand, make it hard to trust predictions of properties of heavier nuclei by these models. In this study the situation is illustrated for the case of the odd-mass einsteinium isotopes (fig. 35). In fig. 35b the results from a self-consistent Hartree-Fock-Bogoliubov calculation using the SLy4 force (HFB-SLy4) are presented (data taken from [172]), in fig. 35c the results from a macroscopic-microscopic calculation [174]. The HFB-SLy4 calculations only predict the ground-state of 253 Es correctly; for 251 Es they result in 7/2 + [633] as for 253 Es. For the lighter isotopes the ground-state is predicted as 1/2 − [521], while the 3/2 − [521] level, for which strong experimental evidence exists that it is the ground-state or located close to the ground-state, is located at E * ≈ 400 keV, except for 253 Es. The macroscopic-microscopic calculations, on the other hand, predict 7/2 + [633] as a low lying level but the 3/2 − [521] one in an excitation energy range of E * ≈ (400-600) keV. As noted, the energy difference between the 7/2 − [514] and 7/2 + [633] levels gives some information about the size of the shell gap. Indeed the experimental energy difference is lower than predicted by the HFB-SLy4 calculations (typically ≈400 keV) and by the macroscopic-microscopic calculations (typically ≈600 keV), as seen from figs. 35a - 35c, which hints at a lower shell gap than predicted. Indeed this could explain the non-observation of a discontinuity in the two-proton binding energies and the Q α -values when crossing Z = 100 (see sect. 7.2). Two more interesting features are evident: in figs. 35d and 35e (upper panels) the energy difference ∆E = E(7/2 − [514]) - E(7/2 + [633]) is compared with the quadrupole deformation parameter β 2 , while in the lower panels the energy difference of the 7/2 − [514] bandhead and the 9/2 − rotational band member is compared with the hexadecapole deformation parameter β 4 , both taken from [18]. Both the experimental energy differences ∆E = E(7/2 − [514]) - E(7/2 + [633]) (not so evident in the calculations) and the β 2 values show a pronounced maximum at N = 152, while both the energy differences E(9/2 − ) - E(7/2 − ) and the β 4 values decrease with increasing mass number or neutron number, respectively.
7.6 Nuclear structure predictions for odd-odd nuclei - exemplified for 250 Md and 254 Lr
Predictions of level schemes in heaviest odd-odd nuclei are scarce so far.
Only for a couple of cases have calculations been performed so far. Thus we will discuss here only two cases, 250 Md and 254 Lr, for which new results have recently been reported [60]. The ground-state of 250 Md was predicted by Sood et al. [175] as K π = 0 − , and a long-lived isomeric state with spin and parity K π = 7 − , expected to decay primarily by α emission or electron capture, was predicted at E * = 80±30 keV. Recently a long-lived isomeric state at E * = 123 keV was identified [60], in quite good agreement with the calculations. However, no spin and parity assignments have been made for the ground state and the isomeric state. The other case concerns 254 Lr. Levels at E * < 250 keV were recently calculated on the basis of a 'Two-Quasi-Particle-Rotor-Model' [176]. The results are shown in fig. 36 (Fig. 36: predicted (a) [176] and (b) experimentally (tentatively) assigned [60] low lying levels of 254 Lr). The ground-state is predicted as K π = 1 + and an isomeric state K π = 4 + is predicted at E * ≈ 75 keV. Recently an isomeric state at E * = 108 keV was identified in 254 Lr. Tentative spin and parity assignments are, however, different: the ground-state was assigned as K π = 4 + , the isomeric state as K π = 1 − (see fig. 36). This assignment was based on the assumed ground-state configuration K π = 0 − of 258 Db and the low α-decay hindrance factor HF ≈ 30 for the transition 258g Db → 254m Lr, which favors K π = 1 − rather than K π = 4 + , as the latter configuration would require an angular momentum change ∆K = 3 and a change of the parity, which would require a much larger hindrance factor (see sect. 7.1).
Here, however, two items should be considered: a) the spin-parity assignment of 258 Db is only tentative, b) the calculations are based on the energies of low lying levels in the neighboring odd-mass nuclei, in the present case 253 No (N = 151) and 253 Lr (Z = 103). The lowest Nilsson levels in 253 No are 9/2 − [734] for the ground-state and 5/2 + [622] for a short-lived isomer at E * = 167 keV [126]. In 253 Lr tentative assignments of the ground-state (7/2 − [514]) and of 1/2 − [521] for a low lying isomer are given in [125]. The energy of the isomer is experimentally not established; for the calculations a value of 30 keV was taken [176]. It should be noted, however, that for the neighboring N = 152 isotope of lawrencium, 255 Lr, the ground-state had been determined as the Nilsson level 1/2 − [521], while 7/2 − was attributed to a low lying isomeric state at E * = 37 keV [172]. Therefore, with respect to the uncertain starting conditions, the results of the calculations, although not in 'perfect agreement' with the experimental results, are still promising and may be improved in the future. It should be noted that the existence and excitation energy of the isomeric state in 254 Lr have been confirmed by direct mass measurements at SHIPTRAP [177], and there is some confidence that spins can be determined in the near future by means of laser spectroscopy using the RADRIS technique [178].
7.7 Attempts to synthesize elements Z = 119 and Z = 120 - Quest for the spherical proton shell at Z = 114 or Z = 120
Although elements up to Z = 118 have been synthesized so far, the quest for the location of the spherical 'superheavy' proton and neutron shells is still open. Indeed, the synthesis of elements up to Z = 118 in 48 Ca induced reactions shows a maximum in the cross sections at Z = 114, which might be seen as an indication of a proton shell at Z = 114 (see fig. 16). Such an interpretation, however, is not unambiguous, since a complete understanding of the evaporation residue (ER) production process (capture of projectile and target nuclei, formation of the compound nucleus, deexcitation of the compound nucleus, competition between particle emission and fission) is required to draw firm conclusions. Indeed, V.I. Zagrebaev and W. Greiner [179] could reproduce cross-sections for elements Z = 112 to Z = 118 produced in 48 Ca induced reactions quite fairly, but evidently a main ingredient of their calculations was quite uncertain: they approximated fission barriers as the sum of the 'shell effects' (according to [18]) and a 'zero-point energy' of 0.5 MeV, which resulted in quite different values than obtained from 'direct' fission barrier calculations (see e.g. [180,181,182]). Due to these uncertainties measured cross sections are not a good argument for the identification of a proton shell at Z = 114. Indeed, on the basis of the results of decay studies of 286,288 Fl and their daughter products, Samark-Roth et al. [108] claimed that there is no real indication for a proton shell at Z = 114. Indirect evidence that the proton shell may not be located at Z = 114 but rather at Z = 120 comes from a recent decay study of 247 Md, where the 1/2 − [521] level was located at E * = 68±11 keV [173]. As shown in fig. 35b, the 3/2 − [521], 7/2 + [633], and 7/2 − [514] Nilsson levels stem from the 2f 7/2 , 1i 13/2 , and 1h 9/2 sub-shells, respectively, which are located below the shell gap at Z = 114, while the 1/2 − [521] level stems from the 2f 5/2 sub-shell located above the Z = 114 shell gap [161]. As seen in fig. 35, this level is predicted as the ground-state of 243 Es by the HFB-SLy4 calculations (fig. 35b), which predict Z = 120 as proton shell, but as an excited level at E * ≈ 400 keV by the macroscopic-microscopic calculations (fig. 35c), which predict Z = 114 as proton shell. The low excitation energy of the 1/2 − [521] level might thus be a hint that the 2f 5/2 sub-shell is lower in energy and that the energy difference E(2f 5/2 ) - E(2f 7/2 ), which is decisive for the occurrence of the shell gap at Z = 114 or at Z = 120 [20], is smaller than predicted by the macroscopic-microscopic models, i.e. that the proton shell indeed may be located at Z = 120. From this point of view it rather seems useful to take the α decay properties as a signature for a shell, as discussed in sect. 7.2. However, one has to note that, strictly speaking, even-even nuclei have to be considered, since only for those can a ground-state to ground-state transition be assumed a priori as the strongest decay line. However, one is presently not only confronted with the lack of experimental data. The situation is shown in fig. 37, where predicted Q α values for the N = 172 and N = 184 isotones are presented. Different to the situation at Z = 82 (see fig. 28), the calculations of Smolanczuk et al.
[17], which predict Z = 114 as proton shell, result in only a rather small decrease of the Q α values when crossing the shell, even at the predicted neutron shell N = 184, compared to the heavier and lighter isotones. At N = 172 there is practically no effect any more; one gets a more or less straight decrease of the Q α values. So probably Q α values could not be applied for identifying Z = 114 as a proton shell even if more data were available. A possibility to decide whether the proton shell is located at Z = 114 or Z = 120 results from the comparison of experimental Q α values and halflives with results from models predicting either Z = 114 or Z = 120. However, one has to consider a large straggling of the predicted values, so it is required to produce and to investigate nuclei in a region where the differences between the results of models predicting either Z = 114 or Z = 120 as proton shells are larger than the uncertainties of the predictions. An inspection of the different models shows that element 120 seems to be the first one where the differences are so large that the quest of the proton shell can be answered with some certainty. The situation is shown in fig. 38, where predicted Q α values and calculated halflives are compared. Despite the large straggling of the predicted α energies and halflives there is seemingly a borderline at E α = 12.75 MeV evident between models predicting Z = 120 as proton shell [183,184,185,186] and those predicting Z = 114 as proton shell [17,18]; halflives <10 −5 s hint at Z = 114, halflives >10 −5 s at Z = 120 as proton shell. This feature makes the synthesis of element 120 even more interesting than the synthesis of element 119. Suited reactions to produce an even-even isotope of element 120 seem to be 249 Cf( 50 Ti,3n) 296 120 (N = 176) and 248 Cm( 54 Cr,2n/4n) 300,298 120 (N = 180,178). Expected cross-sections are, however, small. V.I. Zagrebaev and W. Greiner [187] predicted cross sections of σ ≈ 25 fb for 248 Cm( 54 Cr,4n) 298 120 and a slightly higher value of σ ≈ 40 fb for 249 Cf( 50 Ti,3n) 296 120. So far only a few experiments on the synthesis of element 120, reaching cross-section limits below 1 pb, have been performed: 64 Ni + 238 U at SHIP, GSI with σ < 0.09 pb [188], 54 Cr + 248 Cm at SHIP, GSI with σ < 0.58 pb [97], 50 Ti + 249 Cf at TASCA, GSI with σ < 0.2 pb [189], and 58 Fe + 244 Pu at DGFRS, JINR Dubna with σ < 0.5 pb [190].
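To put such femtobarn cross sections into perspective, the expected number of detected evaporation residues follows from cross section, target thickness, beam dose and overall efficiency. The sketch below uses assumed, illustrative numbers (a 0.5 mg/cm² 248 Cm target and 50 % overall efficiency); only the 25 fb cross section and the order of magnitude of the beam dose are taken from the text.

```python
def expected_events(sigma_fb, target_mg_cm2, target_mass_u, beam_dose, efficiency):
    """Expected number of detected evaporation residues.

    sigma_fb       -- production cross section in femtobarn (1 fb = 1e-39 cm^2)
    target_mg_cm2  -- target areal density in mg/cm^2
    target_mass_u  -- mass number of the target material (g/mol)
    beam_dose      -- total number of projectiles delivered to the target
    efficiency     -- combined separator transmission and detection efficiency
    """
    avogadro = 6.022e23
    atoms_per_cm2 = target_mg_cm2 * 1e-3 / target_mass_u * avogadro
    sigma_cm2 = sigma_fb * 1e-39
    return sigma_cm2 * atoms_per_cm2 * beam_dose * efficiency

# sigma = 25 fb as predicted in [187]; the other numbers are assumptions
n = expected_events(sigma_fb=25.0, target_mg_cm2=0.5, target_mass_u=248,
                    beam_dose=1e20, efficiency=0.5)
print(f"expected events: {n:.1f}")   # about 1.5 events for these assumptions
```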
Challenges / Future
There are two major problems concerning the experimental techniques used in the investigation of superheavy elements. The first is connected with the implantation of the reaction products into silicon detectors, which are also used to measure the α-decay energy, conversion electrons and fission products. This simply means that, e.g., in the case of α decay not only the kinetic energy of the α particle is measured but also part of the recoil energy transferred by the α particle to the residual nucleus. Due to the high ionisation density in the stopping process of the heavy residual nucleus and partial recombination of the charge carriers, typically only about one third of the recoil energy is measured [61]. This results in a shift of the measured α-decay energy by ≈50 keV, which can be compensated by a proper calibration, and a deterioration of the energy resolution of the detector by typically 5 - 10 keV. A second item is more severe. It is connected with the population of excited levels in nuclei which decay promptly (with life-times of some µs or lower) by internal conversion. In these cases energy summing of α particles with conversion electrons (and also low energy X-rays and Auger electrons from the deexcitation of the atomic shell) is observed [62]. The influence on the measured α spectra is manifold, depending also on the energy of the conversion electrons; the essential effects are broadening and shifting of the α energies, often washing out peak structures of the α-decay pattern. An illustrative case is the α decay of 255 No, which has been investigated using the implantation technique [153] and the He-jet technique with negligible probability of energy summing [154]. Specifically, different low lying members of the same rotational band are populated, which decay by practically completely converted M1 or E2 transitions towards the band-head. These fine structures of the α decay spectrum cannot be resolved using the implantation technique (see also [150]). Although in recent years successful attempts have been undertaken to model those influences by GEANT simulations [65], direct measurements are preferred from the experimental side. First steps in this direction have been undertaken recently by coupling an ion trap [191] or an MRTOF system [192] to a recoil separator, and by the BGS + FIONA system, which was used to directly measure the mass number of 288 Mc [193]. Although the mass number measurement is an interesting feature, the ultimate goal is a safe Z and A identification of a nuclide. This can be achieved via high precision mass measurements, allowing for clear isobaric separation (ion traps and possibly also MRTOF systems). Presently limits are set by the production rate. The most direct method to determine the atomic number of a nucleus is measuring characteristic X-rays in prompt or delayed coincidence with its radioactive decay. Such measurements are, however, a gamble as they require highly K-converted transitions (M1, M2) with transition energies above the K-binding energy. The latter is not a trivial problem as the K-binding energies rise steadily and are of the order of 180 keV at Z = 110. Such measurements have been applied so far up to bohrium (Z = 107) [158]. In the region of superheavy nuclei (Z > 112) such attempts have recently been performed by D. Rudolph et al. [67] and J. Gates et al. [68] by investigating the α decay chains starting from the odd-odd nucleus 288 Mc (Z = 115), but no positive result was obtained. Alternatively one can attempt to measure L X-rays to have access to lower energies and also to E2 transitions.
Such measurements have been performed successfully up to Z = 105 [194], but are more complicated due to the more complex structure of the L X-ray spectra. An alternative method for X-ray identification is measuring the X-rays emitted during electron capture (EC) decay in delayed coincidence with α decay or spontaneous fission of the daughter nucleus. This technique has recently been applied successfully for the first time in the transactinide region [100], by measuring K α and K β X-rays from the EC decay of 258 Db in delayed coincidence with spontaneous fission and α decay of the daughter nucleus 258 Rf. Application in the SHN region seems possible; problems connected with that technique are discussed in sect. 7.3.
Transmembrane topology of the sulfonylurea receptor SUR1.
Sulfonylurea receptors (SURx) are multi-spanning transmembrane proteins of the ATP-binding cassette (ABC) family, which associate with Kir6.x to form ATP-sensitive potassium channels. Two models, with 13-17 transmembrane segments, have been proposed for SURx topologies. Recently, we demonstrated that the amino-terminal region of SUR1 contains 5 transmembrane segments, supporting the 17-transmembrane model. To investigate the topology of the complete full-length SUR1, two strategies were employed. Topology was probed by accessibility of introduced cysteines to a membrane-impermeable biotinylating reagent, biotin maleimide. Amino acid positions 6/26, 99, 159, 337, 567, 1051, and 1274 were accessible, therefore extracellular, whereas many endogenous and some introduced cysteines were inaccessible, thus likely cytoplasmic or intramembrane. These sites correspond to extracellular loops 1-3, 5-6, and 8 and the NH2 terminus, and intracellular loops 3-8 and COOH terminus in the 17-transmembrane model. Immunofluorescence was used to determine accessibility of epitope-tagged SUR1 in intact and permeabilized cells. Epitopes at positions 337 and 1050 (putative external loops 3 and 6) were labeled in intact cells, therefore external, whereas positions 485 and 1119 (putative internal loops 5 and 7) only were accessible after permeabilization and therefore internal. These results are compatible with the 17-transmembrane model with two pairs of transmembrane segments as possible reentrant loops.
Sulfonylurea receptors (SURx) 1 are found in many tissues and play a pivotal role in synchronizing electrical excitability with cellular metabolic state. SURx, members of the ATP binding cassette (ABC) family of proteins, associate with inward rectifier Kir6.x in a heterooctameric 4:4 stoichiometry to form ATP-sensitive (KATP) channels (for review, see Refs. 1-3). SURx subunits physically associate with pore forming subunits (Kir6.x) and regulate the flow of potassium ions through the channel. Together, SURx and Kir6.x coordinate ATP and ADP binding with channel opening and closing. Different isoforms of SURx and Kir6.x contribute to distinct nucleotide binding affinities and orchestrate tissue specific electrical responses to metabolism. The role of the channel is perhaps best understood in pancreatic β cells, where SUR1 and Kir6.2 form the KATP channel that is the key mediator of insulin secretion. Increases in blood glucose cause closure of pancreatic KATP channels, triggered by an increase in the cellular ATP to ADP ratio. This results in membrane depolarization, Ca2+ influx, and insulin release (1-3). In vascular smooth muscle, the KATP channel (SUR2B/Kir6.x) is important in the regulation of blood pressure, and in cardiac muscle cells, the channel (SUR2A/Kir6.2) is involved in the response to ischemia. Sulfonylurea drugs and potassium channel openers bind to SURx directly and have been used extensively to regulate KATP channel activity. Accordingly, SURx subunits are drug targets for the treatment of type II diabetes, persistent hyperinsulinemic hypoglycemia of infancy, as well as hypertension (1-3). Elucidating the structure of the SURx family is vital to understanding their functions and is a valuable tool to further drug design.
SURs are multi-spanning integral membrane proteins with predicted molecular masses of ~170 kDa. SUR1 and SUR2A/B are highly homologous with similar hydrophobicity profiles and amino acid sequences (69% identical, 76% similar). Hydrophobicity analysis reveals three hydrophobic domains (TM0, TM1, and TM2), with hydrophilic nucleotide binding folds (NBF1 and NBF2) following TM1 and TM2 (4). The polypeptide within each hydrophobic domain has been hypothesized to cross the plasma membrane between four and six times. Although the transmembrane topology of the TM0 domain has been explored (5), the TM1 and TM2 domains remain undefined. The topology of the SURs has been debated, and considerably different models have been proposed, ranging from 13 transmembrane segments (in a 4+5+4 arrangement in TM0, TM1, and TM2) to 17 transmembrane segments (in a 5+6+6 arrangement) (1, 4-6).
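As a side note on the kind of hydrophobicity analysis referred to here, the sketch below (Python) computes a simple Kyte-Doolittle sliding-window hydropathy profile and flags windows above a commonly used transmembrane threshold. The window length, threshold, and toy sequence are illustrative assumptions only; they are not the parameters or the SUR1 sequence used in the cited analyses.

```python
# Minimal Kyte-Doolittle sliding-window hydropathy sketch.
KD = {  # standard Kyte-Doolittle hydropathy values
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
    'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
    'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
    'Y': -1.3, 'V': 4.2,
}

def hydropathy_profile(seq, window=19):
    """Mean Kyte-Doolittle hydropathy over a sliding window along the sequence."""
    scores = [KD[aa] for aa in seq]
    half = window // 2
    return [sum(scores[i - half:i + half + 1]) / window
            for i in range(half, len(seq) - half)]

def candidate_tm_positions(profile, threshold=1.6):
    """Window centres whose mean hydropathy exceeds a typical TM threshold."""
    return [i for i, h in enumerate(profile) if h > threshold]

toy_seq = "MKTLLVLLLAVVAFASEDD" * 3   # placeholder sequence for illustration only
profile = hydropathy_profile(toy_seq)
print(candidate_tm_positions(profile))
```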
Sequence homology of the SURs to other ABC family members is limited to the two nucleotide binding folds (NBF1 and NBF2). However, hydrophobicity analysis suggests that ABC proteins may have topological similarities and form topologically distinct subfamilies. Comparing the structure of the SURs to other ABC family members is intriguing since the ABC family contains proteins that differ greatly in their function: channels, transporters, and channel regulators, with the SUR members being rare examples of channel regulators. The cystic fibrosis transmembrane conductance regulator (CFTR), a chloride channel involved in cystic fibrosis, and P-glycoprotein (P-gp), a transporter responsible for anticancer drug resistance, have been proposed to typify the ABC proteins, with TM1 and TM2 domains each followed by a nucleotide binding fold, NBF1 and NBF2 (4). Topological characterizations of CFTR and P-gp are compatible with a structural model that contains 12 transmembrane segments in a 6+6 arrangement of TM1 and TM2 (7-10). In contrast, the SURs, as well as multidrug resistance-associated protein (MRP), are members of a small subgroup of ABC proteins that are distinguished by the presence of three TM domains. Little is known of the structure of this subgroup, but partial characterization of MRP topology has been interpreted in the context of a 17-transmembrane segment model (11,12). Attention to the similarities and differences in ABC protein structures may reveal information on the relatedness of proteins and the identification of functional domains.
There have been few structural studies of SURx. Glycosylation studies of SUR1 have indicated two N-linked glycosylation sites at Asn-10 (near the NH2 terminus in the TM0 domain) and Asn-1050 (in the TM2 domain), demonstrating these sites to be extracellular (5,6). Additionally, we have shown previously, using an in vitro protease protection assay with SUR1-prolactin fusion proteins, that the TM0 domain consists of five transmembrane regions (5). These data are consistent with the 17-transmembrane (5+6+6) model.
Experiments here describe a comprehensive dual strategy using a biotinylation assay in conjunction with immunofluorescence to elucidate the topology of full-length SUR1 expressed on the plasma membrane of cultured cells. An impermeant biotinylation reagent was used in a surface labeling assay to identify the external loops of SUR1. A SUR1 construct was created that lacks external cysteines, and then individual cysteine residues were introduced and assayed for accessibility to reagent. Additionally, epitope tags were inserted into postulated internal or external regions of SUR1 and were assayed for immunofluorescence of permeabilized and nonpermeabilized cells to evaluate the internal or external location of the epitope tags. Together, these data indicate that SUR1 spans the membrane 17 times with two of these pairs of transmembrane segments as reentrant loops.
EXPERIMENTAL PROCEDURES
Mutagenesis-In all constructs, hamster SUR1 (a generous gift of Dr. J. Bryan) was tagged at the COOH terminus with the V5 epitope by subcloning into the vector pcDNA3.1/V5HisA (Invitrogen). To facilitate surface expression in the absence of Kir6.2, the SUR1 endoplasmic reticulum retention signal RKR (amino acids 648 -650) was replaced with alanines (SUR1 AAA ) (13). A SUR1 AAA construct lacking external cysteines was produced by replacing endogenous cysteines with serine or alanine. The resulting SUR1 AAA construct containing C6S, C26S, C170A, C1051S, and C1057S was referred to as NEC (no external cysteines). NEC was subjected to PCR mutagenesis to individually introduce cysteines at each putative external loop (T99C, K337C, Y454C, K567C, T1161C, S1186C, and R1274C). Endogenous cysteines Cys-6/Cys-26/Cys-170, Cys-170, and Cys-1051 were reintroduced to NEC with ligation. The Cys-170 construct also contains a conserved K620R mutation. The Flag epitope (DYKDDDDK) was introduced into SUR1 AAA at various positions with PCR mutagenesis. All mutations and epitope insertions were confirmed with restriction enzyme digests, and all regions that were amplified by PCR were sequenced.
Electrophysiology-COSm6 cells were transiently co-transfected with SUR1 AAA constructs, Kir6.2, and green fluorescent protein (GFP) using Fugene6 (Roche Molecular Biochemicals), and were plated onto coverslips. Patch clamp recordings were made 48-72 h following transfection. The standard bath (intracellular) and pipette (extracellular) solution (K-INT) contained 140 mM KCl, 20 mM Hepes, 1 mM K-EGTA, pH 7.3. Currents were recorded from excised inside-out membrane patches exposed to K-INT bath solution or K-INT solution containing 1 mM ATP (as the potassium salt) at −50 mV at room temperature as described previously (14). Micropipette resistance was typically 0.5-1 megohms. Data were analyzed using Microsoft Excel.
Biotinylation of Surface Proteins-COS-1 cells were transiently transfected with SUR1 AAA constructs using Fugene6 (Roche Molecular Biochemicals). Forty-eight hours after transfection, cells were washed three times with PBSCM (phosphate-buffered saline (PBS), containing 0.1 mM CaCl2 and 1 mM MgCl2). Cells were then treated with 1 mM dithiothreitol for 10 min to reduce disulfide bonds. Following three washes with PBSCM, cells were incubated in the presence or absence of 5 mM [2-(trimethylammonium)ethyl]methanethiosulfonate bromide (MTSET, Toronto Research Chemicals) for 30 min. Cells were washed three times with PBSCM and then incubated with 50 µM Nα-(3-maleimidylpropionyl)biocytin (biotin maleimide, Molecular Probes). The reaction was quenched with 2% β-mercaptoethanol, and the cells were washed two times with PBSCM. Cells were solubilized in lysis buffer (150 mM NaCl, 20 mM Hepes, pH 7.0, 5 mM EDTA, 1% Igepal CA-630 (Nonidet P-40) containing protease inhibitors (Complete™, Roche)) by rotation at 4°C for 1 h, and insoluble material was removed by centrifugation for 5 min at 20,800 × g. The relative concentration of SUR1 protein in cell lysates was determined by immunoblot, and equivalent amounts of complex-glycosylated SUR1 were used in each pull-down sample to compare surface accessibility of cysteines. Neutravidin-agarose beads (30 µl; Pierce) were washed three times with lysis buffer and added to cell lysate to pull down biotinylated protein. The mixture was allowed to incubate overnight at 4°C with rotation. The beads were washed three times with lysis buffer, three times with high salt solution (500 mM NaCl, 10 mM Tris, pH 7.5, 0.1% Nonidet P-40), and two times with 50 mM Tris, pH 7.5. Proteins were eluted in 50 µl of SDS sample buffer by incubation at 85°C for 5 min. Proteins were separated by SDS-PAGE (10%), transferred onto nitrocellulose, and blocked with 5% nonfat milk in TBS for 1 h. The blot was incubated with anti-V5-horseradish peroxidase antibody (1:1000, Invitrogen) to detect SUR1 and visualized using enhanced chemiluminescence SuperSignal® West Femto (Pierce).
Biotinylation of Flag-tagged constructs was carried out essentially as described above. Anti-V5 antibody (1:400, Invitrogen) was used to immunoprecipitate Flag-SUR1 AAA constructs after biotinylation. Whole cell lysates were precleared with Protein A-agarose beads (Pierce) for 1 h, incubated with anti-V5 antibody for 1-2 h, and then incubated with Protein A-agarose beads with rotation at 4°C overnight. Proteins were separated on 10% SDS-polyacrylamide gels, transferred to nitrocellulose, incubated with streptavidin-horseradish peroxidase (1:2000, Pierce), and visualized using enhanced chemiluminescence to detect surface Flag-tagged SUR1 AAA constructs.
Immunofluorescence of Flag-SUR1 AAA Constructs-COS-1 cells grown on coverslips were transiently cotransfected with Flag-SUR1 AAA constructs and pEGFP (CLONTECH), a plasmid encoding enhanced GFP, using Fugene6. Mock transfection consisted of co-transfection of SUR1 that contained no Flag epitope and pEGFP. Forty-eight hours after transfection, coverslips were washed two times with PBS and cells were fixed with 4% paraformaldehyde in PBS. To ascertain plasma membrane integrity of individual fixed cells, coverslips were incubated with DEAD® blue stain (Molecular Probes) for 30 min, washed with PBS followed by 1% bovine serum albumin. Coverslips then were incubated for 1 h at 4°C with Block (3% bovine serum albumin, 1% horse serum, in PBS with (permeabilized) or without (non-permeabilized) 1% Nonidet P-40). Cells then were incubated with anti-Flag M2 antibody (1:1000, Sigma) for 1 h at 4°C in Block. Following four washes, cells were incubated with secondary donkey anti-mouse antibody conjugated to Cy3 (1:400, Jackson Laboratory). Coverslips were mounted onto slides with Pro-Long (Molecular Probes) mounting media. Cells were visualized with a 40× objective lens using an Olympus BX60 epifluorescent microscope, and images were recorded with an Optronics CCD camera (DEI-750). For data analysis, images were displayed and analyzed using Photoshop. Cells that were negative for the DEAD blue stain and positive for GFP were scored for Flag labeling in the Cy3 channel. Numeric intensity cutoff criteria were used to determine whether a cell was positive or negative for labeling. At least 50 cells were counted for each experiment with n = 3 experiments for Flag-337, Flag-1050, Flag-1119, and SUR1 no Flag (control) and n = 2 experiments for Flag-485. The average percentage of cells labeled (calculated by dividing the number of Cy3-positive cells by the number of GFP-positive cells) was determined for permeabilized and non-permeabilized experiments. Accessibility was calculated as the average percentage of non-permeabilized cells that were labeled divided by the average percentage of permeabilized cells that were labeled. Student's t test was used to compare each sample with the Flag-1050 external positive control.
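For clarity, the accessibility calculation described above can be summarized in a few lines of Python. The counts and per-experiment accessibilities in this sketch are hypothetical placeholders (the actual per-experiment values are not reproduced here); only the arithmetic and the use of a two-sample t test mirror the procedure described in the text.

```python
# Hypothetical values; only the arithmetic mirrors the described analysis.
from scipy import stats

def percent_labeled(cy3_positive, gfp_positive):
    """Percentage of transfected (GFP-positive) cells that scored Cy3-positive."""
    return 100.0 * cy3_positive / gfp_positive

def accessibility(nonperm_percentages, perm_percentages):
    """Normalized external accessibility:
    mean(% labeled, non-permeabilized) / mean(% labeled, permeabilized)."""
    mean_np = sum(nonperm_percentages) / len(nonperm_percentages)
    mean_p = sum(perm_percentages) / len(perm_percentages)
    return mean_np / mean_p

# Per-experiment accessibilities (hypothetical) for one tag and for the
# Flag-1050 external control, three experiments each.
tag_access = [0.12, 0.08, 0.10]
control_access = [0.95, 1.02, 0.90]

t_stat, p_value = stats.ttest_ind(tag_access, control_access)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"example percent labeled: {percent_labeled(12, 50):.1f}%")
print(f"example accessibility: {accessibility([24.0, 20.0], [80.0, 75.0]):.2f}")
```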
RESULTS
Functional Characterization of SUR1 AAA and SUR1 AAA Mutants-To assess the topology of SUR1, two approaches were employed. In one method, cysteine scanning was used with a biochemical surface labeling assay to determine the location of external loops. For these experiments, endogenous external cysteines were removed, and individual cysteine residues were introduced into the cysteineless background. In the second method, Flag epitope tags were introduced and evaluated for internal or external location using immunocytochemistry.
All mutations and Flag tags were introduced into a SUR1 background with the endoplasmic reticulum retention signal RKR mutated to AAA to promote surface expression (13). It has been demonstrated that SUR1 AAA when co-expressed with Kir6.2 responds to ATP and ADP with a similar concentration dependence as does wild type SUR1 (WT SUR1) co-expressed with Kir6.2 (14), suggesting that mutation of the endoplasmic reticulum retention signal does not alter the structure of the receptor. To maximize surface expression and to avoid labeling of cysteine residues in Kir6.2, SUR1 constructs were expressed in the absence of Kir6.2.
Complex glycosylation was used as an indication of proper folding and surface expression. It has been shown that oligosaccharides play a significant role in the quality control of newly synthesized proteins (for review, see Refs. 15 and 16). All constructs that were evaluated for topological orientation displayed two bands on immunoblots when expressed in COS-1 cells (Fig. 1). The upper band (Fig. 1, filled arrow) migrated with an apparent molecular mass of ~250 kDa, and is the complex glycosylated form of SUR1 as shown previously (5,17,18). Upon treatment with the endoglycosidase peptide:N-glycosidase F, the upper band shifted to an apparent molecular mass of ~180 kDa (data not shown). The lower band of ~180 kDa (Fig. 1, open arrow) is the core glycosylated form of SUR1 and did not change significantly with peptide:N-glycosidase F treatment, in agreement with previous studies showing similar mobility of core glycosylated and unglycosylated SUR1 (5). Introduction of Flag tags in some sites of SUR1 did not produce a complex-glycosylated form when expressed in COS-1 cells (Flag-402, Flag-424, Flag-1189, and Flag-1232), and these were not further studied.
To test the ability of SUR1 constructs to form functional channels when coexpressed with Kir6.2, patch clamp currents were recorded. As reported previously (14), SUR1 AAA coexpressed with Kir6.2 produced functional ATP-sensitive potassium channels (Fig. 2). All Flag-tagged SUR1 AAA constructs coexpressed with Kir6.2 also produced currents, and these currents were inhibited by ATP, consistent with WT SUR1 (Fig. 2; currents were recorded from inside-out patches exposed to K-INT solution with or without ATP as indicated by the bar above the records, with inward currents shown as downward deflections and inhibited by ATP). Conversely, NEC and introduced cysteine mutants did not yield detectable current with patch clamp analysis (data not shown). However, when a combination of cysteines 6, 26, and 170 was reintroduced into NEC, currents were equivalent to SUR1 AAA. Introduction of Cys-170 alone did not rescue channel activity. This suggests that cysteines 6 and 26 may be important in channel activity. Another assay for normal folding is coassembly of SUR1 with Kir6.2. Although NEC and introduced cysteine mutants did not yield functional channels, all constructs that were tested (SUR1 AAA, 6/26/170, and NEC)
were able to coassemble with Kir6.2 as assayed by coimmunoprecipitation (data not shown). Thus, the introduced cysteine SUR1 constructs displayed normal complex glycosylation, coassembly with Kir6.2, and surface expression (see below). They were used to further characterize SUR1 transmembrane topology relying on the stringent glycosylation standard and agreement with results from the Flag-tagged constructs as indications of correct folding.
Biotinylation of Endogenous Cysteines-The accessibility of endogenous cysteines of SUR1 to extracellular biotinylation was determined in order to evaluate the position of external loops and to establish boundaries for internal cysteines. Identification of endogenous external cysteines also allowed us to create a SUR1 construct lacking external cysteines for subsequent introduction of individual cysteines. SUR1 contains 30 endogenous cysteines, but few are in putative external regions.
Surface labeling of transiently transfected COS-1 cells with the cysteine modifying reagent biotin maleimide was used to determine the extracellular accessibility of endogenous cysteines. Following labeling with biotin maleimide, cells were solubilized and a fraction of the whole cell lysate was analyzed by immunoblot to measure relative amounts of glycosylated SUR1 protein in each sample. Surface-biotinylated proteins from whole cell lysates containing equivalent amounts of complex glycosylated SUR1 were pulled down with Neutravidin beads and visualized by immunoblot following SDS-PAGE.
When cells expressing SUR1 AAA were biotinylated by this procedure, a band of 250 kDa was labeled, corresponding to the complex glycosylated form of SUR1 (Fig. 3A, arrow). Additional upper bands also were sometimes observed, most likely a result of aggregation that occurs due to the high temperature required to elute biotinylated SUR1 from the Neutravidin beads.
To demonstrate that only external cysteines of SUR1 were labeled, and that biotin maleimide did not permeate the plasma membrane at the concentration used, control samples were prelabeled with the well established impermeant sulfhydryl reagent MTSET. SUR1 AAA biotinylation was blocked in an MTSET-dependent manner, demonstrating that biotin maleimide does not permeate the plasma membrane (Fig. 3A). This demonstrates that WT SUR1 AAA contains endogenous cysteines that are external and accessible to biotin maleimide.
To assess which SUR1 cysteines contribute to surface labeling, five putative external cysteines (cysteines 6, 26, 170, 1051, and 1057) were replaced with serine or alanine. This construct, named NEC, was not biotinylated with biotin maleimide (Fig. 3A). When Cys-6/Cys-26/Cys-170 were introduced back into NEC, biotinylation was restored (Fig. 3A). Conversely, when Cys-170 was reintroduced alone, no biotinylation was detected, indicating that position 170 is inaccessible, possibly residing in the transmembrane region. When Cys-1051 was introduced into NEC, again, biotinylation was restored (Fig. 3A). These data, in parallel with glycosylation and protease protection data (5), establish that the NH 2 terminus and position 1051 (putative external loop 6) are external. Further, the absence of labeling in NEC indicates that all other endogenous cysteines are likely to be either internal or transmembrane. The location of identified endogenous external cysteine residues (Cys-6, Cys-26, and Cys-1051) and glycosylation sites, and predicted location of Cys-1057 are depicted in the context of a 17-transmembrane model (Fig. 3B, green). The diagram also illustrates regions that are proposed to be internal (Fig. 3B, orange) based on inaccessibility of endogenous cysteines in hydrophilic segments.
Biotinylation of Introduced Cysteines-To identify additional external loops of SUR1, cysteines were introduced into NEC systematically and evaluated as described above for endogenous cysteines to assess surface accessibility of putative external sites. Amino acids Thr-99, Asp-159, Ala-161, Lys-337, Arg-388, Tyr-454, Lys-567, Thr-1161, Ser-1186, Pro-1162, Arg-1274, and Arg-1300 were individually replaced with cysteines and tested for accessibility to reagent. Several of the mutations (A161C, R388C, P1162C, and R1300C) produced SUR1 proteins that did not show a complex glycosylated form, and these were not evaluated further. However, all others produced both complex and core glycosylated forms (Fig. 1). Positions 99, 159, 337, 567, and 1274, corresponding to putative external loops 1, 2, 3, 5, and 8 in the 17-transmembrane model, were all accessible to biotin maleimide and blocked by pretreatment with MTSET (Fig. 4A). Position 1186, corresponding to an area postulated to reside in the cytoplasmic side of the membrane (putative internal loop 8) did not label with biotin maleimide. These results, summarized in Fig. 4B, are consistent with the proposed 17-transmembrane topology of SUR1 (4, 5). However, positions 454 and 1161, hypothesized to be in short external loops 4 and 7, were not labeled (Fig. 4A).
To further probe accessibility of potential membrane interface regions, higher concentrations of biotin maleimide (up to 0.5 mM) were used with the Cys-454 and Cys-1161 constructs. Labeling did not exceed that of NEC, indicating the cysteines to be well protected and potentially residing within the plasma membrane (data not shown). Additionally, attempts to "boost" the cysteine in both positions into a more aqueous accessible environment by inserting two alanines before and after Cys-454, one alanine before and after Cys-1161, or two glycines before Cys-1161 and one following, resulted in constructs that did not glycosylate (data not shown).
These results demonstrate putative external loops 1, 2, 3, 5, and 8 to be accessible to biotinylating reagent and external. Putative external loops 4 and 7 were inaccessible, possibly residing in or near the transmembrane region (Fig. 4B).
Immunofluorescence of Flag-tagged SUR1 AAA Constructs-To confirm the location of external loops of SUR1 that were determined by surface biotinylation, as well as to investigate internal regions, Flag tags were inserted into various locations of SUR1 and probed for accessibility to antibody in the presence and absence of detergent. Labeling in the absence of detergent reflects positions external to the plasma membrane, whereas labeling that requires detergent indicates positions that are internal to the plasma membrane.
Transiently transfected COS-1 cells expressing Flag-tagged SUR1 AAA constructs and GFP, to indicate transfection, were fixed with paraformaldehyde and incubated with DEAD blue. Cells were stained blue when the plasma membrane integrity was compromised during fixation, and were excluded from analysis. Cells expressing GFP that were not stained blue were scored for antibody labeling in the presence or absence of detergent. Flag tags inserted at positions 337 and 1050 showed labeling both in the presence and absence of detergent, confirming the external position of loops 3 and 6 (Fig. 5). In contrast, Flag tags inserted at positions 485 and 1119 were only accessible to antibody labeling upon permeabilization, indicating cytoplasmic orientations of these sites (Fig. 5). For statistical comparison of labeling, Flag-1050 was used as a positive control for external labeling, as site 1050 has been established to be external by cysteine accessibility (Fig. 3) and glycosylation (5, 6). Quantitation of labeling in Table I shows that the accessibilities of Flag-485 and Flag-1119 were significantly reduced when compared with Flag-1050 using a Student's t test; thus, sites 485 and 1119 are internal. There was no significant difference, however, between labeling of Flag-337 when compared with Flag-1050; thus, both 337 and 1050 are external. Flags inserted at positions 402, 424, 1189, and 1232 did not glycosylate and were not assayed.
To establish that all tagged constructs utilized were present on the plasma membrane, surface biotinylation was used. For all tagged constructs (Flag-337, Flag-485, Flag-1119, and Flag-1050), the complex glycosylated form of SUR1 was labeled by biotin maleimide (Fig. 6). This surface biotinylation indicates that all constructs had trafficked to the plasma membrane, and strengthens the conclusion from immunofluorescence that sites 485 and 1119 are in a cytoplasmic orientation (Fig. 5).

FIG. 5. Immunofluorescence of Flag-tagged constructs. COS-1 cells expressing Flag-tagged SUR1 AAA constructs and GFP (as a transfection control) were labeled with anti-Flag M2 antibody, followed by labeling with secondary Cy3-conjugated antibody in the presence or absence of detergent. Representative micrographs show that the Flag epitope was accessible in non-permeabilized cells in the Flag-337 and Flag-1050 constructs, but was inaccessible in the Flag-485 and Flag-1119 constructs. All cells that were analyzed have intact plasma membranes, as determined by the absence of labeling with DEAD blue (data not shown).

DISCUSSION

Hydrophobicity analysis has led to many possible topology profiles for SUR1 (4-6). SUR1 topology has been predicted with the use of sophisticated algorithms that consider amino acid hydrophobicity, charge, polarity, and distributions. However, each prediction program yielded a different model of SUR1 topology, and the location and number of transmembrane segments varied considerably from 13 to 18 (4-6). Although hydropathy profiles are important and can offer insights, in the case of SUR1, experimental data are necessary to define the topology. Our previous study of SUR1 topology provided evidence to favor a 17-transmembrane-spanning model. That investigation targeted the amino-terminal region of the protein and showed that the amino-terminal TM0 domain consists of five transmembrane segments (5).
TABLE I
Accessibility of Flag-tagged SUR1 constructs. COS-1 cells co-transfected with Flag-tagged SUR1 constructs and a plasmid encoding enhanced green fluorescent protein were scored for antibody labeling in non-permeabilized and permeabilized cells. External accessibility, defined as the normalized percentage of cells labeled, was determined by comparison of the percentage of cells labeled without and with permeabilization. The mean values ± S.E. are shown for n = 3 (constructs Flag-1050, Flag-337, and Flag-1119) and n = 2 (Flag-485). A Student's t test was performed to compare external accessibility of each construct with Flag-1050, an established external site of SUR1 that has similar accessibility to labeling independent of permeabilization. The accessibilities of Flag-485 and Flag-1119 were significantly different from that of Flag-1050, establishing the 485 and 1119 sites as internal. In contrast, Flag-337 accessibility was not significantly different from Flag-1050, indicating site 337 to be external. *, statistically significant, indicating that these sites are not localized on the external side.

The data presented here provide an extensive investigation of SUR1 topology. Using a surface labeling biotinylation assay in conjunction with epitope insertion, we mapped the topology of SUR1 in the context of the intact protein. We provide direct evidence to support the 17-membrane-spanning model and suggest that two pairs of these transmembrane segments may form reentrant loops (Fig. 7).

Seventeen-transmembrane Segment Model of SUR1 Topology-Surface biotinylation has been used successfully to probe the topology of many transmembrane proteins (9, 19-21). Although the number of endogenous cysteines made the assay impractical for directly examining internal regions, cysteine accessibility provided stringent boundaries for defining the position of external loops. Endogenous or introduced cysteines inserted into NEC (lacking cysteines 6, 26, 170, 1051, and 1057) were labeled with the cysteine modifying biotinylating reagent, biotin maleimide, and evaluated for external labeling. Our previous topology study of SUR1, limited to the TM0 domain, provided a good starting point for investigation (5).
Direct analysis of the location of internal regions of SUR1 using the biotinylation assay is limited due to the large number of internal cysteine residues. Interpretation, however, can be made from the absence of cysteine labeling. Twenty-six of 30 endogenous cysteines did not label, suggesting their location to be intramembrane or cytoplasmic (Fig. 3). Additionally, introduction of Cys-1186 in putative internal loop 8, did not label, indicating this region to be internal (Fig. 4). The absence of biotinylation of cysteine residues in all of the hydrophilic internal loops 3-9 provides strong, although indirect, evidence of their cytoplasmic localization.
Insertion of epitope tags into integral membrane proteins has been used frequently to determine topology (8, 10 -12). Flag tags were inserted into SUR1 AAA at several positions. Flags inserted at external loops 3 (position 337) and 6 (position 1050) were confirmed to be external with antibody labeling of nonpermeabilized cells (Fig. 5). These data complement the biotinylation data by supporting the topology derived from the SUR1 cysteine constructs. Additionally, epitope labeling is useful to directly ascertain the position of putative internal loops. Flags inserted at position 485 and 1119, corresponding to internal loops 5 and 7, were shown only to label with permeabilization (Fig. 5). These loops flank NBF1, an ATP binding domain known to reside within the cytoplasm (22)(23)(24). There has been much interest in the binding and coordination of ATP. It has been shown that drug binding influences ATP binding (25,26). Indeed, internal and transmembrane regions have been shown to be the primary sites involved with drug binding (27)(28)(29). Understanding the architecture of SUR1 may give insights to this mechanism. Reentrant Loops in Membrane-spanning Segments 8/9 and 14/15-The hydropathy profile of SUR1 suggests that there may be two additional external loops: putative loop 4 bracketed by transmembrane segments 8 and 9, and putative loop 7 bracketed by transmembrane segments 14 and 15. Close inspection of the hydrophobic regions that make up the proposed membrane-spanning segments 8/9 and 14/15 reveals only a limited number of amino acids available to form external loops, of which few are hydrophilic. Based on position and hydrophilicity, Tyr-454 and Thr-1161 were the likeliest candidates for externally residing residues. When replaced with cysteines, these positions were inaccessible to external biotinylating reagent. Attempts to probe regions at the transmembrane interface using increased concentrations of biotin maleimide also indicated that these sites were inaccessible. Topology studies of P-gp used a comparable strategy to examine the two small analogous putative loops. As with SUR1, attempts to label these P-gp regions with increased reagent concentration or incubation time did not result in detectable biotinylation (9). Conversely, glycosylation studies with CFTR revealed the two analogous loops to be external (7). In those studies, however, amino acids were added to introduce glycosylation sites, which may have contributed to the accessibility of the region in CFTR. Epitope insertion studies of P-gp were unable to map the region equivalent to putative external loop 4 of SUR1, but showed that the region analogous to external loop 7 is external (10). In that study, the addition of three tandem hemagglutinin tags (27 amino acids) was required to obtain accessibility to labeling for the P-gp loop analogous to SUR1 external loop 7, and the site was inaccessible with the addition of only one to two tandem hemagglutinin tags (10). In SUR1, attempts at "boosting" positions 454 and 1161 to a more accessible position with the introduction of alanines or glycines on either side of the cysteine resulted in SUR1 constructs that lacked complex glycosylation and were not used in analysis.
Do putative external loops 4 and 7 extend into the extracellular space? Evidence suggests that these short loops may not be exposed to the surface of the cell, and instead they may form reentrant segments that do not entirely cross the membrane. The hydrophobic putative membrane-spanning region 14/15 consists of only 33 amino acids (residues 1146-1178). This length of hydrophobic stretch is too small to traverse the membrane twice as an α-helix, in which 18-20 amino acids per membrane span are required. In addition, the absence of a hydrophilic segment in the middle, and the lack of accessibility of this segment in SUR1 to modifying reagents, suggest that this region does not completely cross the membrane. Accessibility of the analogous site in P-gp and CFTR was only measured after insertion of additional amino acids (7,10). Similarly, the hydrophobic stretch for putative membrane-spanning region 8/9 (residues 428-478) lacks a hydrophilic segment that may correspond to an external loop, and is inaccessible in SUR1 and P-gp (but accessible in CFTR after addition of amino acids). In contrast to the 14/15 segment, the 8/9 region is sufficiently long (50 amino acids) to span the membrane twice in an α-helical structure. However, its strongly hydrophobic makeup and lack of external accessibility suggest that it also forms a reentrant membrane loop. Interestingly, the putative 8/9 and 14/15 transmembrane regions each contain a pair of proline residues that is conserved among SUR, MRP, P-gp, and CFTR subfamilies, and we speculate that these prolines may participate in the unique structural arrangement of these segments.
Functional Implications of SUR1 Mutations-Experiments suggest that the two external cysteines 6 and 26 are required for expression of current. These cysteines are highly conserved, and are present in mammalian SUR1 and SUR2, Drosophila SUR, the MRP proteins, as well as Bpt1p from yeast (6, 30 -32). Since preincubation with a reducing agent is necessary for subsequent modification by sulfhydryl reagents in WT SUR1, it is possible that disulfide bonds are present and necessary for channel function. Although SUR1 constructs lacking Cys-6 and Cys-26 did not show detectable currents with patch clamp analysis, topological data from the cysteine mutants are consistent with epitope mapping of Flag-tagged constructs that do form functional channels (Fig. 2), suggesting that cysteine mutants retain normal topological structure.
Although the exact role of glycosylation of SUR1 is not understood, oligosaccharides have been shown to play an important role in protein folding (reviewed in Refs. 15 and 16) and have been correlated with surface expression of SUR1 (13). Consequently, glycosylation of SUR1 constructs was imposed as a requirement for analysis. In addition to the sites discussed, many sites subjected to mutation or Flag insertion resulted in constructs that did not glycosylate. Regions that showed sensitivity to disruption of glycosylation were found in internal loops 4, 5, 8, and 9 and external loop 2. These regions may have some importance in the proper folding of SUR1. Previous studies of insertion of epitope tags into MRP and P-gp similarly showed that function was more often impaired by insertion of tags into intracellular loops (10,12).
Concluding Remarks-The data presented here support a 17-transmembrane model for the topological structure of SUR1 with two pairs of transmembrane segments as possible reentrant loops (Fig. 7). The transmembrane segments are arranged in a 5+6+6 topology in the TM0, TM1, and TM2 domains of SUR1. It is likely that SUR2A and SUR2B have identical topologies to SUR1, given their high sequence homology, similar hydrophobicity profiles, and similar functions. Comparison of the topological structure of SUR1 with other ABC proteins suggests structural similarities among the proteins despite significant functional differences. MRP, a transporter involved in drug resistance of cancer cells, is the protein most closely related to SUR1. Partial mapping of MRP topology is consistent with the 5+6+6 topology model proposed for SUR1 (11,12). Determination of SUR1 topology also indicates the close relationship of SUR1 to the more common ABC proteins with the 6+6 conformation such as CFTR and P-gp, and may have broad implications for the structural relationships among ABC proteins.
Research on the Recognition of Various Muscle Fatigue States in Resistance Strength Training
Instantly and accurately identifying the state of dynamic muscle fatigue in resistance training can help fitness trainers to build a more scientific and reasonable training program. By investigating the isokinetic flexion and extension strength training of the knee joint, this paper tried to extract surface electromyogram (sEMG) features and establish recognition models to classify muscle states of the target muscles in the isokinetic strength training of the knee joint. First, an experiment was carried out to collect the sEMG signals of the target muscles. Second, two nonlinear dynamic indexes, wavelet packet entropy (WPE) and power spectrum entropy (PSE), were extracted from the obtained sEMG signals to verify the feasibility of characterizing muscle fatigue. Third, a convolutional neural network (CNN) recognition model was constructed and trained with the obtained sEMG experimental data to enable the extraction and recognition of EMG deep features. Finally, the CNN recognition model was compared with multiple support vector machines (Multi-SVM) and multiple linear discriminant analysis (Multi-LDA). The results showed that the CNN model had a better classification accuracy. The overall recognition accuracy of the CNN model applied to the test data (91.38%) was higher than that of the other two models, which verified that the CNN dynamic fatigue recognition model based on subjective and objective information feedback had better recognition performance. Furthermore, training on a larger dataset could further improve the recognition accuracy of the CNN recognition model.
Introduction
Among existing fitness training methods, resistance strength training, also known as "anaerobic training", is the most effective training method for muscle building and shaping [1,2]. Isokinetic exercise is one type of resistance strength training. The joints of the human body keep rotating at a constant speed during the whole exercise process, and the exercise resistance changes with the change in the muscle force output. It is recognized as the most advanced muscle strength training method, and it is considered a relatively safe and proper strength training method [3][4][5]. However, there are several problems when ordinary people undergo isokinetic strength training: (1) insufficient training, resulting in poor training effects; (2) excessive training, which may cause muscle damage. Exercise physiology research has shown that in resistance training, it is necessary to evaluate the initial fatigue state of the muscle, and then set the corresponding scientific exercise load and volume, so as to train the muscle to the extreme fatigue state, which is beneficial to muscle recovery and regeneration and for increasing muscle size and strength [6,7]. Therefore, how to accurately identify the initial fatigue state and extreme fatigue state of the relevant muscles during exercise has become a key question in the scientific research of resistance training.
Existing research on static muscle fatigue identification is relatively mature. Generally, sensors are used to collect sEMG signals, time and frequency domain features are extracted, and classification algorithms such as SVM, LDA, CNN, or LSTM are constructed to identify muscle fatigue status based on the subjective and objective fatigue level of the human body [8,9]. According to these studies, the sEMG signal of static muscle contraction has a good regularity [10][11][12]. However, during dynamic muscle contraction, because of the influence of various factors, the nonlinearity and non-stationarity of the sEMG signal increase significantly. There are various uncertainties in the time and frequency domain features, which leads to greatly reduced effectiveness in identifying muscle fatigue states [9,13]. For instance, the effectiveness of an integral electromyogram (iEMG) in characterizing muscle fatigue changes when different subjects exercise at different intensities [14]. During walking and running, the mean power frequency (MPF) does not change significantly due to the joint effect of increased temperature in the muscles and increased fatigue [15]. In addition, MPF does not show a downward trend when trainers are engaged in medium- and low-intensity sports [16]. Therefore, the recognition rate is affected when time and frequency domain features are used to identify dynamic muscle fatigue.
Due to the complexity of physiological systems, sEMG signals have the characteristics of chaotic signals [17][18][19]. Therefore, researchers have proposed extracting nonlinear dynamic fatigue characteristics from sEMG signals [20]. For example, multi-scale entropy [21], fractal dimension [22,23], the Lyapunov index [24], wavelet packet entropy [25,26], power spectrum entropy [27,28], etc., have been proven to be good indicators of muscle fatigue. Although many nonlinear dynamic features are more effective in identifying the fatigue state of muscles in resistance training than the time- and frequency-domain linear features, extraction of nonlinear features is a manual process. There is information loss, and the actual recognition accuracy is not high because of the amount of calculation needed for feature extraction and fatigue state classification, so there is room for further improvement. For example, Karthick et al. used a support vector machine (SVM) to combine various features, feature classifiers, and feature selection techniques to obtain a variety of muscle fatigue recognition models. The highest recognition accuracy was 91%, but the recognition accuracy of most models was only 60-88% [29].
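As an illustration of one of the nonlinear features mentioned above, the sketch below (Python) computes the power spectrum entropy of a single sEMG window as the Shannon entropy of its normalized power spectral density. The sampling rate, window length, and the synthetic test signal are placeholder assumptions, not the settings used in this study.

```python
import numpy as np
from scipy.signal import welch

def power_spectrum_entropy(x, fs=1000.0):
    """Shannon entropy of the normalized power spectral density of one sEMG window."""
    _, pxx = welch(x, fs=fs, nperseg=min(256, len(x)))
    p = pxx / np.sum(pxx)     # normalize the PSD so it behaves like a probability distribution
    p = p[p > 0]              # drop zero bins to avoid log(0)
    return float(-np.sum(p * np.log(p)))

# Synthetic 1-second "signal" standing in for a real sEMG window.
fs = 1000.0
x = np.random.default_rng(0).standard_normal(int(fs))
print(f"power spectrum entropy of the example window: {power_spectrum_entropy(x, fs):.3f}")
```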
The existing research mainly focused on the identification and classification of fatigue and non-fatigue states. However, in actual training, the initial fatigue state of the trainers' related muscles is different, and different initial muscle states need to be evaluated to match more scientific training guidance. This paper aimed to study the identification methods of different initial fatigue states and extreme fatigue states, followed by the division into four fatigue states: relaxed, a little tired, very tired, and extremely tired [30]. Knee flexion and extension training is one of the most effective methods of leg strength training. This paper constructed an isokinetic knee joint training experiment to explore an appropriate method which could recognize four muscle fatigue states in actual anaerobic training.
Therefore, the following work was carried out to attain the above research purpose. Firstly, to avoid the impact of uncertainties in the time and frequency domain features, wavelet packet entropy and power spectrum entropy were extracted from the sEMG signals of the relevant muscles to verify the feasibility of characterizing different degrees of muscle fatigue. Then, two algorithms, Multi-SVM and Multi-LDA, were constructed to classify and identify fatigue, respectively. Secondly, to avoid the loss of information caused by the separation of feature extraction and decoding modules in traditional methods, this paper also attempted to build a CNN model based on deep learning theory, which requires less training data, a shorter training time, and a lower application cost compared with other neural network algorithms mentioned in existing research. Operating directly on the original signal, a CNN can extract wider, deeper, and more discriminative feature information than traditional manual feature extraction methods, so as to classify the sEMG of the target muscles according to the degree of fatigue in the isokinetic strength training of the knee joint. Finally, using the experimental data, we compared the CNN exercise fatigue model with the two methods of manual fatigue feature extraction and classification (i.e., Multi-SVM and Multi-LDA) to obtain the fatigue recognition model with the highest accuracy.
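Since the exact network architecture and hyperparameters are not reproduced here, the following minimal PyTorch sketch only illustrates the general idea of a 1-D CNN that maps a multi-channel sEMG window (six channels, one per target muscle) to one of the four fatigue classes; all layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FatigueCNN(nn.Module):
    """Illustrative 1-D CNN mapping a six-channel sEMG window to four fatigue classes."""
    def __init__(self, n_channels=6, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),     # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                # x: (batch, channels, samples)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)        # raw class scores (logits)

model = FatigueCNN()
dummy = torch.randn(8, 6, 1000)          # a batch of 8 one-second windows at an assumed 1 kHz
print(model(dummy).shape)                # torch.Size([8, 4])
```

In practice such a network would be trained with a standard cross-entropy loss on labeled sEMG windows, with the four RPE-derived fatigue classes as targets.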
Subjects
In this experiment, 64 healthy men (n = 64; age: 25.8 ± 1.85 years; height: 174.72 ± 3.88 cm; weight: 64.75 ± 3.42 kg) with similar body sizes were recruited as the participants. Due to the influence of age and gender on local RPE [31], and considering the safety of this experiment, men of a similar age were selected as subjects. All participants were informed of the purpose of the experiment, the experimental procedure, the expected duration of each experimental stage, and the possible risks. They all signed an informed consent form.
Data Acquisition
In this experiment, subjects were required to perform isokinetic knee flexion and extension strength training on isokinetic testing training equipment. As the calf muscles are mainly involved in knee flexion and extension, considering the different functions of the muscles in the knee flexion and extension exercise [32][33][34], six muscles of the right leg were selected. The name, electrode location, and sEMG signal acquisition channels of the target muscles are listed in Table 1.
Table 1. Name, electrode location, and sEMG signal acquisition channel of the six target muscles of the right leg, including the semimembranosus (SE).
sEMG signals of the target muscles were collected throughout the experiment with the MP150 multi-channel physiological signal acquisition system. The German IsoMed2000 system was used as the isokinetic testing and training device. The equipment is shown in Figure 1.
During the experiment, the experimenter recorded the subjective muscle fatigue through the corresponding rating of perceived exertion (RPE) scale, as listed in Table 2. In order to reduce the differences in how subjective fatigue is perceived among different subjects, the real-time fatigue of the subjects was obtained on the 6-20 grade scale in this experiment and was finally divided into 4 muscle fatigue classification labels: relaxed (6-12), a little tired (13-16), very tired (17-18), and extremely tired (19-20).
Table 2. Rating of perceived exertion (RPE) scale and the corresponding muscle fatigue classification labels.
Evaluation grade   Subjective exercise fatigue   Classification label
6                  Not hard at all               Relaxed
7-8                Extremely relaxed             Relaxed
9-10               Very relaxed                  Relaxed
11-12              Relaxed                       Relaxed
13-14              A little tired                A little tired
15-16              Tired                         A little tired
17-18              Very tired                    Very tired
19                 Extremely tired               Extremely tired
20                 Try the best                  Extremely tired
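A small helper, sketched below in Python, makes the binning of RPE grades into the four classification labels explicit; the grade boundaries follow the description above.

```python
def fatigue_label(rpe_grade):
    """Map an RPE grade (6-20) to the four-class fatigue label used in this study."""
    if not 6 <= rpe_grade <= 20:
        raise ValueError("RPE grade must be between 6 and 20")
    if rpe_grade <= 12:
        return "relaxed"
    if rpe_grade <= 16:
        return "a little tired"
    if rpe_grade <= 18:
        return "very tired"
    return "extremely tired"

print([fatigue_label(g) for g in (6, 13, 17, 20)])
```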
Experimental Process
The experimental process is shown in Figure 2.
Pre-Experiment Preparation
To reduce impedance, the body hair at the electrode sites was removed and the skin was wiped with alcohol to remove surface stains. After the alcohol had air-dried, Ag-AgCl electrodes were attached and connected to the sEMG acquisition channels through a wireless connection. Subsequently, the subjects applied force at random, and the experimenter checked the collected EMG signals to verify that the connection worked.
Afterwards, the subjects were instructed to sit on the isokinetic training equipment, and the knee joint training devices were adjusted. The experimenter familiarized them with the use of the equipment and, at the same time, observed the sampled muscle strength information to ensure the normal operation of the isokinetic equipment.
Acquisition of Experimental Basic Data
According to fitness guidance, 60°/s speed can be used to train muscle endurance in knee joint strength training, and 180°/s speed can be used to train muscle explosive force [35]. Therefore, this study chose these two modes for training experiments.
First, the participants performed right leg (single leg) knee joint flexion and extension strength training at a fixed slow speed (60°/s). The participants were required to exert maximum muscle strength throughout the training, and the range of motion of the joint was set to 15°-105° (0° is knee extension to the horizontal level). A total of four continuous movements were tested without interruption (completing knee flexion and extension was considered one movement). After all the slow speed tests were completed, they rested for 10 min and then performed knee isokinetic training at a fast speed (180°/s). Similarly, a total of four continuous movements were tested without interruption [4,36,37].
In the above tests at the two speeds, the MP150 multi-channel physiological signal acquisition system was used to collect the sEMG signals synchronously and obtain the dynamic changes in sEMG during maximum voluntary contractions (MVCs) for each muscle at each speed. To eliminate the impact of individual differences, the four segments of MVC data from each group were used to generate a mean dynamic MVC for each muscle.
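The averaging of the four MVC segments into a mean dynamic MVC can be sketched as follows. This is an illustrative re-implementation (the study used MATLAB); the resampling length and the rectification step are assumptions.

```python
# Illustrative sketch: average four time-normalized MVC segments of one
# muscle into a mean dynamic MVC envelope.
import numpy as np

def mean_dynamic_mvc(segments, n_points=120):
    """segments: list of 1-D arrays, one rectified sEMG segment per MVC repetition.
    Each segment is resampled to n_points samples and the four are averaged."""
    resampled = []
    for seg in segments:
        seg = np.abs(np.asarray(seg, dtype=float))        # rectify
        x_old = np.linspace(0.0, 1.0, seg.size)
        x_new = np.linspace(0.0, 1.0, n_points)
        resampled.append(np.interp(x_new, x_old, seg))    # time normalization
    return np.mean(resampled, axis=0)                     # mean dynamic MVC

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    four_reps = [rng.normal(size=int(rng.integers(300, 400))) for _ in range(4)]
    print(mean_dynamic_mvc(four_reps).shape)  # (120,)
```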
Knee Joint Isokinetic Training Fatigue Experiment
When training at the two speeds, the participants exerted maximum muscle strength throughout all 40 movements [37]. The sEMG signals of the target muscles were collected by the MP150 multi-channel physiological signal acquisition system, and AcqKnowledge 4.2 was used to record the data. The IsoMed 2000 isokinetic test trainer recorded the participants' exertion in real time, and the experimenter recorded each participant's perceived muscle fatigue every other movement cycle (3 s at the slow speed and 1 s at the fast speed) through the RPE scale.
Experimental Data Processing and Analysis
The experimental data were exported and analyzed by SPSS 26.0. MATLAB R2022b was used for programming recognition models.
Preprocessing of Experimental Data
In the above exercise fatigue experiment, the sEMG signals of the six target muscles of each participant were obtained at the two exercise speeds. At 60°/s and 180°/s, the participants completed one knee flexion and extension movement in 3 s and 1 s, respectively. After segmenting the raw sEMG data according to the starting point of each movement, we performed time normalization, so that each participant had two groups of sEMG signal sequences, one with 3 s per segment and the other with 1 s per segment. The segments at the two speeds were then sampled with windows of 1 s and 250 ms, respectively.
Because of the large differences in the sEMG feature values among the participants, the raw sEMG values were not suitable to be used directly for subsequent feature analysis and model input. In this study, the MVC value (sEMG_MVC) was used as the benchmark to preprocess the sEMG data: the sEMG amplitude of each channel was normalized to the corresponding sEMG_MVC, i.e., expressed relative to the mean dynamic MVC of that muscle. After this normalization, the preprocessed data were obtained, and, according to the four fatigue states defined in the experiment, the fatigue state represented by each sEMG sample at each sampling time point was labeled.
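A minimal sketch of this MVC-based normalization is shown below. The exact formula was not legible in the source, so the percentage form used here is an assumption consistent with the description above.

```python
# Hedged sketch of MVC-based preprocessing: each channel's sEMG amplitude is
# expressed relative to that muscle's mean dynamic MVC value (%MVC).
import numpy as np

def normalize_to_mvc(semg, semg_mvc):
    """semg: array of shape (channels, samples); semg_mvc: per-channel MVC
    reference values of shape (channels,). Returns %MVC values."""
    semg = np.abs(np.asarray(semg, dtype=float))
    semg_mvc = np.asarray(semg_mvc, dtype=float).reshape(-1, 1)
    return 100.0 * semg / semg_mvc

if __name__ == "__main__":
    x = np.random.randn(6, 120)            # six muscles, one 1-s window
    mvc = np.full(6, 2.5)                  # hypothetical per-muscle MVC amplitudes
    print(normalize_to_mvc(x, mvc).shape)  # (6, 120)
```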
sEMG Signal Processing and Analysis
Based on existing research [38], this experiment extracted the PSE and WPE from the sEMG signal data and analyzed the correlation between these features and dynamic muscle fatigue. According to the number of action cycles, the sEMG signals were divided into 40 groups of action-cycle data and normalized according to sEMG_MVC. Power spectrum analysis and wavelet packet transformation were then performed on each group of sEMG signals to compute the PSE and WPE used to assess muscle fatigue.
Both the PSE and the WPE are computed from a frequency band decomposition and an energy decomposition. First, the entire signal sample was decomposed into z sub-bands. The bandwidths of the sub-bands were divided unequally and adjusted by the experimenter. The power spectrum energy E_x (x = 1, 2, 3, ..., z) of each sub-band was calculated by the fast Fourier transform, giving the total power spectrum energy

E = Σ_{x=1}^{z} E_x .

Second, the probability density distribution was used to represent the energy distribution of the sEMG signal over the sub-bands. The energy probability of each frequency sub-band is the ratio of its power spectrum energy to the total power spectrum energy,

P_x = E_x / E ,

where P_x is the normalized signal energy of the x-th sub-band, 0 ≤ P_x ≤ 1 (x = 1, 2, 3, ..., z), and Σ_{x=1}^{z} P_x = 1.

(1) The PSE measures the complexity of the signal through the energy distribution over the frequency bands. According to the Shannon information entropy definition, the PSE is

PSE = −Σ_{x=1}^{z} P_x ln P_x .

A larger PSE value indicates a more uniform energy distribution over the sub-bands and a more complex sEMG signal distribution. The PSE can therefore evaluate the complexity of the sEMG and, in turn, muscle fatigue.
Taking the VM as an example, the PSE results are shown in Figure 3. At both speeds, the PSE showed a steady downward trend over the course of the movements. The PSE had a highly significant negative correlation with the number of movements (p < 0.01), with correlation coefficients of r = 0.869 (slow) and r = 0.818 (fast). When moving at 60°/s, the power spectrum entropy decreased slowly; the slope of the linear fit was −0.0156. When moving at 180°/s, the slope of the linear fit was −0.0204.
(2) The WPE is based on the wavelet transform. The sub-bands that were not subdivided are further refined and decomposed by wavelet packet analysis, and the original signal is reconstructed; the energy of the reconstructed signal of each sub-band is then calculated, which improves the frequency resolution. The signal is first divided into z sub-bands, the x-th sub-band is denoted S_x, and it is decomposed through the wavelet packet into y frequency components S_x^{y,m}, m = 1, 2, 3, ..., y, so that

S_x = Σ_{m=1}^{y} S_x^{y,m} .

The signal of the sub-band S_x is then reconstructed with the wavelet function ϕ_{x,y}(τ). According to the orthogonal wavelet transform, the signal energy E_x of each reconstructed sub-band is obtained and normalized as above, and the WPE follows from the Shannon information entropy definition:

WPE = −Σ_{x=1}^{z} P_x ln P_x .   (9)

Taking the VM as an example, the WPE results are shown in Figure 4. At both speeds, the WPE also showed a steady downward trend over the course of the movements. The WPE had a highly significant negative correlation with the number of movements, with correlation coefficients of r = 0.853 (slow) and r = 0.829 (fast). When moving at 60°/s, the slope of the linear fit was −0.0144; when moving at 180°/s, the slope of the linear fit was −0.0142.

These results show that, during isokinetic knee flexion and extension at both speeds, the PSE and WPE of the VM declined regularly as fatigue deepened. They therefore characterize dynamic muscle fatigue well and can be used as sEMG features to identify muscle fatigue during dynamic muscle contraction.
The data of the other five muscles were processed with the same method, and the PSE and WPE of all five muscles' sEMG also showed significant negative correlations with the number of movements; the correlation coefficients are listed in Table 3. Therefore, the PSE and WPE of these six muscles can be used as signal features for muscle fatigue identification in isokinetic knee flexion and extension exercise. In the subsequent dynamic fatigue recognition, the PSE and WPE of these six muscles will be used as the fatigue features to build the fatigue recognition models.
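For illustration, the two entropy features can be computed as sketched below. This is not the authors' MATLAB code; the band edges, wavelet, and decomposition level are assumptions, since the text states that the sub-band design was adjusted by the experimenter.

```python
# Illustrative computation of the two fatigue features. PSE uses an FFT-based
# band-energy split; WPE uses a wavelet-packet split via PyWavelets.
import numpy as np
import pywt

def power_spectrum_entropy(signal, fs, band_edges):
    """Shannon entropy of the band-energy distribution of `signal`.
    band_edges: increasing frequency edges in Hz, e.g. [0, 20, 50, 100, 200, 500]."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    energies = [power[(freqs >= lo) & (freqs < hi)].sum()
                for lo, hi in zip(band_edges[:-1], band_edges[1:])]
    p = np.asarray(energies, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

def wavelet_packet_entropy(signal, wavelet="db4", level=4):
    """Shannon entropy of the terminal-node energy distribution of a
    wavelet-packet decomposition."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    energies = np.array([np.sum(np.square(node.data))
                         for node in wp.get_level(level, order="natural")])
    p = energies[energies > 0] / energies.sum()
    return float(-(p * np.log(p)).sum())

if __name__ == "__main__":
    t = np.linspace(0, 1, 1000, endpoint=False)
    x = np.sin(2 * np.pi * 60 * t) + 0.5 * np.random.randn(t.size)
    print(power_spectrum_entropy(x, fs=1000, band_edges=[0, 20, 50, 100, 200, 500]))
    print(wavelet_packet_entropy(x))
```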
Dynamic Fatigue Recognition Based on CNN Fatigue Feature Extraction
Construction of the CNN Model Based on Deep Learning

The sEMG signal is a mixed signal formed by the spatiotemporal superposition, at the electrode, of the motor unit action potentials generated by the excitation of motoneurons [39]. Based on these spatiotemporal characteristics of the sEMG, this study designed the CNN structure shown in Figure 5. In the first convolution layer, a vector-type convolution kernel is used instead of a matrix-type kernel, so that this convolution operation extracts only spatial features; the temporal features are then extracted in the second convolution layer. As shown in Figure 5, the feature extraction thus takes both the temporal and the spatial features of the sEMG signals into account, while the classification part is similar to a back-propagation (BP) neural network.

Taking the 60°/s exercise as an example, the CNN design includes five layers. The first layer is the data input layer; the second and third layers are convolutional layers, which perform the feature extraction; the fourth and fifth layers are fully connected layers, which, together with the feature values output by the third layer, perform the classification. The specific contents of each layer of the CNN are as follows.

The first layer (I1) is the input layer. Each original input sample is a 6 × 120 matrix, where 6 represents the six target muscle channels and 120 the sampling time points of each channel signal.
The second layer (C2) is the convolutional layer. This layer is locally connected to the input layer and performs spatial filtering on the original input samples. The convolution kernel size of layers I1 to C2 is 1 × 6. Ten types of filters are selected; the original input samples are convolved by each filter so there are 10 different types of feature mapping. Therefore, 10 feature maps with the size of 1 × 120 are generated. To separate the mixed spatiotemporal information, the convolution kernel is set as a vector instead of a matrix so that only spatial features are included in the features after the convolution operation.
The third layer (C3) is the convolution-pooling layer. This layer realizes the temporal feature extraction of the sEMG signal by adding local links and weight sharing. To prevent overfitting, we set the convolution kernel size to 1 × 10 and the convolution stride to 10 to reduce the number of parameters and implement pooling operations. For the 10 feature maps in the C2 layer, four convolution filters are used for each. After the mapping, the C3 layer generates 40 different feature maps, each with a size of 1 × 12.
The fourth layer (F4) is the fully connected layer. This layer fully connects the C3 layer and the O5 layer to generate the classification. In this design, the number of neurons is set to 100.
For the 180°/s sample, according to the data preprocessing, the sampling time was 250 ms, and the parameters of each layer of the CNN model were modified accordingly. Each original input sample of the I1 layer is a 6 × 160 matrix. The convolution kernels from the I1 layer to the C2 layer are unchanged, and the size of the feature maps of the C2 layer is correspondingly adjusted to 1 × 160. The size of the feature maps of the C3 layer is 1 × 16. Other settings remain consistent with the CNN model for 60°/s.
Using the above method, the CNN dynamic fatigue model for the two speeds was established.
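A possible re-implementation of this architecture is sketched below in PyTorch. The study itself used MATLAB; the layer sizes follow the text, while the framework and any unspecified details are assumptions.

```python
# A possible PyTorch re-implementation of the CNN described above
# (illustrative sketch, not the authors' code).
import torch
import torch.nn as nn

class ScaledTanh(nn.Module):
    """f(x) = a * tanh(b * x) with a = 1.71159, b = 2/3, as stated in the text."""
    def forward(self, x):
        return 1.71159 * torch.tanh(2.0 / 3.0 * x)

class FatigueCNN(nn.Module):
    def __init__(self, n_channels=6, n_samples=120, n_classes=4):
        super().__init__()
        # C2: vector kernel across the 6 muscle channels -> 10 maps of 1 x n_samples
        self.c2 = nn.Conv2d(1, 10, kernel_size=(n_channels, 1))
        # C3: 4 temporal filters per C2 map (grouped), kernel 1x10, stride 10
        self.c3 = nn.Conv2d(10, 40, kernel_size=(1, 10), stride=(1, 10), groups=10)
        self.act = ScaledTanh()
        self.f4 = nn.Linear(40 * (n_samples // 10), 100)
        self.o5 = nn.Linear(100, n_classes)

    def forward(self, x):              # x: (batch, 1, 6, n_samples)
        x = self.act(self.c2(x))       # -> (batch, 10, 1, n_samples)
        x = self.act(self.c3(x))       # -> (batch, 40, 1, n_samples // 10)
        x = torch.flatten(x, 1)
        x = torch.sigmoid(self.f4(x))
        return torch.sigmoid(self.o5(x))

if __name__ == "__main__":
    model = FatigueCNN()                    # 60 deg/s variant (6 x 120 input)
    out = model(torch.randn(8, 1, 6, 120))
    print(out.shape)                        # torch.Size([8, 4])
```

The 180°/s variant follows by constructing FatigueCNN(n_samples=160), which reproduces the 1 × 16 feature maps of the C3 layer mentioned above.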
Learning and Training of the CNN Model
In this study, the BP method was used to complete the learning and training of the above-mentioned CNN. First, we input the preprocessed training data and obtained the activation value of each neuron according to the forward calculation method. We then carried out the reverse error calculation. Each weight and bias gradient were obtained according to the error. Finally, the original weight and bias values were adjusted based on the new weight and bias gradient.
We denote any neuron in the CNN by β(l, p, q), where l is the layer number, p the p-th feature map of that layer, and q the q-th neuron in that feature map. Accordingly, x_p^l(q) and y_p^l(q) denote the input and output of a neuron, which are related through the activation function f:

y_p^l(q) = f( x_p^l(q) ).

The activation function of the C2 and C3 layers of the CNN was set to the hyperbolic tangent function

f(x) = a · tanh(b x),

where a and b are constants; we took a = 1.71159 and b = 2/3 [40]. The activation function of the F4 layer and the O5 layer of the CNN was the sigmoid function

f(x) = 1 / (1 + e^{−x}).

In the CNN, data are transmitted between the neurons of successive layers. The specific transfer relations are as follows. The first layer (I1), with 6 channels × 120 sampling time points, is represented as Y_{j,t}, where j is the channel number and t the sampling time point.
In the second layer (C2), each feature map is obtained by convolving the I1 layer with a convolution kernel of pre-defined size 6 × 1 and applying the hyperbolic tangent activation:

y_p^2(q) = f( Σ_{j=1}^{6} k_p^2(j) · Y_{j,q} + b_p^2(q) ),

where k_p^2 is the [6 × 1] convolution kernel and b_p^2(q) is the bias. The data transfer of the third layer (C3) is analogous to that of the C2 layer, with a convolution kernel k_p^3 of size [10 × 1] applied with stride 10 along the time axis and a bias b_p^3(q).
In the fourth layer (F4), all the neurons are fully connected with all the neurons in the C3 layer; the input of each F4 neuron is the weighted sum of all C3 outputs plus a bias, passed through the sigmoid activation, where ω_j^4(w) denotes the connection weight between the C3 layer neurons and the F4 layer neurons, and b^4(q) is the bias.
In the fifth layer (O5), all the neurons are fully connected with all the neurons in the F4 layer in the same way, where ω_j^5(w) denotes the connection weight between the F4 layer neurons and the O5 layer neurons, and b^5(q) is the bias.
Next, we initialized the CNN weights and biases to provide the preconditions for effective training and convergence of the network. The connection weights and biases were initialized uniformly in the interval ±1/n(l, p, q)_{Ninput}, where n(l, p, q)_{Ninput} is the number of neurons in the previous layer that are connected to the q-th neuron in the p-th feature map of the l-th layer. The learning rate γ of the C2 and C3 layers was set according to [41] as

γ = λ / ( Nshared_p^l · n(l, p, q)_{Ninput} ),   (17)

where Nshared_p^l represents the number of shared-weight neurons in the p-th feature map of the l-th layer of the network. Finally, the learning rate γ between F4 and O5 was set as

γ = λ / n(l, p, q)_{Ninput}.   (18)

With the weights and biases of the connections between the CNN layers set in this way, the gradient descent method was used to adjust them so as to minimize the fatigue recognition error. We set the maximum number of network iterations to 10,000. During CNN training, the loss function was monitored to determine whether the network had converged, and finally the optimal fatigue recognition model was selected.
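The training and model-selection procedure can be sketched as follows. This is an assumed implementation: the loss function, learning rate, and full-batch gradient descent are our choices, consistent with but not specified by the text.

```python
# Hedged sketch of the training/selection loop: plain gradient descent, up to
# 10,000 iterations, keeping the weights with the lowest validation loss.
import copy
import torch
import torch.nn as nn

def train_cnn(model, x_tr, y_tr, x_va, y_va, n_classes=4, lr=0.01, max_iter=10_000):
    """y_tr / y_va are integer class labels in 0..n_classes-1."""
    onehot = lambda y: torch.nn.functional.one_hot(y, n_classes).float()
    criterion = nn.MSELoss()                      # sigmoid outputs vs. one-hot targets
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    best_state, best_val = None, float("inf")
    for _ in range(max_iter):
        model.train()
        optimizer.zero_grad()
        loss = criterion(model(x_tr), onehot(y_tr))
        loss.backward()
        optimizer.step()
        model.eval()
        with torch.no_grad():
            val = criterion(model(x_va), onehot(y_va)).item()
        if val < best_val:                        # keep the best validation model
            best_val, best_state = val, copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)
    return model, best_val

if __name__ == "__main__":
    net = nn.Sequential(nn.Flatten(), nn.Linear(6 * 120, 4), nn.Sigmoid())
    x = torch.randn(32, 1, 6, 120)
    y = torch.randint(0, 4, (32,))
    trained, best = train_cnn(net, x[:24], y[:24], x[24:], y[24:], max_iter=50)
    print(best)
```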
Experimental Sample Construction
There were 64 subjects in the experiment, and the data of 60 randomly selected subjects were used for modelling. The experimental data of these 60 subjects were divided into five segments in the ratio 3:1:1: three segments (60%) were used as training data, one segment (20%) as validation data, and one segment (20%) as test data. The training data were used to construct the model, the validation data to select the optimal model parameters, and the test data to evaluate the model recognition rate. The data of the remaining 4 subjects were reserved for further model validation and were not used in training.
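A minimal sketch of the 3:1:1 split is given below (assumed implementation; how segments are formed and the random seed are our own choices, and the four held-out subjects are simply kept aside before calling it).

```python
# Sketch: split a list of samples into train/validation/test in the ratio 3:1:1.
import random

def split_3_1_1(items, seed=0):
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    return items[: 3 * n // 5], items[3 * n // 5: 4 * n // 5], items[4 * n // 5:]

if __name__ == "__main__":
    train, val, test = split_3_1_1(range(100))
    print(len(train), len(val), len(test))  # 60 20 20
```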
In this study, while building the CNN model to identify muscle fatigue status, Multi-SVM and Multi-LDA were constructed based on PSE and WPE to identify fatigue. Three types of models were trained with the same training data and were tested on the same test data.
(1) Multi-SVM: the PSE and WPE of the six muscles' sEMG were used as features and classified by an SVM classifier with the Gaussian kernel function:
K(x, y) = exp( −‖x − y‖² / (2σ²) )   (19)

(2) Multi-LDA: the PSE and WPE of the six muscles' sEMG were likewise extracted and then classified by the Multi-LDA classifier. Denote the features of the input sEMG signal by x_i (i = 1, 2, 3, ..., n) and the input sample set by X = {(x_1, y_1), ..., (x_n, y_n)}, where y_i corresponds to the LDA classification label X_a (a = 1, 2, ..., 4). Based on this notation, Multi-LDA seeks the projection W that maximizes the Fisher criterion

J(W) = (Wᵀ S_B W) / (Wᵀ S_W W),

where S_B is the between-class scatter matrix and S_W is the within-class scatter matrix. Maximizing J(W) makes S_B as large and S_W as small as possible, so that the dimension-reduced samples obtain the maximum inter-class distance and the minimum intra-class distance.
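The two baselines can be reproduced, for illustration, with scikit-learn as sketched below. The original models were built in MATLAB; the hyperparameters shown here are assumptions.

```python
# Baseline classifiers corresponding to Multi-SVM and Multi-LDA above.
# Feature vectors: PSE and WPE of the six muscles -> 12 features per sample.
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_baselines(x_train, y_train):
    """x_train: (n_samples, 12) array of [PSE x 6, WPE x 6]; y_train: labels 0-3."""
    svm = SVC(kernel="rbf", gamma="scale")   # Gaussian (RBF) kernel SVM
    lda = LinearDiscriminantAnalysis()       # multi-class LDA
    svm.fit(x_train, y_train)
    lda.fit(x_train, y_train)
    return svm, lda

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(200, 12))
    y = rng.integers(0, 4, size=200)
    svm, lda = fit_baselines(x, y)
    print(svm.predict(x[:5]), lda.predict(x[:5]))
```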
Evaluation Index of Exercise Fatigue Identification
In this study, accuracy (Acc) and the receiver operating characteristic (ROC) were used for evaluation.
Accuracy is the ratio of the number of correctly classified samples to the total number of samples participating in the classification:

Acc = (TP + TN) / (TP + TN + FP + FN),

where TP is the number of true positives, TN the number of true negatives, FP the number of false positives, and FN the number of false negatives. This experiment is a multi-class problem, so we used micro-precision (Pre_m), micro-recall (Rec_m), and micro-F1 score (F1_m) to assess how the three classification models identify the fatigue states. In the multi-class case these three values are equal to one another and to the accuracy, which then becomes the ratio of the sum of the diagonal elements of the confusion matrix to the total number of samples:

Acc = ( Σ_i TP_i ) / All,

where TP_i is the number of correctly classified samples in each category and All is the total number of samples participating in the classification. The closer the ROC curve is to the upper-left boundary, the better the performance of the classification model.
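A sketch of this evaluation (accuracy plus per-class ROC curves) is given below; the helper function and its signature are our own illustration rather than the authors' code.

```python
# Sketch: overall accuracy (equal to micro-precision/recall/F1 here) and
# per-class ROC curves computed from predicted class scores.
import numpy as np
from sklearn.metrics import accuracy_score, roc_curve, auc
from sklearn.preprocessing import label_binarize

def evaluate(y_true, y_pred, y_score, n_classes=4):
    acc = accuracy_score(y_true, y_pred)
    y_bin = label_binarize(y_true, classes=list(range(n_classes)))
    rocs = {}
    for c in range(n_classes):
        fpr, tpr, _ = roc_curve(y_bin[:, c], y_score[:, c])
        rocs[c] = (fpr, tpr, auc(fpr, tpr))
    return acc, rocs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 4, 100)
    y_score = rng.random((100, 4))
    y_pred = y_score.argmax(axis=1)
    acc, rocs = evaluate(y_true, y_pred, y_score)
    print(acc, {c: round(a, 3) for c, (_, _, a) in rocs.items()})
```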
CNN Training Process and Results
The training data were used to train the CNN model. A convergent network model was obtained after training. Taking 60°/s as an example, the loss curve in model training is shown in Figure 6. The abscissa is the number of iterations, and the ordinate is the loss value. The dotted and solid lines indicate the percentages of the loss value of the CNN network at different iterations when the model is trained with the training data and validation data, respectively. As shown in Figure 6, after 3,093 iterations, the validation loss reached the lowest point at 1.8297% and then slightly rose and remained stable with an increasing number of iterations, whereas the training loss decreased slightly and remained stable with an increasing number of iterations. It can be seen that the CNN fatigue recognition model after the 3,093rd iteration is the optimal fatigue classification model for the 60°/s exercise. The 180°/s exercise's optimal fatigue recognition model can be determined in the same way.
Exercise Fatigue Recognition Results Based on Test Samples
To evaluate the performance of the proposed fatigue recognition model, three models were applied to the test samples: (1) Multi-SVM; (2) Multi-LDA; (3) CNN.
The confusion matrices of the subject classification on the test data under the three methods are shown in Figure 7. The numbers in the diagonal cells (gray) of each matrix represent the average percentage of correctly classified samples over all subjects; the numbers in the off-diagonal cells (white) represent the average percentage of incorrectly classified samples over all subjects.
Table 4 lists the recognition results of the three models on the test samples. For the 60°/s samples, the overall recognition accuracy of the CNN model was 91.38%, which was higher than that of Multi-SVM (90.17%) and Multi-LDA (88.85%). For the 180°/s samples, the overall recognition accuracy of the CNN model was 89.87%, which was higher than that of Multi-SVM (89.21%) and Multi-LDA (87.69%). Figures 8 and 9 show the fatigue recognition performance of the three models on the samples of the two speeds: the area under the ROC curve of the CNN model is larger than that of the Multi-SVM and Multi-LDA models, i.e., the CNN model performed better than the other two models.

To evaluate the interaction between the classification models and the fatigue categories and their influence on the classification results, an analysis of variance (ANOVA) over the three classification models × four fatigue categories was conducted, with the confidence level set to 95%. The ANOVA showed no interaction between the classification model and the fatigue category (p > 0.05). The classification model had a significant effect on the classification results (F = 6.32, p < 0.01), whereas the exercise fatigue category had no significant effect (p > 0.05).
Because the CNN processes the original signal directly, it can extract more useful features while reducing the loss of information during data processing; it therefore shows a higher recognition accuracy than the other two classification models [42].
Verification of the CNN Exercise Fatigue Recognition Model Based on New Samples
According to the comprehensive evaluation of the above test results, the fatigue recognition model based on CNN exercise fatigue feature extraction had better recognition performance than the other models. To further verify its practical applicability, the experimental data of the remaining four participants were input into the CNN fatigue recognition model. The results showed that the average recognition accuracy of the CNN model for the four participants was 86.75% ± 4.01% for the 60°/s movement and 84.86% ± 4.17% for the 180°/s movement. The mean ROC curves of the classification results at the two speeds are shown in Figure 10.
Compared with the recognition results on the test data, the CNN model's recognition accuracy for the remaining four participants decreased. The reason may be that the training sample was not large enough. After adding the data of these four participants to the training set, the model was tested again, and the recognition accuracy increased to 92.16%. Therefore, before future application, the training samples should be increased to obtain a high recognition accuracy.
Conclusions
In this paper, we constructed three types of fatigue recognition models: the first two, Multi-SVM and Multi-LDA, were based on manual feature extraction, and the third was a CNN recognition model using the original sEMG signal. Two nonlinear dynamic indices, PSE and WPE, were manually extracted from the sEMG and used to train Multi-SVM and Multi-LDA, while the sEMG data were used directly to train the CNN models; the recognition accuracies of the three models were then compared. The results showed that, on the test data, the average recognition accuracy (91.38%) and the area under the ROC curve of the CNN fatigue recognition model were larger than those of the other two recognition models, indicating that the CNN model has better classification performance. Finally, after training with four more samples, the recognition accuracy of the CNN recognition model reached 92.16%. The results verified that the CNN dynamic fatigue state recognition model based on subjective and objective information feedback presents satisfactory recognition performance.
This paper makes several contributions to the literature. First, this study realizes the recognition of four muscle fatigue states in resistance strength training, which, to our knowledge, no previous study has achieved. This lays a foundation for the design and development of intelligent fitness equipment and personalized fitness guidance. Second, the results provide two nonlinear dynamic indices, PSE and WPE, which could be used in further studies to build more accurate classifiers. Third, our study gives a dynamic muscle fatigue recognition method with relatively high accuracy and low time cost, which makes it easier to commercialize.
Future Prospects
This research can provide theoretical support for the design and research of wearable devices in the future and help to develop a more effective fitness training model. However, this experiment has several limitations: (1) Only knee flexion and extension exercises were used as the target, and other training movements need to be studied; (2) Isokinetic training was investigated in this experiment, so other training methods in anaerobic training need to be studied; (3) The number of participants was small. Subsequent research should increase the number of participants to train the recognition model; (4) The classifiers of this research were limited. In addition to the classifiers used in this study, there are ANN, LSTM, GSVCM, etc. The practical application effect of these classifiers remains to be verified. In the future, more research about these classifiers will be carried out.
Author Contributions: All authors contributed equally to this work and were involved at every stage in its development. All authors have read and agreed to the published version of the manuscript.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of the Industrial Design Department, Zhejiang University of Technology.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Written informed consent statements have been obtained from all subjects to publish this paper.
Data Availability Statement:
Owing to a confidentiality agreement, the data will not be disclosed.
A New Approach to the Geometrization of Matter
We show that the sum over geometries in the Lorentzian 4-D state sum model for quantum GR in [1] includes terms which correspond to geometries on manifolds with conical singularities. Natural approximations suggest that these can be interpreted as gauge bosons for the standard model, plus fermions, plus additional matter.
Introduction
Over the last few years, a new model for the quantum theory of gravity has appeared. The model we are referring to is in the class of spin foam models; more specifically, it is a Lorentzian categorical state sum model [1]. It is based on state sums on a triangulated manifold, rather than differential equations on a smooth manifold. The model has passed a number of preliminary mathematical hurdles; it is actually finite on any finite triangulation [2]. The biggest hurdle it still has to overcome is an explicit physical interpretation, or, differently put, a classical limit.
The purpose of this paper is to outline a radically new way to include matter in this type of theory. While it may seem premature in light of the abovementioned hurdle, it is extremely natural in the setting of the model, and perhaps easier to find than the classical limit itself. If one accepts the approximate arguments we make, the bosonic part of the standard model, rather than any random collection of matter fields, is what appears. The approach yields a fermionic sector as well, but we do not yet understand it. Also a natural family of candidates for dark matter appears in it. The crucial point of departure for this paper is the observation that for a discrete state sum, unlike for a Lagrangian composed of continuum fields, there is no need for the spacetime to consist entirely of manifold points. We find that in investigating the kind of singular points which the model naturally allows a number of intriguing parallels to the standard model arise.
The realization that a specific possibility appears for including matter in the model came as two complementary points of view on the construction of the model met. The first, the quantum geometric point of view, is an interpretation of the categorical state sum as a sum over Lorentzian discrete quantum geometries [1]. The second, the Group Field Theory picture [3], interprets the state sums on particular triangulations as Feynman diagrams for a quantum field theory on a group manifold. The cross fertilization of these two approaches, as discussed in [2], was an important motivation for the finiteness proof. The sum over Feynman diagrams in the GFT picture can be interpreted as a superposition of quantum geometries in the LCSS point of view.
However, there is an important discrepancy between the two pictures. Not every Feynman diagram in the GFT picture corresponds to a manifold. The most general diagram, as we shall discuss below, is a manifold with conical singularities over surfaces propagating along paths and connecting at vertexlike cobordisms of the surfaces.
Thus we are faced with a dilemma. We must take one of the following paths: 1. Restrict the class of Feynman diagrams we sum over in the GFT picture in a nonlocal and unnatural way; 2. Abandon the GFT picture altogether, and try to add matter to the LCSS picture; or 3. Reinterpret the conical singularities in the GFT picture as matter.
Thus, within the line of development we are pursuing, the new proposal for matter, namely that it results from geometric excitations at conical singularities, is actually the most parsimonious (as well, needless to say, as the most optimistic) possibility to consider.
Put differently, the GFT picture seems to be telling us to take the departure of the LCSS model, namely substituting a superposition of discrete geometries for a continuum picture, to its logical conclusion of including all simplicial complexes.
The development of the LCSS approach to quantum gravity has proceeded, rather surprisingly, at the mathematical level of rigor. The finiteness result cited above is a theorem. Regrettably, the proposal in this paper cannot be formulated as rigorously at this point. We are only able to progress by making approximations. However, it is at least possible to state the results as conjectures, which a careful study of some well defined integrals could in the future prove.
Since we are making a radical departure from existing lines of development towards a fundamental understanding of matter fields, we preface our proposal with a brief historical discussion, which shows that the new suggestion is not as far conceptually from other approaches as might at first appear. This is what motivates the phrase "geometrization of matter" in our title.
Matter and Space
Our current understanding of the physics of matter is rooted in the idea of symmetry. Fields in quantum field theories are determined by their quantum numbers, which index representations of the symmetries of the theory. Interactions (vertices in Feynmanology) are linear maps on state spaces which intertwine the action of the symmetries, or as physicists like to say "are not forbidden by the symmetries of the theory." Our ideas about symmetry are much older than quantum field theory and derive from our experience of space. Already in the nineteenth century, mathematicians had the idea that different types of geometry correspond to different types of symmetry.
When mathematicians and physicists have tried to understand the symmetries of quantum physics, they have invariably resorted to explaining matter fields in terms of one or another sort of geometry. Aside from the manifestly spacetime symmetries of spin and energy-momentum, every approach to find a fundamental explanation of the internal or gauge symmetries has invoked one or another geometric setting.
Thus gauge theory is formulated as the geometry of vector bundles, Kaluza Klein theories resort to higher dimensional spacetime, supergravity is based in superspace, string theory originally lived in the geometry of loop spaces, while its M theoretic offspring seem to be dwelling in bundles over manifolds of various dimensions again, perhaps with specified submanifolds as well.
One could also mention noncommutative geometry, which studies deformations both of families of symmetries and of the spaces they act on to noncommutative C* algebras.
Nevertheless, at this point, we cannot say that any of these approaches have really succeeded. A particular difficulty in many of them has been the failure of the standard model to emerge from a limitless set of possibilities.
We want to propose that this historical survey suggests the following points: 1. Our understanding of symmetry is so rooted in geometry that if the fundamental theory of matter is not geometrical we will not find it anytime soon.
2. We need to try a different type of geometry, and hope to get lucky as regards the standard model.

Now we want to claim that the ideas of quantum geometry which have developed in the process of understanding the LCSS/GFT models point to a natural generalization of the geometry of spacetime, namely simplicial complexes. In this new geometric framework, it is plausible that the standard model emerges naturally. To reiterate, the shift from smooth manifolds to simplicial complexes is natural because we have substituted combinatorial state sums for differential equations.
The topology of a class of simplicial complexes.
The GFT picture is a generalization of the idea of a state sum attached to a triangulation of a 4-manifold. The picture is to think of a triangulation of a 4-manifold as a 5-valent graph with each edge of the graph refined into a bundle of 4 strands. Different matchings of the strands at an edge are different diagrams.
The vertices of the diagram correspond to the 4-simplices, the edges are the 3-simplices, and the strands are the faces of the 3-simplices. Mathematicians would describe this as the dual 2-skeleton of the triangulation, following strands around until they close into loops, and attaching disks along the loops.
A crucial observation is that the LCSS model in [1], unlike the topological models that preceded it [4], requires only the combinatorial data of the dual 2-skeleton to formulate it, since it has no terms on the edges or vertices of the triangulated manifold. The GFT picture actually goes beyond this and produces all possible strand diagrams as terms in an expansion of a field theory into Feynman graphs.
A standard argument from PL topology tells us when the complex we would build up from such a diagram by adding simplices of dimensions 1 and 0 is a manifold: the links of all simplices must be spheres of the appropriate dimension. (The combinatorial picture described above in fact tells us how they should be added.) In the situation we are considering, this will be automatically satisfied for the 4-, 3-, and 2-simplices; the link of a 1-simplex can be any 2-manifold, and the link of a vertex can be any 3-manifold with conical singularities on the surfaces corresponding to the links of the incident edges. The links of vertices can also be described as 3-manifolds whose boundary components are the links of the edges incident to the vertex, leaving the cones out for simplicity.
For the nonmathematical reader, we note that a cone over any space is the cross product of an interval with the space, with the copy at one end of the interval contracted to a point. A point in a manifold has neighborhoods which are homeomorphic to the cone over a sphere, which is just a ball. A point with a neighborhood homeomorphic to a cone over some other manifold is not a manifold point, and is referred to as a conic singularity. In the case of an isolated singularity, the submanifold over which the cone is constructed is called the link of the point. If instead we have a simplex crossed with a cone on a lower dimensional submanifold, it is the link of the simplex. In a triangulated manifold, all links of simplices are spheres of appropriate dimension.
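For convenience, the construction just described can be written compactly (a standard definition, recorded here only as a reminder): the cone over a space X is

    C(X) = (X × [0,1]) / (X × {0}),

so that C(S^{n-1}) ≅ D^n is a ball, while a point whose link L is not a sphere has a neighborhood of the form C(L) and is therefore a conical singularity.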
The proposal we are making is to interpret the web of singularities in such a complex as a Feynman graph; that is to say, we want to interpret the low energy part of the geometry around the cones over surfaces as particles, and the 3-manifolds with boundary connecting them as interaction vertices.
We shall make an attempt at this below, using several approximate techniques. At this point, we wish to underscore the extreme parsimoniousness of this proposal. Nothing is added to the model for quantum GR: no extra dimensions, no larger group. We simply allow a naturally larger set of configurations. Once we abandon smooth manifolds for PL ones, there is really no reason not to allow such configurations. In the GFT picture, where geometries appear as fluctuations of a nongeometric vacuum, they are on an equal footing.
Conical matter
Now we want to get some picture of what degrees of freedom would appear on a conical singularity in the model of [1]. Since we need to average over all triangulations this is not easy. Also, in order to obtain a model to compare with particle physics as we see it today, we need to describe a universe which has cooled enormously from the Planck temperature, i.e. we need a low energy limit of the model. At present we do not know how to abstract such a limit from the model directly, so we approach this problem by making use of the connection between the discrete models for TQFTs in [4] and the model in [1].
A TQFT is an automatic solution to the renormalization group, in the sense that if we make a refinement of the triangulation on which it is computed the result is unchanged. We want to suggest that the state space of low energy states which survive summing over refinements of triangulation of a cone over a surface is given by the space of states for an associated 2+1 dimensional TQFT on the surface with a puncture. (The puncture would allow information to flow out, thus imitating the conical singular point. The state space for a TQFT with puncture is larger than the one for a closed surface in a TQFT.) This should be taken as a physical hypothesis at this moment. One reason for believing it is that the LCSS model in [1] is itself obtained by constraining a TQFT.
Then there is the question of what TQFT to expect. Since the state sum in [1] is built from the unitary representations of SL(2,C), which is a sort of double for SU(2), the TQFT should be the one for SU_q(2) × SU_q(2), i.e., a left-right symmetric TQFT produced from a quantum group in the by now standard way. This is also the TQFT we constrain to produce the Euclidean signature model for quantum general relativity in [5].
Approximating limits of theories by states of other theories is not an unknown technique in theoretical physics. At this point we do not know how to set q or what value it should take. We could introduce a q into our original model by passing to the Quantum Lorentz Algebra [7]. It may also be that a q emerges from the poorly understood limit of low energy in the model, as a cosmological constant. As we note below, certain choices for q have interesting implications for the particle content of the low energy theory.
We believe that in the future it may be possible to make a stronger argument for this. The reason has to do with the relationship between conformal structures on a surface and flat Lorentzian metrics on the cone over the surface. A Riemann surface can be obtained by quotienting the hyperboloid in three dimensional Minkowski space by a discrete subgroup of the 2+1 dimensional Lorentz group, which is isomorphic to SL(2,R). Quotienting the entire forward timelike cone by the same group yields a flat Lorentzian metric on the entire cone over the surface, except that the conical singularity (the origin in Minkowski space), is not a manifold point, so naive definitions of metrics fail there. Thus, the approach to producing CSW theory by quantizing a bundle over Riemann moduli space could be interpreted as a quantization over the space of flat geometries around a conic singularity. States arising from effects around flat geometry should be important in understanding the low energy behavior of the model. This argument will be difficult because it will be necessary to treat the effect of the singular point, so we do not attempt it here. We will make further use of the relationship between flat Lorentzian metrics on a cone and constant negative sectional curvature (hyperbolic) metrics on the boundary of the cone in what follows.
At this point, let us note that the space of states assigned to a once punctured torus by a TQFT is a very special object. As demonstrated in [6], it is always a Hopf algebra object in the category associated to the TQFT. In the case of the TQFT produced from SU (2) q , also known as the CSW [8] model, it is a sum of matrix rings, one at each dimension, up to the cutoff determined by q. The unitary part of this has been suggested by Connes and Lott [9] as a natural origin for the gauge symmetry of the standard model.
Thus, according to our ansatz using TQFT states, we find a copy of the gauge bosons of the standard model in the states on a toroidal conical singularity. If we choose q in such a way as to get exactly 3 matrix blocks in our space [10], we could get exactly the standard model; otherwise we could be led to the conjecture that the standard model is really part of a gauge theory with group U(1) + SU(2) + SU(3) + SU(4) + ..., where particles charged in the higher dimensional pieces acquire very large masses and are therefore unseen.
It is therefore interesting to ask what sort of interaction vertices toroidal and other conical singularities might admit. Are the toroidal singularities special, as compared to the higher genus ones?
Hyperbolic manifolds and interaction vertices for conic matter.
We remind the reader that we are interpreting regions which look like a conic singularity over a surface crossed with an interval as propagating particles. Now we want to think of the vertices where such topologies meet as interaction vertices. As we explained above, the regions around these vertices are cones over 3-manifolds with conic singularities over surfaces.
We now want to propose a second approximation. The low energy vertices corresponding to these cones over 3-manifolds should be dominated by the flat Lorentzian metrics on them. The physical argument justifying this is that topologies which did not admit flat geometries would become very high energy as we summed over refinements of the triangulation. In the related context of 3d manifolds discussed above we noted a possible connection between this approximation and the TQFT ansatz. Now we discover an interesting connection. Flat Lorentzian geometries on the cone over a 3-manifold arise in a natural way from hyperbolic structures on it. This is because hyperbolic structures can be recovered as the quotient space of the forward timelike hyperboloid of Minkowski space by discrete subgroups of SL(2,C) acting isometrically on it. Extending the action to the entire forward cone yields a flat Lorentzian 4-geometry on the cone. If we do a similar construction to produce a 3-manifold with boundary, we obtain a conformal (=hyperbolic) structure on the two dimensional boundary of the 3-manifold at the same time. Thus we are led to a picture where we match the hyperbolic structures on the surfaces linking the edges to the hyperbolic structures assigned to the boundary components of the 3-manifolds linking the vertices to obtain flat geometries surrounding the entire singular part of a 4-D simplicial complex which could arise in our model.
An interesting theorem about hyperbolic structures on 3-manifolds with boundary, called Mostow rigidity [11], tells us that the degrees of freedom of a hyperbolic structure on the bulk are exactly the degrees of freedom of the conformal structure on the boundary components. This means that when we sum over flat geometries in our situation, we get a multiple integral over Teichmuller parameters. This produces a sort of mathematical convergence with the Polyakov approach to string theory. We do not yet know whether any deeper connections to string theory will result when we go on to quantize over the space of flat structures. Now we make another critical observation: the only complete hyperbolic 3-manifolds with finite volume are the ones whose boundary components are tori and Klein bottles [12]. We believe that infinite volume metrics would not make an important low energy contribution to the model, while incomplete hyperbolic metrics would not match flatly at the surfaces linking the edges.
This leads us to a picture in which the low energy interacting world would contain only toroidal and Klein bottle singularities, leaving the higher genus surfaces to decouple and form dark matter. Since our TQFT ansatz suggested that the states on tori could reproduce the gauge bosons for the standard model, while the Klein bottle, being nonorientable, would produce fermionic states, this yields a picture with many similarities to the standard model plus dark matter. We have not yet tried to find an argument for the state space on a Klein bottle.
We would also like to find an approximate argument for how TQFT states might propagate across a vertex described by some cobordism between the incoming and outgoing surfaces. The most obvious would be to simply take the linear map between the surface states given by the TQFT itself. It is interesting to note that for a particularly simple cobordism from two tori to a third this would just give the multiplication of the associative algebra we mentioned above, yielding the gauge algebra of the standard model.
Conclusions
It is clear that the arguments presented here to analyse the behavior of the LCSS model near singular points are very preliminary; it would be rash to jump to the conclusion that we had the unified field theory in hand. Nevertheless the trilogy of standard model bosons, fermions, and weakly interacting higher genus states, especially embedded in a plausible model for quantum general relativity, cannot be ignored.
It does seem safe to say that the simplicity of this model makes it an interesting problem for mathematical physics to study. The connection between particle interactions and hyperbolic structures on 3-manifolds has the advantage that it poses problems for a mathematical subject which has been deeply studied from several points of view and concerning which much is known [12]. It is interesting to note that the question of hyperbolic structure on 3-manifolds with toroidal boundaries is deeply connected to knot theory. We may find knotted vertex structures play a role in this theory; interestingly, they are chiral.
Given the finiteness proof in [2], the conjectures in this paper pertain to limits of families of finite integrals; at least in principle there is reason to hope they can be rigorously formulated and proven.
The fact that the families of flat structures which appear here are parametrized by the Teichmuller parameters on the boundary (Mostow rigidity) means that the subject takes on an unexpected mathematical resemblance to string theory, although it is "world sheets" rather than loops which propagate through spacetime.
We do think the moral can be drawn from this model that there are more possibilities for forming fundamental quantum theories of nature than contemporary theoretical physics seems to recognize.
Machine Learning Classification of Price Extrema Based on Market Microstructure and Price Action Features. A Case Study of S&P500 E-mini Futures
The study introduces an automated trading system for S&P500 E-mini futures (ES) based on state-of-the-art machine learning. Concretely: we extract a set of scenarios from the tick market data to train the models and further use the predictions to statistically assess the soundness of the approach. We define the scenarios from the local extrema of the price action. Price extrema are a commonly traded pattern; however, to the best of our knowledge, no existing study presents a pipeline for their automated classification and profitability evaluation. Additionally, we evaluate the approach in a simulated trading environment on historical data. Our study fills this gap by presenting a broad evaluation of the approach, supported by statistical tools that make it generalisable to unseen data and comparable to other approaches.
Introduction
As machine learning (ML) takes over virtually every aspect of our lives, we are now able to automate tasks that previously required human intervention. A field in which it has quickly gained traction and popularity is finance [18]. This field, dominated by organisations with extensive expertise, knowledge and assets, is often considered out of reach for individuals due to the complex decision making and high risks involved. However, if one sets aside the financial risks, the people, the emotions and the many other aspects involved, the core process of trading can be reduced to decision making under pre-defined rules and contexts, making it a perfect ML scenario.
Most present-day trading is done electronically through various available applications. Market data is propagated by the trading exchanges and handled by specialised trading feeds that keep track of trades, bids and asks by the participants of the exchange. Different exchanges provide data in different formats following predetermined protocols and data structures. Finally, the data is relayed back to a trading algorithm or a human to make trading decisions. Decisions are then relayed back to the exchange through a gateway, normally by means of a broker, which informs the exchange about the wish to buy (long) or sell (short) specific assets. This series of actions relies on a predetermined protocol which allows communication between the various parties. Several software tools exist that handle almost all of these steps automatically, with the trading decision itself being the single step left to the individual. After a match is made (either bid to ask or ask to bid) with another market participant, the match is conveyed back to the software platform and the transaction is completed. In this context, the main goal of ML is to automate the decision making in this pipeline.
When constructing algorithmic trading software, or an Automated Trading Pipeline (ATP), each of the components of the exchange protocol needs to be included. Speed is often a key factor, as a full round of the protocol may take as little as a few milliseconds, so a robust ATP must treat time as a first-class concern. This extra layer adds further complexity to the machine learning problem.
A diagram of what an ATP looks like in practice is presented in Fig. 1.

[Figure 1: Diagram of an Automated Trading Pipeline; the original figure labels include Order Book, Bid and Ask.]

In Fig. 1 it can be observed that the main ML component is focused on training the decision making and the strategy. This is by no means a straightforward feat, as successful strategies are often jealously guarded secrets owing to the potential financial profits. Several different components are required, not least analysing the market to establish components of interest. Historical raw market data contains unstructured information allowing one to reconstruct all the trading activity; however, that is usually not enough to establish persistent and predictable price-action patterns due to the market non-stationarity. This characterisation is a complex process which requires guidance and domain understanding. While traditional approaches have focused on trying to learn from the market time series over a whole year, or potentially across dozens of years, more recent work has proposed the use of data manipulation to identify key events in the market [40]; this advanced categorisation can then become the focus of the machine learning input to improve performance.
This methodology focuses on identifying states of a financial market, which can then be used to identify points of drastic change in the correlation structure, whether positive or negative. Previous approaches have used these states to correlate them with worldwide events and general market values in order to categorise interesting scenarios [6], showing that with these techniques the training of the strategy can be greatly optimised. At the same time, there is a lack of research proposing a full-stack automated trading platform and evaluating it using statistical methods. In this work we build upon the existing body of knowledge by proposing an approach for the extraction and classification of financial market patterns based on price action and market microstructure. Moreover, we quantify the effects of the approach by using effect sizes, as well as perform hypothesis testing.
The contributions of this paper are as follows: 1) a methodology to construct an automated trading platform using state-of-the-art machine learning techniques; 2) we present an automated market profiling technique based on machine learning; 3) we formulate research questions and assess them statistically; and 4) we propose and evaluate the performance of a futures trading strategy based on market profiles.
The remainder of the paper is structured as follows: Section 2 provides the financial and machine learning background needed to understand our approach; Section 3 describes related work; Section 4 states the research questions of the paper; Section 5 presents the studied data, the automated market profiling approach and the assessment methodology; Section 6 details the results of our constructed ATP; Section 7 discusses the results, their implications and limitations; and Section 8 concludes the work.
Background
There are several different types of financial data, and each of these has a different role in financial trading. They are widely classified into four categories: i. Fundamental Data, formed by the set of documents (for example financial accounts) that a company has to send to the organisation that regulates its activities; this is most commonly accounting data of the business. ii. Market Data, which constitutes all trading activities that take place, allowing one to reconstruct a trading book. iii. Analytics, which is often derivative data acquired by analysing the raw data to find patterns, and can take the form of fundamental or market analytics. iv. Alternative Data, which is extra domain knowledge that might help with the understanding of the other data, such as world events, social media, Twitter and any other external sources. In this work we analyse type ii data to extract patterns and construct market profiles, and therefore focus our background on this data type; a more comprehensive review of the different data types is available in Lopez de Prado [12].
Data pre-processing
In order to prepare data for processing, the raw data is structured into predetermined formats to make it easier for a machine learning algorithm to digest. There are several ways to group data, and various different features may be aggregated. The main idea is to identify a window of interest based on some heuristic, and then aggregate the features of that window to get a representation called a Bar. Bars may contain several features, and it is up to the individual to decide which features to select; common ones include: bar start time, bar end time, sum of volume, open price, close price, and min and max (usually called low and high) prices, plus any other features that might help characterise the trading performed within the window. The choice of how to select this window can make or break an algorithm, as it determines whether the data is useful or not representative of the market. An example would be using time as the metric for the bar window, e.g. taking n-hour snapshots; however, given that there are active and non-active trading periods, only some bars may actually be useful under this scheme. In practice, a widely accepted way to construct bars is based on the number of transactions that have taken place or the volumes traded. This allows the construction of informative bars which are independent of timings and provide a good sampling of the market, since sampling is done as a function of trading activity. There are of course many other ways to define a bar [12], so it is up to the prospective user to select one that works for their case.
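To make the bar construction concrete, the following minimal sketch aggregates raw ticks into volume bars. It is an illustration only, not the exact scheme used later in this paper, and it assumes a pandas DataFrame with a datetime index and 'price' and 'volume' columns plus a target traded volume per bar.

```python
import pandas as pd

def volume_bars(ticks: pd.DataFrame, bar_volume: int) -> pd.DataFrame:
    """Group ticks into bars of roughly `bar_volume` traded contracts each."""
    bar_id = ticks['volume'].cumsum() // bar_volume   # bar index for every tick
    grouped = ticks.groupby(bar_id)
    return pd.DataFrame({
        'start_time': grouped.apply(lambda g: g.index[0]),
        'end_time':   grouped.apply(lambda g: g.index[-1]),
        'open':   grouped['price'].first(),
        'high':   grouped['price'].max(),
        'low':    grouped['price'].min(),
        'close':  grouped['price'].last(),
        'volume': grouped['volume'].sum(),
    })
```

Because the bar index is driven by cumulative traded volume rather than wall-clock time, quiet periods produce few bars and active periods produce many, which is exactly the activity-based sampling described above.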
Discovery of the price extrema
In mathematics an extremum is any point at which the value of a function is largest (a maximum) or smallest (a minimum). These can be either local or global extrema. At a local maximum the value is larger than at immediately adjacent points (and smaller at a local minimum), while at a global extremum the value of the function is larger (or smaller) than at any other point in the interval of interest. If one wanted to maximise profits in theory, the intent would be to identify an extremum and trade at that point of optimality, i.e. the peak. This is one of the many ways of defining points of optimality.
As far as the algorithms for an ATP are concerned, they will often perform active trading, so finding a global extremum serves little purpose. Consequently, local extrema within a pre-selected window are chosen instead. Several complex algorithms exist for this, with use cases in many fields such as biology [15]. However, the objective is actually quite simple: identify a sample for which the neighbours on each side have lower values for maxima, and higher values for minima. This approach is very straightforward and can be implemented with a linear search. In the case of flat peaks, where several entries are of equal value, the middle entry is selected. Two further metrics of interest are the prominence and the width of a peak. The prominence of a peak measures how much it stands out from the surrounding baseline of nearby entries, and is defined as the vertical distance between the peak and its lowest contour point. The width of a peak is the distance between its lower bounds, signifying the peak's duration. In the case of peak classification, these measures can help a machine learning estimator relate the obtained features to the discovered peaks; this avoids attempts to directly relate properties of narrow or less prominent peaks with wider or more prominent ones. These measures allow for the classification of good points of trading, while prominence and width also give insight into what led to this classification.
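A minimal sketch of such a peak search is given below, using scipy; the prominence and width constraints are illustrative placeholders to be tuned per instrument and window, and minima are found by running the same search on the negated price series.

```python
import numpy as np
from scipy.signal import find_peaks

def local_extrema(prices, min_prominence=2.0, width_range=(100, 400)):
    """Indices of local maxima and minima of a price series, filtered by
    prominence (how much a peak stands out) and width (its duration)."""
    prices = np.asarray(prices, dtype=float)
    maxima, max_props = find_peaks(prices, prominence=min_prominence, width=width_range)
    minima, min_props = find_peaks(-prices, prominence=min_prominence, width=width_range)
    return maxima, minima, max_props, min_props
```

The returned property dictionaries contain, among other fields, the prominences and widths of the detected peaks, which can be passed on as features to the classifier.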
Derivation of the market microstructure features
Market microstructure is the study of financial markets and how they operate. Its features represent the way the market operates, how decisions are made about trades, the price discovery process and much more [32]. Market microstructure analysis is the identification of why and how market prices will change, in order to trade profitably. Relevant features may include: 1) the time between trades, as it is usually an indicator of trading intensity [3]; 2) volatility, which might provide evidence of good and bad trading scenarios, as high volatility may correspond to an unsuitable market state [23]; 3) volume, which may directly correlate with trade duration, as it might represent informed trading rather than merely high-volume active trading [37]; and 4) trade duration, since high trading activity is related to a greater price impact of trades and faster price adjustment to trade-related events, whilst slower trades may indicate informed single entities [16]. Whilst several other options are available, they are often instrument-related and require expert domain knowledge. In general it is important to tailor and evaluate the features to the specific scenario identified.
One important scenario to consider when working with prices is the aggressiveness of buyers and sellers. In an order book, a match implies a trade, which occurs whenever a bid matches an ask (and conversely); however, the trade is only ever initiated by one party. In order to determine who the aggressor is in this scenario (if this is not annotated by the marketplace), the tick rule is used [1]. The rule labels a buy-initiated trade as 1 and a sell-initiated trade as -1. The logic is the following: the first trade is assigned an arbitrary initial label l = 1; afterwards, if a trade occurs and the price change is positive, l = 1; if the price change is negative, l = -1; and if there is no price change, the previous label is carried over. This has been shown to identify the aggressor with a high degree of accuracy [17].
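A short sketch of the tick rule as just described, assuming a plain sequence of trade prices:

```python
def tick_rule(prices):
    """Label each trade as buyer-initiated (+1) or seller-initiated (-1):
    the sign of the price change, with the previous label carried over
    when the price does not change (first trade gets an arbitrary +1)."""
    labels, last = [], 1
    prev = None
    for price in prices:
        if prev is not None and price > prev:
            last = 1
        elif prev is not None and price < prev:
            last = -1
        labels.append(last)
        prev = price
    return labels

# Example: rising, unchanged, falling prices
print(tick_rule([100.0, 100.25, 100.25, 100.0]))  # [1, 1, 1, -1]
```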
Machine learning algorithms
Machine learning is a field that has come to touch almost every aspect of our lives, from personal voice assistants to healthcare, so it comes as no surprise that it is also gaining popularity in algorithmic trading. There is a wide range of machine learning techniques, from very simple ones such as regression to those used for deep learning such as neural networks. Consequently, it is important to choose an algorithm suited to the problem one wishes to tackle. In ATPs, one of the possible roles of machine learning is to identify situations in which it is profitable to trade, depending on the strategy; for example, when using a flat strategy the intent is to identify when the market is flat. In these circumstances, and due to the potentially high financial implications of false negatives, understanding the prediction is key. Understanding the prediction means being able to explain why the algorithm made a given decision. This is a non-trivial issue and very difficult for a wide range of techniques, neural networks being the prime example (although advances are being made [53,19]). Perhaps one of the simplest yet highly effective techniques is the Support Vector Machine (SVM). SVMs identify the hyperplane that best separates a binary sample: if we imagine a set of points mapped onto a 2D plane, the SVM finds the line that best separates the two classes. This technique can easily be extended to higher-dimensional data, and since it is so simple, the reason behind a classification is intuitive to see. However, whilst popular for financial forecasting [49,27], it suffers from the drawback of being very sensitive to parameter tuning, making it harder to use, and it does not work with categorical features directly, making it less suited to complex analysis.
Another popular approach is decision trees. Tree-based approaches are hugely popular because they are directly interpretable. To improve the efficacy of this technique, several different trees are trained and used in unison to come up with the result. The most popular example is Random Forest, which operates by constructing a multitude of decision trees at training time and outputting the aggregated classification of the individual trees. However, it suffers from the fact that the different trees are not weighted and contribute equally, which might lead to inaccurate results. One class of algorithms which has seen mass popularity for its robustness, effectiveness and clarity is boosting. Boosting combines many weak classifiers into a stronger one: the data is split into n samples, either randomly or by some heuristic, each tree is trained on one of the samples, and the results of the trees are used in an ensemble fashion to make the final prediction, so that each individual weak classifier contributes to a final, much more powerful, combined classifier. Finally, each tree's features are internally evaluated, leading to a weakness measure which dictates its overall contribution to the result.
The first use of boosting with a notion of weakness was introduced by AdaBoost [21], which presented the concept of combining the outputs of the weak learners into a weighted sum that represents the final output of the boosted classifier. This allows for adaptive analysis, as subsequent weak learners are tweaked in favour of those instances misclassified by previous classifiers. Following on from this technique, two others were introduced, XGBoost [9] and LightGBM [29]; both libraries gained a lot of traction in the machine learning community for their efficacy and are widely used. In this category, the most recent algorithm is CatBoost [42]. It is highly efficient and less prone to bias than its predecessors, and it is quickly becoming one of the most used approaches, in part due to its high flexibility. CatBoost was specifically proposed to address issues in the previous approaches which led to target leakage and, in turn, to overfitting. This was achieved by using ordered boosting, a new technique allowing independent training and avoiding leakage, while also offering better performance on categorical features.
Feature and prediction analysis
Feature analysis is the evaluation of the input data to assess its effectiveness and contribution to the prediction. It may also take the form of creating new features using domain knowledge to enrich the data. The features in the data directly influence the predictive models used and the results that can be achieved. Intuitively, the better the chosen and prepared features, the better the achievable results. However, this may not hold in every scenario, as too large a dimensionality of the data may lead to overfitting. The process of evaluating and selecting the best features is referred to as feature engineering. One of the methods supporting feature engineering is feature importance evaluation. The simplest way to achieve this is feature ranking [24]: in essence, a heuristic is chosen, each feature is assigned a score based on this heuristic, and the features are ordered in descending order. This approach, however, may be problem-specific and require prior domain knowledge. Another common approach is the use of correlations to evaluate how the features relate to the output. The intent is to evaluate the dependency between a feature and the result, which intuitively might identify features that contribute more to the output [4]. However, these approaches evaluate each feature as a single component in relation to the output, independently of the other features. Realistically, one would want to understand the features as a whole and see how they contribute to a prediction as a group.
Beyond the initial understanding of the features, it is important to get an understanding of the prediction. Compared to the previously discussed approaches, this starts from the result of the model and traces back to the features to see which ones contributed to the prediction. This has advantages over pure feature analysis, as it can be applied to each of the different predictors individually and gives insights into the workings of the predictor. Recent advances in this direction, namely SHAP (SHapley Additive exPlanations) [36], are able to provide a per-prediction scoring of each feature. This technique allows a step-through assessment of features throughout the different predictions, providing guided insight which can also be averaged for an overall assessment. This is very useful for debugging an algorithm, assessing the features and understanding the market classifications, making it particularly relevant for this case study.
Trading strategy
In this context we only refer to common strategies for active trading. Active trading seeks to gain profit by exploiting price variations in order to beat the market over short holding periods. Perhaps the most common approach is trend-based strategies. These strategies aim to identify shifts in the market towards a rise or decrease in price and to enter and exit positions at points where they are likely to gain profit. The second common approach is called a flat strategy. Unlike trending markets, a flat market is a stable state in which the range of the broader market moves neither higher nor lower, but instead trades within the boundaries of recent highs and lows. This makes it easier to understand changes in the market and make a profit within a known market range. The role of machine learning in these strategies is to predict whether the market is entering a state of flatness or a trend, respectively.
To evaluate the effectiveness of the trading strategy the Sharpe ratio is used. This is a measure for assessing the performance of an investment or trading approach and can be computed as

S = (R_p - R_f) / σ_p,

where R_p and R_f correspond to the portfolio and risk-free returns, respectively, and σ_p is the standard deviation of the portfolio return. While the equation gives a good intuition of the measure, in practice its annualised version is often computed. It assumes that daily returns follow a Wiener-process distribution; hence, to obtain annualised values, the daily Sharpe values are multiplied by √252, where 252 is the annual number of trading days. It should be noted that such an approach might overestimate the resulting Sharpe ratios, as return auto-correlations might be present, violating the original assumption [35].
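As an illustration, a minimal computation of the annualised Sharpe ratio from a series of daily strategy returns; the risk-free rate is assumed to be supplied on the same daily basis.

```python
import numpy as np

def annualised_sharpe(daily_returns, daily_risk_free=0.0, trading_days=252):
    """Daily Sharpe = mean excess return / std of excess returns; annualisation
    multiplies by sqrt(trading_days) under the i.i.d. assumption, which may
    overestimate the ratio if returns are autocorrelated."""
    excess = np.asarray(daily_returns, dtype=float) - daily_risk_free
    daily_sharpe = excess.mean() / excess.std(ddof=1)
    return daily_sharpe * np.sqrt(trading_days)
```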
Backtesting
In order to test a trading strategy, an evaluation is performed to assess profitability. Whilst it is possible to do so on real market data, it is generally preferable to use historical data to get a risk-free estimate of performance. The notion is that a strategy that would have worked poorly in the past will probably work poorly in the future, and conversely; a key assumption of backtesting is therefore that past performance is predictive of future performance. Several approaches exist to perform backtesting, and different things can be assessed. Beyond the testing of trading strategies, backtesting can show how positions are opened and the likelihood of certain scenarios taking place within a trading period. The more common technique is to implement the backtesting within the trading platform, which has the advantage that the same code as in live trading can be used. Almost all platforms allow simulations on historical data, although the data may differ in form from the raw data one may have used for training. For more flexibility one can implement one's own backtesting system in languages such as Python or R. This approach enables the same code pipeline that trains the classifier to also test the data, allowing for much smoother testing. Whilst this ensures that the same data used for training may be used for testing, it may suffer from differences with respect to the trading software that might skew the results. Another limitation of this approach is that there is no connection to the exchange or the broker, so there are limitations on how order queues are implemented, as well as on the simulation of latency, which will be present during live trading. This means that the identification of slippage, which is the difference between where the order is submitted by the algorithm and the actual market entry/exit price, will differ and will impact the order of trades.
Statistical Reproducibility
In order to evaluate the results of our research, answer the research questions, add explainability and increase reproducibility of our study, we make use of several statistical techniques.
Effect Sizes
The first step is to quantify the effectiveness of the approach in relation to a control. Statistically this is done using effect sizes. An effect size is a measure of the strength of a statistical claim: a larger effect size indicates a larger difference between the treatment (method) and the control sample. Reporting effect sizes is considered good practice when presenting empirical research findings in many fields [47,25,52]. Two types of effect sizes exist: relative and absolute. Absolute ones provide the raw difference between the two groups and are usually used for quantifying the effect for a particular use case. Relative ones are obtained by normalising the difference by the absolute value of the control group. Depending on the setting, when computing effect sizes one might look at differences in variance explained, differences in means, or associations between variables [30].
Differences in variance explained measure the proportion to which a mathematical model accounts for the variation. One of the most common ways to do this is to use the Pearson correlation [31], defined as the covariance of the two variables divided by the product of their standard deviations. This normalises the measurement of the covariance, such that the result always lies between -1 and 1. A further commonly used measure, known as r-squared, is the square of the Pearson correlation; it measures the proportion of variance shared by the two variables. The second approach is to look at the differences in population means, using a standardisation factor. Popular approaches include Cohen's d [10], which divides the difference between the two sample means by the pooled standard deviation, (u_1 - u_2)/SD. It was found, however, that the pooled standard deviation SD may be a biased standardisation factor, and alternative standardisations may be preferred. This is rectified in Hedge's g [26], which corrects the bias using a correction factor when computing the pooled standard deviation. A further extension is to use the average variance (av), i.e. the average of the two standard deviations, instead of the pooled one; this is more appropriate for two correlated samples, and the correspondingly corrected Cohen's d_av is referred to as Hedge's g_av [11, 34, ?]. The final type of effect size is categorical variable association, which checks the inter-correlation of variables and can evaluate the probability of variables being dependent on one another; an example is the chi-squared test [20], which is also effective on ordinal variables.
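A sketch of the paired-sample effect size just discussed; the exact small-sample correction factor varies slightly between references, so the approximation below should be treated as one common convention rather than the only one.

```python
import numpy as np

def hedges_g_av(sample_a, sample_b):
    """Hedges' g_av for two paired samples: Cohen's d_av (mean difference
    standardised by the average of the two standard deviations) multiplied
    by a small-sample bias-correction factor."""
    a = np.asarray(sample_a, dtype=float)
    b = np.asarray(sample_b, dtype=float)
    n = len(a)
    d_av = (a.mean() - b.mean()) / ((a.std(ddof=1) + b.std(ddof=1)) / 2.0)
    correction = 1.0 - 3.0 / (4.0 * (n - 1) - 1.0)  # approximate bias correction
    return d_av * correction
```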
Inferential Statistics
Another core component of a statistical assessment is inferential statistics. With inferential statistics one aims to determine whether the findings are generalisable to the population. It is also used to determine whether there is a significant difference between the means of two groups which may be related in certain features. The most popular approaches fall under the general linear model [38]. The general concept is that a null hypothesis is used to test the probabilistic difference between our sample population and another population. Popular approaches include the t-test [48], ANOVA [22], the Wilcoxon test [51] and many more, depending on the considered setting. A t-test is a type of inferential statistic used to determine whether there is a significant difference between the means of two groups. The basic procedure is to take a sample from each of the two groups and establish the null hypothesis that the sample means are equal; the test then uses the mean difference, the standard deviations of the two groups and the number of data values in each group to attempt to reject the null hypothesis. If the null hypothesis is rejected, the mean difference is statistically significant. However, the t-test relies on several assumptions: 1) that the data is continuous, 2) that the sample is randomly collected from the total population, 3) that the data is normally distributed, and 4) that the data variance is homogeneous [50]. This makes the t-test ill-suited to the analysis of small samples, where normality and other sample properties are hard to assess reliably. An approach which does not face the same limitations is the Wilcoxon test [51]. Instead of comparing means, it compares the mean ranks of the paired samples, i.e. the arithmetic average of the indexed positions within a list. This type of comparison is applicable to paired data only and is done on individual paired subjects, increasing the power of the comparison. A downside of this approach is that it is non-parametric: a parametric test is able to observe the full distribution and is consequently able to detect more differences and specific patterns; however, as noted for the t-test, parametric tests rely on stronger assumptions and are sometimes impractical.
Inferential Corrections
The more inferences are made, the more likely erroneous inferences are to occur. Multiple comparisons arise when a statistical analysis involves multiple simultaneous statistical tests, each of which has the potential to produce a discovery, on the same dataset or on dependent datasets; hence, the overall probability of a discovery increases. This increased chance should be corrected. Some methods are more specific, but there exists a class of general significance level α adjustments, examples of which are the Bonferroni correction [5] and the Šidák correction [45]. The general idea follows from the interpretation of the p-value: given that the null hypothesis holds, the p-value is the likelihood of observing an effect at least as large as the one in the sample. If the p-value is small enough, one can conclude that the sample is inconsistent with the null hypothesis and reject it for the population. The idea of the corrections is that, to retain a prescribed significance level α in an analysis involving more than one comparison, the significance level for each comparison must be more stringent than the initial α. In the case of Bonferroni corrections, if for some test performed out of the total of n tests we ensure that its p-value is less than α/n, then we can conclude, as previously, that the associated null hypothesis is rejected.
Related Work
In this section we break down related work in this area, including market characterization, price extrema and optimal trading points, and automated trading systems. Each of the previous works is compared to our approach.
In their seminal work, Munnix et al. [41] first proposed the characterisation of market structures based on correlation. Through this they were able to detect key states of market crises from raw market data. The same technique also allowed the mapping of drastic changes in the market, which corresponded to key points of interest for trading. By using k-means clustering, the authors were able to predict whether the market was approaching a crisis, allowing them to react accordingly and construct a resilient strategy. Whilst this approach was a seminal work in the understanding of market dynamics, it was still based on statistical dependencies and correlations, which are not quite as advanced as more modern machine learning approaches. Nonetheless, their successful results initiated a lot more research in this area. Their way of analysing a market as a series of states proved to be a winning strategy, allowing for more focused decision making and improving the understanding of the market. Following on from this approach, we seek to characterise the market as a series of peaks of interest and to understand whether the market structure allows their classification into different scenarios. This constitutes the initial stage of the ATP, the pre-processing, for which their approach still requires several steps of manual intervention.
Historically there has been an intuition that changes in market price are random: whilst volatility is due to certain events, it is not possible to extract them from raw data. Despite this, volatility is still one of the core metrics for trading [2]. In an effort to statistically analyse price changes and break down key events in the market, Caginalp & Caginalp [7] propose a method to find peaks in the volatility, representing price extrema. A price extremum represents the optimal point at which the price is traded before a large fluctuation. The corresponding strategy exploits a shift away from the optimal point to either sell high or buy low. The authors describe the supply and demand of a single asset as a stochastic equation where the peak is found when maximum variance is achieved. Since the implied relationship of supply and demand holds for any exchange, this approach fits various different instruments. In a different context, Miller et al. [39] analyse Bitcoin data to find profitable trading bounds. Bitcoin, unlike more traditional exchanges, is decentralised and traded 24 hours a day, making the data much sparser and the trading periods less concentrated, so the trends are harder to analyse. Their approach smooths the data using splines, manipulating the curves so that neighbouring points become more closely related; through this technique they are able to remove outliers and find clearer points of fluctuation as well as peaks. The authors then construct a bounded trading strategy which proves to perform well against unbounded strategies. Since Bitcoin is more decentralised and, by the very nature of those investing in it, has lower barriers to entry, automated trading is much more common there, which means that techniques to identify bounds and points of interest in the market are also more favoured and widely used. An automated trading system is a piece of code that autonomously trades in the market. The goal of such machine learning efforts is the identification of a market state in which a trade is profitable, and the automatic execution of the transaction at that stage. Such a system is normally tailored to a specific instrument, analysing its unique patterns to improve the characterisation. One such effort, focusing on FX markets, is Dempster & Leemans [13]. In this work, a technique using reinforcement learning is proposed to learn market behaviours. Reinforcement learning is an area of machine learning concerned with how software agents ought to take actions in an environment in order to maximise a notion of cumulative reward. This is achieved by assigning positive rewards to desired actions and negative rewards to undesired actions, leading to an optimisation towards actions that increase rewards; in financial markets this naturally corresponds to profitable trades. Using this approach the authors are able to characterise when to trade, perform an analysis of the associated risks, and automatically make decisions based on these factors. In more recent work, Booth et al. [6] describe a model for seasonal stock trading using an ensemble of Random Forests, which leverages variability in seasonal data to predict the price return when these events take place. Their Random-Forest-based approach reduces the drawdown chance of peak-to-trough events.
Their approach is based on domain knowledge of well-known seasonality events; usual approaches following this technique find that whilst the event is predictable, the volatility is not, so their characterisation allows them to predict which events will lead to profits. The Random Forests are used to characterise features of interest in a time window, and multiple such forests are aggregated to inform the decision process. These ensembles are then weighted based on how effective they are, with more effective ensembles having more input into the decision. Results across fifteen DAX assets show increases in profitability and in prediction precision.
As can be seen, statistical and machine learning techniques have been successfully applied in a variety of scenarios, proving effective as the basis of automatic trading and the identification of profitable events. This makes further investigation into more advanced machine learning techniques a desirable and interesting area. Our work expands on these previous concepts to improve performance and to seek new ways to characterise the market.
Aim
The aim of the study comprises multiple aspects: i) propose an automated trading system and check the significance of its performance; ii) propose a two-step way of feature design in the context of the automated trading system and assess its performance; iii) interpret the system outputs; iv) demonstrate the benefits of statistical methods in the context of financial time series analysis.
We formulate the research questions and statistical hypotheses, which are later evaluated, as follows. As the first research question we aim to assess whether the estimator is capable of fitting the data on the considered feature space. We believe this is a necessary initial step, as without this information it would be hard to judge the following findings. RQ1: Is it feasible to classify the extracted price extrema using the proposed feature space and CatBoost estimator better than the baseline precision? By the baseline precision we mean the precision of an estimator with an always-positive class output.
As the second research question we aim to investigate if the proposed 2-step method for feature extraction gives any benefit in comparison to using any of the extraction steps alone. RQ2: Does the use of the 2-step feature extraction improve the extremum classification performance with respect to the individual steps?
We provide more details and formulate the null and alternative hypotheses in the Material and Methods section.
Material and Methods
In our study we demonstrate how off-the-shelf machine learning methods can be applied to financial markets analysis and to trading in particular. Taking into account the non-stationary nature of the financial markets, we achieve this goal by considering only subsets of the time series. We propose a price-action-based way of defining the subsets of interest and perform their classification. Concretely: we identify local price extrema and predict whether the price will reverse (or 'rebound') or continue its movement (also called 'crossing'). For demonstration purposes, we set up a simplistic trading strategy in which we trade a price reversal after a discovered local extremum is reached, as shown in Fig. 4. We statistically assess our choices of the feature space and the feature extraction method. In the simulated trading, we limit our analysis to backtesting and do not perform any live trading. In the Discussion section we address the limitations of such an approach. We share the reproducibility package for the study [46].
In this section we first describe the datasets and pre-processing procedures. Then, we outline the experiment design comprising entry labelling and setting up the classification task, designing features, evaluating the model performance, evaluating the results statistically, interpreting model results and, finally, simulated trading.
Data
In the study we use S&P500 E-mini CME December (Z) contract data. We operate on tick data which includes Time&Sales record statistics, namely: bid and ask volumes and numbers of trades, as well as the largest trade volumes per tick. We consider a tick to incorporate all the market events between two price changes by a minimum price step. For the considered financial instrument the tick size is $0.25.
Data pre-processing
We sample the contract data to the active trading periods by considering only the nearest expiring contracts with the conventionally accepted rollover dates. The samples end on the second Thursday of the expiration month, the day on which active trading is usually transferred to the following contract. This decision ensures the highest liquidity and, due to the double-auction nature of the financial markets, stable minimum bid-ask spreads [28].
In the current study we consider the two simplest scenarios of price behaviour after it reaches a local extremum: reversal and extremum crossing. When labelling the entries, we require up to a 15-tick price movement for a reversal (or rebound) and only 3 ticks for an extremum crossing. This labelling approach allows us to study a range of reversal configurations and to investigate how the configurations affect the performance of the models. At the same time these ranges are well within the boundaries of intraday price changes.
An essential part of the proposed automated trading system is the detection of price extrema. The detection is performed on a sliding window of the streamed ticks with a window size of 500. We capture peaks with widths from 100 to 400. The selected width range serves three purposes: i) it ensures that we do not consider high-frequency trading (HFT) scenarios, which require more modelling assumptions and a different backtesting engine; ii) it allows us to stay within intraday trading time frames and to have a large enough number of trades for analysis; iii) it makes the price level feature values comparable across many of the entries.
Classification task
In order to incorporate machine learning into the automated trading system, we design a binary classification task, where the labels correspond to price reversals (positives) and crossings (negatives). Due to the labelling design, we are more flexible with take profits and stop losses when trading reversals (up to 15 ticks versus 3 ticks) -this explains their positive labelling.
Feature design
To perform the extrema classification, we obtain two types of features: i) features designed from the price level ticks (called price level (PL) features), and ii) features obtained from the ticks right before the extremum is approached (called market shift (MS) features), as we illustrate in Fig. 2. We think it is essential to perform this two-step collection since the PL features contain properties of the extremum, while the MS features allow us to spot any market changes that happened between the time when the extremum was formed and the time we are trading it. Because extrema have different widths, the dimensionality of the raw data varies, which does not allow using it directly for classification - most algorithms take fixed-dimensional vectors as input. We ensure the fixed dimensionality of the classifier input by aggregating per-tick features by price. We perform the aggregation for the price range of 10 ticks below (or above, in the case of a minimum) the extremum. This price range is flexible: 10 ticks are often not available within the ticks associated with the price level (red dashed rectangle in Fig. 2); in this case we fill the empty price features with zeros. We assume that the further a price is from the extremum, the less information relevant for the classification it contains. Considering the intraday volatility of ES, we expect that information beyond 10 ticks from the extremum is unlikely to improve the predictions. If one considers larger time frames (peak widths), this number might need increasing.
PL features are obtained from per-tick features by grouping by price with sum, max or count statistics. For instance, if one is considering volumes, it is reasonable to sum all the aggressive buyers and sellers before comparing them. Of course, one can also compute the mean or consider the max and min volumes per tick. Following this line of reasoning, the feature space can grow to very large dimensions. We empirically choose the feature space described in Tables 1 and 2 for the Price Level and Market Shift components, respectively. In defining the feature space we aim to keep the feature selection step computationally feasible; a too-large feature space might also be impractical from the optimisation point of view, especially if the features are correlated.
To track the market changes, for the MS feature component we use 237 and 21 ticks and compare statistics obtained from these two periods. Non-round numbers help avoid interference with the majority of manual market participants, who use round numbers [12]. We also choose the values to be comparable to our expected trading time frames; no optimisation was performed on them. We obtain the MS features 2 ticks away from the price level to ensure that our modelling does not introduce any time-related bias in which one could not physically send the order fast enough to be executed on time.
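A simplified sketch of the two feature components is given below. The column names ('price', 'bid_volume', 'ask_volume') and the exact aggregations are illustrative assumptions only; the full feature definitions are those listed in Tables 1 and 2.

```python
import numpy as np
import pandas as pd

TICK_SIZE = 0.25  # ES minimum price increment

def pl_features(level_ticks: pd.DataFrame, extremum_price: float, n_levels: int = 10):
    """Aggregate per-tick statistics by price for the n_levels prices below a
    maximum (mirror the sign for a minimum); prices with no ticks are
    zero-filled so the vector always has the same length."""
    offset = ((extremum_price - level_ticks['price']) / TICK_SIZE).round().astype(int)
    agg = (level_ticks.assign(offset=offset)
                      .groupby('offset')
                      .agg(bid_vol=('bid_volume', 'sum'),
                           ask_vol=('ask_volume', 'sum'),
                           n_ticks=('price', 'count'))
                      .reindex(range(n_levels), fill_value=0))
    return agg.to_numpy().ravel()

def ms_features(recent_ticks: pd.DataFrame, long_window: int = 237, short_window: int = 21):
    """Compare bid/ask activity over a long and a short look-back window."""
    def bid_ask_fraction(df):
        return df['bid_volume'].sum() / max(df['ask_volume'].sum(), 1)
    long_frac = bid_ask_fraction(recent_ticks.tail(long_window))
    short_frac = bid_ask_fraction(recent_ticks.tail(short_window))
    return np.array([long_frac, short_frac, long_frac - short_frac])
```

Concatenating the two vectors yields a fixed-dimensional classifier input regardless of the width of the detected extremum.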
Model evaluation
After the features are designed and extracted, the classification can be performed. As a classifier we choose CatBoost estimator. We feel that CatBoost is a good fit for the task since it is resistant to overfitting, stable in terms of parameter tuning, efficient and one of the best-performing boosting algorithms [42]. Finally, being based on decision trees, it is capable of correctly processing zero-padded feature values when no data at price is available. Other types of estimators might be comparable in one of the aspects and require much more focus in the other ones. For instance, neural networks might offer a better performance, but are very demanding in terms of architecture and parameter optimization.
In this study we use precision as the main scoring function (S):

S = TP / (TP + FP),

where TP is the number of true positives and FP the number of false positives. All the statistical tests are run on the precision scores of the samples. Precision was chosen as the main metric since, by design, every FP leads to losses, while a false negative (FN) means only a lost trading opportunity. To give a more comprehensive view of the model performance, we also report F1 scores, PR-AUC (precision-recall area under the curve) and ROC-AUC (receiver operating characteristic area under the curve) metrics. We report model performances for the two-step feature extraction approach as well as for each of the feature extraction steps separately. In order to avoid a large bias in the base classifier probability, we introduce balanced class weights into the model. The weights are inversely proportional to the number of entries per class. The contracts for the training and testing periods are selected sequentially: training is done on the active trading phase of contract N, testing on contract N+1, for N ∈ [0, B-1], where B is the number of contracts considered in the study.
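A sketch of this sequential (walk-forward) evaluation with balanced class weights; `contracts` is assumed to be an ordered list of (features, labels) pairs, one per expiring contract.

```python
import numpy as np
from catboost import CatBoostClassifier
from sklearn.metrics import precision_score
from sklearn.utils.class_weight import compute_class_weight

def walk_forward_precision(contracts):
    """Train on contract N and test on contract N+1, reporting test precision."""
    scores = []
    for (X_train, y_train), (X_test, y_test) in zip(contracts[:-1], contracts[1:]):
        weights = compute_class_weight(class_weight='balanced',
                                       classes=np.unique(y_train), y=y_train)
        model = CatBoostClassifier(class_weights=list(weights), verbose=False)
        model.fit(X_train, y_train)
        scores.append(precision_score(y_test, model.predict(X_test)))
    return scores
```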
[Table 1 (price level feature space component): only the final rows survived extraction, e.g. the sum of the bid volume close to the price extremum divided by the close ask volume [PL23], and the sum of the bid volume far from the price extremum divided by the far ask volume [PL24]. Key: OB - order book, T - trades, t - ticks, N - total ticks, p - price, w - tick window, PL - extremum price, V - volume, b - bid, a - ask, PN - price level neighbours until distance, M(X) - max value in set X.]

Table 2: Market shift feature space component used in the study. These features are obtained right before the already formed price level is approached. When discussed, features are referred to by the codes in square brackets at the end of the descriptions. The rows recoverable from extraction (the equation column was lost) are: the fraction of bid over ask volume for the last 237 ticks [MS0]; the fraction of bid over ask trades for the last 237 ticks; and the fraction of bid/ask volumes for the long minus the short period [MS2].

We apply a procedure for input feature selection and model parameter tuning that is commonly accepted in the ML community [33]. Firstly, we perform the feature selection step using the Recursive Feature Elimination with cross-validation (RFECV) method. The method is based on the gradual removal of features from the model input, starting from the least important ones (based on the model's feature importance), while measuring the performance on a cross-validation dataset. In the current study, on each RFECV step we remove 10% of the least important features. Cross-validation allows a robust assessment of how the model performance generalises to unseen data. Since we operate on time series, we use time series splits for cross-validation to avoid look-ahead bias. For the feature selection, the model parameters are left at their defaults; the only configuration we adjust is class label balancing, as our data is imbalanced. Secondly, we optimise the parameters of the model in a grid-search fashion. Even though CatBoost has a very wide range of parameters which can be optimised, we choose the parameters common to boosted tree models for the sake of the feasibility of the optimisation and to keep the possibility of comparing the optimisation behaviour to the other boosting algorithms. The following parameters are optimised: 1) number of iterations, 2) maximum depth of trees, 3) the has_time parameter set to True or False, and 4) L2 regularisation. For the parameter optimisation we use a cross-validation dataset as well. We perform training and cross-validation within a single contract and the backtesting of the strategy on the subsequent one to ensure the relevance of the optimised model.
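A sketch of the two-step selection and tuning procedure with scikit-learn and CatBoost; `X_train` and `y_train` are placeholders for a single contract's feature matrix and labels, and the parameter grid values are illustrative, not the grid used in the paper.

```python
from catboost import CatBoostClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

cv = TimeSeriesSplit(n_splits=5)  # time-ordered splits, no look-ahead

# Step 1: recursive feature elimination with cross-validation, removing 10%
# of the least important features (by the model's feature importance) per step.
selector = RFECV(estimator=CatBoostClassifier(auto_class_weights='Balanced', verbose=False),
                 step=0.1, cv=cv, scoring='precision')
selector.fit(X_train, y_train)
X_selected = selector.transform(X_train)

# Step 2: grid search over common boosted-tree parameters on the selected features.
param_grid = {'iterations': [200, 500, 1000],
              'depth': [4, 6, 8],
              'has_time': [False, True],
              'l2_leaf_reg': [1, 3, 10]}
search = GridSearchCV(CatBoostClassifier(auto_class_weights='Balanced', verbose=False),
                      param_grid, cv=cv, scoring='precision')
search.fit(X_selected, y_train)
best_model = search.best_estimator_
```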
Statistical evaluation
Here we formalise the research questions by proposing null and alternative hypotheses, suggesting statistical tests for validating them, as well as highlighting the importance of the effect sizes.
The effect sizes are widely used in empirical studies in the social, medical and psychological sciences [34]. They are a practical tool allowing one to quantify the effect of the treatment (or a method) in comparison to a control sample. Moreover, they allow generalisation of the findings to the whole population (unseen data in our case). Finally, effect sizes can be compared across studies [34]. We believe that the introduction of effect sizes into the financial markets domain contributes to research reproducibility and comparability.
In the current study we report Hedge's g_av, an unbiased measure designed for paired data. The effect sizes are visualised in the form of forest plots with .95 confidence intervals (CIs), representing the range where the effect size for the population might be found with .95 probability. We correct the confidence intervals for multiple comparisons by applying Bonferroni corrections.
When testing the hypotheses, the samples consist of the test precisions on the considered contracts, leading to equal sample sizes in both groups, and the entries are paired as the same underlying data is used. Since we compare a small number of paired entries and are unsure about the normality of their distribution, we take a conservative approach and use the one-sided Wilcoxon signed-rank test for hypothesis testing. This test is a non-parametric paired difference test, which is used as an alternative to a t-test when the data does not fulfil the assumptions required for parametric statistics. When reporting the test outcomes, we support them with the statistics of the compared groups; namely, we communicate standard deviations, means and medians.
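A minimal sketch of this paired one-sided test with a Bonferroni-adjusted threshold; the two samples are the per-contract test precisions of the compared configurations, and the number of comparisons per experiment family is a placeholder.

```python
from scipy.stats import wilcoxon

def paired_precision_test(treatment, control, n_comparisons=2, alpha=0.05):
    """One-sided Wilcoxon signed-rank test that `treatment` precisions are
    larger than `control` precisions; significance is judged against the
    Bonferroni-corrected level alpha / n_comparisons."""
    statistic, p_value = wilcoxon(treatment, control, alternative='greater')
    return p_value, p_value < alpha / n_comparisons
```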
We set the significance level of the study to α = .05. Also, we account for multiple comparisons by applying Bonferroni corrections inside of each experiment family [8]. We consider research questions as separate experiment families.
Model analysis
We perform the model analysis in an exploratory fashion: no research questions or hypotheses are stated in advance. Hence, the outcomes of the analysis might require additional formal statistical assessment. We use SHAP local explanations to understand how the models end up with particular outputs. Through the decision plot visualisations we aim to find common decision paths across entries, as well as to informally compare models with small and large numbers of features. The reproducibility package contains the code snippets as well as the trained models, which allow repeating the experiments for all the models and contracts used in the study.
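A sketch of the SHAP analysis; `model`, `X_test` and `feature_names` are placeholders for a trained CatBoost model, its test feature matrix and the feature labels.

```python
import shap

explainer = shap.TreeExplainer(model)        # CatBoost is a supported tree ensemble
shap_values = explainer.shap_values(X_test)  # one contribution per feature and entry

# Decision plot: cumulative feature contributions along each entry's decision
# path, useful for spotting common paths and comparing small and large models.
shap.decision_plot(explainer.expected_value, shap_values, X_test,
                   feature_names=list(feature_names))
```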
Simulated trading
The trading strategy is defined based on our definition of the crossed and rebounded price levels, and is schematically illustrated in Fig. 3. It is a flat market strategy, in which we expect a price reversal from the price level. The Backtrader Python package is used for backtesting the strategy. Backtrader does not allow taking bid-ask spreads into account, which is why we minimise their effects by excluding HFT trading opportunities (by limiting peak widths) and limiting ourselves to the actively traded contracts only. Since ES is a very liquid trading instrument, its bid-ask spread is usually 1 tick, which however does not always hold during extraordinary market events, scheduled news, and session starts and ends. We additionally address the impact of spreads as well as order queues in the Discussion section.
In our backtests, we evaluate performance of the models with different rebound configurations and fixed take-profit parameter, and a varying take-profits with a fixed rebound configuration to better understand the impact of both variables on the simulated trading performance.
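For illustration, the snippet below is a minimal Backtrader sketch of such a flat-market rule, not the authors' exact strategy: when a detected price level is classified as likely to hold, a position is opened in the direction of the expected reversal with fixed take-profit and stop-loss offsets in ticks. The `model_signal` hook, the tick size and the commission settings are assumptions made for the example.

```python
import backtrader as bt

TICK = 0.25  # ES futures tick size

class LevelRebounceStrategy(bt.Strategy):
    params = dict(take_profit=15, stop_loss=15)    # offsets in ticks

    def next(self):
        if self.position:
            return                                  # one open position at a time
        signal = self.model_signal()                # +1 bounce up, -1 bounce down, 0 no trade
        price = self.data.close[0]
        if signal > 0:
            self.buy_bracket(limitprice=price + self.p.take_profit * TICK,
                             stopprice=price - self.p.stop_loss * TICK)
        elif signal < 0:
            self.sell_bracket(limitprice=price - self.p.take_profit * TICK,
                              stopprice=price + self.p.stop_loss * TICK)

    def model_signal(self):
        # Placeholder: in the real pipeline this would query the trained CatBoost
        # classifier on features of the most recently detected extremum.
        return 0

cerebro = bt.Cerebro()
cerebro.addstrategy(LevelRebounceStrategy)
cerebro.broker.setcommission(commission=4.2, margin=1000.0, mult=50.0)  # assumed fees/multiplier
# cerebro.adddata(<tick data feed>) and cerebro.run() would complete the backtest.
```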
Addressing research questions
In the current subsection we continue formalizing the research questions by proposing the research hypotheses. The hypotheses are aimed to support the findings of the paper, making them easier to communicate. For both research questions we run the statistical tests on the precision metric.
In the current study we encode the hypotheses in the following way: H 0X and H 1X correspond to null and alternative hypotheses, respectively, for research question X.
CatBoost versus no-information model (RQ1)
In the first research question we investigate whether it is feasible to improve on the baseline performance for the extrema datasets using the chosen feature space and the CatBoost classifier. We take the baseline performance to be the precision of an estimator that always outputs the positive class. The statistical test addresses the following hypotheses:
H 11 : The CatBoost estimator classifies the extracted extrema with better precision than the no-information approach.
H 01 : The CatBoost estimator classifies the extracted extrema with worse or equal precision compared to the no-information approach.
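As a concrete illustration of this baseline, the sketch below compares the precision of an always-positive classifier with that of a CatBoost model on dummy, imbalanced data; the data and model settings are placeholders, not the study's.

```python
import numpy as np
from sklearn.metrics import precision_score
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0.8).astype(int)    # imbalanced labels
X_train, X_test, y_train, y_test = X[:800], X[800:], y[:800], y[800:]

baseline_pred = np.ones_like(y_test)                              # no-information: always positive
model = CatBoostClassifier(iterations=200, depth=4, verbose=False).fit(X_train, y_train)

print("no-information precision:", precision_score(y_test, baseline_pred))
print("CatBoost precision      :", precision_score(y_test, model.predict(X_test)))
```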
Two-step versus single-step feature extraction (RQ2)
The second research question assesses if the proposed two-step feature extraction gives any statistically significant positive impact on the extremum classification performance. The statistical test addresses the following hypothesis: H 12 : Two-step feature extraction leads to an improved classification precision in comparison to using features extracted from any of the steps on their own.
H 02 : Two-step feature extraction gives equal or worse classification precision than features extracted from any of the steps on their own. In the current setting we are comparing the target sample (the 2-step approach) to the two control samples (MS and PL components). We are not aiming to formally relate the MS and PL groups, hence only comparisons to the 2-step approach are necessary. In order to reject the null hypothesis, the test outcomes for both MS and PL components should be significant.
Results
In the current section we communicate the results of the study. Namely: the original dataset and preprocessed data statistics, model performance for all the considered configurations, statistical evaluation of the overall approach and the two-step feature extraction, and, finally, simulated trading and model analysis. An evaluation of these results is presented in Sec. 7.
Raw and pre-processed data
The considered data sample is described in Tab. 3. We provide the number of ticks per contract. The contracts are sorted by expiration date in ascending order from top to bottom. The number of ticks changes non-monotonically: while the overall trend is rising, the maximum number of ticks is observed for the ESZ2018 contract, and the largest change is observed between ESZ2017 and ESH2018. We perform the whole study on 3 different rebound configurations: 7, 11 and 15 ticks of price movement required for positive labelling. We report the numbers of entries in the classification datasets in Table 4. As one can see, the numbers of extracted extrema do not scale linearly with the numbers of ticks per contract. The number of positively labelled entries decreases for larger rebound sizes.
Automatic extraction
Since the first step of the pipeline is detecting peaks, we show a sample of data with automatically detected peaks and widths in Fig. 4. We provide a 2k-tick sample with an upward price trend. The automatically detected peaks are marked with grey circles and the associated peak widths are depicted with solid grey lines. Some peaks are not detected because they do not satisfy the conditions of the algorithm, having insufficient width or insufficient prominence (see Section 2.2 for the definitions of both).
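The snippet below is a hedged sketch of this kind of peak and width extraction using scipy's find_peaks with prominence and width constraints; the synthetic price series and the thresholds are placeholders rather than the study's tuned values.

```python
import numpy as np
from scipy.signal import find_peaks, peak_widths

rng = np.random.default_rng(1)
price = 2800.0 + np.cumsum(rng.normal(0, 0.25, 2000))         # synthetic 2k-tick series

# Local maxima that are sufficiently prominent and wide; minima via the negated series.
peaks, props = find_peaks(price, prominence=1.0, width=10)
widths, _, left_ips, right_ips = peak_widths(price, peaks, rel_height=0.5)
troughs, _ = find_peaks(-price, prominence=1.0, width=10)

print(f"{len(peaks)} peaks and {len(troughs)} troughs; mean peak width {widths.mean():.1f} ticks")
```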
Classification of the extrema
For all the considered models we perform feature selection and parameter tuning. We make the optimisation results available as a part of the reproducibility package. The model precisions obtained on a per-contract basis are provided in Table 5. The relative changes in precision across contracts are preserved across the labelling configurations. There is no evidence that any particular configuration shows consistently better performance across contracts. We report the remaining metrics in the supplementary data, in Table S1. The precisions of the models for each of the feature extraction steps, the Market Shift (MS) and Price Level (PL) components, are presented in Table 6. The remaining metrics are reported in the supplementary materials, in Tables S3 and S2 for MS and PL, respectively.
Price Levels, CatBoost versus No-information estimator (RQ1)
Below we present the effect sizes (Fig. 5) associated with the research question. Concretely, we use precision as the measurement variable for comparing between the no-information model and the CatBoost classifier. There are no configurations showing significant effect sizes. The largest effect size is observed for the 15 tick rebound labelling. Large CIs are observed partially due to a small sample size.
Figure 4: A sample of data demonstrating automated peak detection and peak width annotation.
We test the null hypothesis for rejection for the 3 considered configurations. The original data used in the tests is provided in Table 5. We report the test outcomes in the form of test statistics and p-values in Table 7. Additionally, the statistics of the compared groups are included in the same table. We illustrate the performance of the compared groups in the supplementary materials, in Figure S1. There is no skew in any of the labelling configurations: medians and means do not differ within the groups. We see roughly 2 times larger standard deviations for the CatBoost model in comparison to the no-information model. The potential reasons and implications are discussed in Sections 7.2 and 7.5, respectively. There are 3 tests run in this experiment family, hence after applying Bonferroni corrections for multiple comparisons, the corrected significance level is α = .05/3 = .0167.
Price Levels, 2-step feature extraction versus its components (RQ2)
We display the effect sizes related to the second research question in Figure 6. We report them separately for each of the two feature extraction components versus the 2-step approach. There are no significant effects observed for any of the labelling configurations. Moreover, for the considered sample the MS effect sizes are negative for rebounds 7 and 15. A negative effect size in the considered setting means that the MS component performs better than the 2-step approach. This effect is insignificant, hence it does not generalise to the population.
We perform statistical tests to check if the null hypothesis H 02 can be rejected. The original data used in the tests is provided in Tables 5 and 6, for target (2-step) and control groups (MS and PL), respectively. We communicate the test outcomes in Table 8. For the sake of reproducibility, in the same table we report the compared groups' standard deviations, means and medians. We interpret the results of the tests in section 7.3. Finally, to support the reader, we plot the performance of the considered groups in the supplementary data, Figure S2. There are 6 tests run in this experiment family, hence after applying Bonferroni corrections, the corrected significance level is α = .05/6 = .0083.
Model analysis
Here we communicate the exploratory analysis of the trained models. We choose two models trained on the same contract but with different labelling configurations. Namely, we report the analysis of the models trained on the ESH2019 contract, with rebounds 7 and 11. The choice is motivated by very different final numbers of features after the feature selection step.
The core analysis is done on the decision plots provided in Figures 7 and 8. Since no pattern was observed when plotting the whole sample, we illustrate a random subsample of 100 entries. There are 29 features in the model trained on the rebound 11 configuration, and 8 features in the one trained on the rebound 7 labels. 7 features are present in both models: PL23, MS6_80, MS0, PL24, MS2, MS6_20, MS6_200. Contributions from the features in the rebound 7 model are generally larger, and the overall confidence of the model is higher. We notice that the most impactful features come from the Market Shift features. In Figure 7 one can see two general decision patterns: one ending up at around 0.12-0.2 output probability and another one, consisting of a number of misclassified entries, ending up at around 0.8 output probability. In the first decision path most of the MS features contribute consistently towards the negative class; however, PL23 and PL2 often push the probability in the opposite direction. In the second decision path PL23 has the most persistent effect towards the positive class, which is opposed by MS6_80 in some cases and gets almost no contribution from the MS0 and MS6_200 features.
In Figure 8 there is a skew in the output probabilities towards the negative class. Contributions from the PL features are less pronounced than for the rebound 7 model: the top 8 features belong to the MS feature extraction step. There is no equally obvious decision path of misclassified entries. At the same time, we see strong contributions towards negative outputs from MS6_20 and MS6_40. This is especially noticeable for the entries whose output probabilities are around 0.5 before the MS6 features are taken into account.
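The snippet below is a hedged, self-contained sketch of how such SHAP decision plots can be produced for a CatBoost model; the dummy features reuse the feature names mentioned above purely as labels, and the model, data and subsample size are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import shap
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 6)),
                 columns=["PL23", "PL24", "MS0", "MS2", "MS6_20", "MS6_80"])
y = (X["MS0"] + 0.5 * X["PL23"] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = CatBoostClassifier(iterations=150, depth=4, verbose=False).fit(X, y)

explainer = shap.TreeExplainer(model)
idx = rng.choice(len(X), size=100, replace=False)           # random subsample of 100 entries
shap_values = explainer.shap_values(X.iloc[idx])

# One line per entry, tracing how each feature shifts the score from the base value
# towards the final output; link="logit" displays the x-axis as output probability.
shap.decision_plot(explainer.expected_value, shap_values, X.iloc[idx], link="logit")
```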
Simulated trading
Here we report two types of simulated trading experiments: with a fixed take-profit (15 ticks, as shown in Fig 3) and varying rebounds of 7, 11 and 15 ticks, as well as with a fixed rebound and varying take-profits. The results are shown in Figures 9 and 10 for the fixed rebound and fixed take-profit configurations, respectively. In addition to the cumulative profits we report annualised Sharpe ratios with a 5% risk-free annual profit. The varying take-profit configuration has more evident effects in 2019 (Fig 9), while behaviour divergence for varying rebounds is observed already in 2018 (Fig 10). When computing the net outcomes of the trades, we add $4.2 per-contract trading costs based on our assessment of current broker and clearance fees. We do not set any slippage in the backtesting engine, since ES liquidity is large. However, we execute the stop-losses and take-profits on the tick following the close-position signal to account for order execution delays and slippage. This allows us to take into account uncertainty rooted in the large volatilities and gaps occurring during extraordinary market events. The backtesting is done on tick data, therefore no bar-backtesting assumptions are made.
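As a concrete illustration of the post-processing described above, the sketch below nets out an assumed $4.2 per-contract cost per trade and computes an annualised Sharpe ratio against a 5% risk-free rate; the per-trade P&L values and the notional account size are placeholders, and treating each trade as one daily observation is a simplification made for the example.

```python
import numpy as np

trade_pnl = np.array([75.0, -112.5, 187.5, -62.5, 150.0, -87.5, 200.0])   # gross $ per trade
net_pnl = trade_pnl - 4.2                       # assumed broker + clearance fees per contract

returns = net_pnl / 10_000.0                    # relative to an assumed account size
rf_per_period = 0.05 / 252                      # 5% annual risk-free rate, 252 trading days
excess = returns - rf_per_period

sharpe_annualised = np.sqrt(252) * excess.mean() / excess.std(ddof=1)
print(f"cumulative net P&L: ${net_pnl.sum():.1f}, annualised Sharpe ~ {sharpe_annualised:.2f}")
```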
Discussion
This section breaks down and analyses the results presented in Sec. 6. Results are discussed in relation to the overall pattern extraction and model performance, then research questions, model analysis and, finally, simulated trading. Additionally, we discuss limitations in regards to our approach and how they can be addressed. Finally we present our view on implications for practitioners and our intuition of potential advancements and future work in this area.
Pattern extraction
The numbers of price levels and ticks per contract follow the same trend (Table 3). Since the number of peaks is proportional to the number of ticks, we can say that the mean peak density is preserved over time to a large extent. In this context, the peak pattern can be considered stationary and appearing in various market conditions.
Price Levels, CatBoost versus No-information estimator (RQ1)
The precision improvement of CatBoost over the no-information estimator varies a lot across contracts (Table 5). At the same time, the improvement is largely preserved across the labelling configurations. This might be due to the original feature space, whose effectiveness relies a lot on the market state: in certain market states (and time periods) the utility of the feature space drops, and the drop is quite consistent across the labelling configurations. More extensive research on feature spaces would be necessary to make further claims. The overall performance of the models is weak on an absolute scale; however, it is comparable to the existing body of knowledge in the area of financial markets [14]. Effect sizes in Fig. 5 are not significant. Significance here would mean that the chosen feature space together with the model contributes significantly towards the performance improvement with respect to the no-information model. A positive but insignificant effect size means that there is an improvement which is limited to the considered sample and is unlikely to generalise to the unseen data (population).
Assessing the results of the statistical tests, we use the significance level corrected for multiple comparisons. Tests run on the rebound 7 and 15 configurations result in significant p-values, while the rebound 11 test is insignificant (Table 7). Hence, we reject the null hypothesis H 01 for the labelling configurations of 7 and 15 ticks. This outcome is not supported by the effect sizes. This divergence between the tests and the effect sizes indicates a need for feature space optimisation before the approach is used in a live trading setting. The insignificant outcome may be caused by particular market properties, or by an unsuitable feature space for this particular labelling configuration. Interestingly, the standard deviations differ consistently between the compared groups across the configurations. While the no-information model performance depends solely on the fraction of positively labelled entries, CatBoost performance additionally depends on the suitability of the feature space and model parameters; this likely explains the higher standard deviations in the CatBoost case.
Price Levels, 2-step feature extraction versus its components (RQ2)
For the effect sizes in Figure 6 we compare the 2-step feature extraction approach to its components, PL and MS. We do not see any significant effects showing the superiority of any of the approaches. Negative effect sizes in the case of the Market Shift component mean that, for the considered sample, MS performs better than the 2-step approach. This result does not generalise to unseen data, as its confidence intervals cross the 0 threshold. A possible explanation of the result is the much larger feature space of the 2-step approach (consisting of PL and MS features) compared with the MS component. If the PL features are generally less useful than the MS features, which is empirically supported by our model analysis (Figs S3 and S4), they might have a negative impact during the feature selection process by introducing noise.
Assessing the statistical tests, we use the significance level corrected for 6 comparisons. The p-values in Table 8 show that there is no significant outcome and the null hypothesis H 02 cannot be rejected. While the 2-step approach does not bring any improvement to the pipeline, there is no evidence that it significantly harms the performance either. It might be the case that, if the PL features were designed differently, the method could benefit from them. In this study we refrain from iteratively tweaking the feature space to avoid any loss of statistical power. We find this aspect interesting for future work; however, it would require increasing the sample size to be able to account for the increased number of comparisons.
Simulated trading
In the simulated trading we observed an interesting result -data labelling configuration has more impact on the profitability than the take profits (Figures 9,10). We hypothesise that the reason is the simplistic trading strategy which is overused by the trading community in various configurations. In contrast, the labelling configuration is less straightforward and has more impact on the profitability. Note that the obtained precisions (Table 5) cannot be directly related to the considered trading strategy as there might be multiple price levels extracted within the time interval of a single trade. Consequently, there might be extrema which are not traded.
It is hard to expect consistent profitability considering the simplicity of the strategy and lack of optimisation of the feature space, however, even in the current setting one can see profitable episodes (Fig 9).
The objective of the study was not to provide a ready-to-trade strategy, but rather demonstrate a proof of concept. We believe that the demonstrated approach is generalisable to other trading strategies.
Limitations
The proposed experiment design is one of the many ways the financial markets can be studied empirically. Statistical methods often have strong use-case conditions and assumptions. When there is no entirely suitable tool available, one chooses the closest matching solution. While it is advised to use Glass's ∆ in the case of significantly different standard deviations between groups, this measure does not have corrections for paired data. Hence, in our experiment design we choose to stick to Hedges' g_av. For the sake of completeness, we verified the results using Glass's ∆: the 15-tick rebound effect size becomes significant in RQ1.
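For reference, the small sketch below contrasts the two measures mentioned above on dummy paired samples: Glass's ∆ standardises by the control group's standard deviation only, whereas the d_av underlying Hedges' g_av uses both groups; all values are placeholders.

```python
import numpy as np

treatment = np.array([0.62, 0.58, 0.66, 0.61, 0.64, 0.59])    # e.g. CatBoost precisions
control = np.array([0.55, 0.56, 0.57, 0.55, 0.58, 0.56])       # e.g. no-information precisions

glass_delta = (treatment.mean() - control.mean()) / control.std(ddof=1)
d_av = (treatment.mean() - control.mean()) / (
    (treatment.std(ddof=1) + control.std(ddof=1)) / 2.0)       # uncorrected d_av

print(f"Glass's delta = {glass_delta:.2f}, uncorrected d_av = {d_av:.2f}")
```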
In the backtesting we use the last trade price to define ticks; we do not take bid-ask spreads into account. In live trading, trades are executed at the bid or ask price, depending on the direction of the trade. This leads to a hidden fee of the bid-ask spread per trade, which is crucial for intraday trading, as average profits per trade often lie within a couple of ticks. Moreover, when modelling order executions, we do not consider per-tick volumes coming from aggressive buyers and sellers (bid and ask). It might be the case that for some ticks only aggressive buyers (sellers) were present, and our algorithm executed a long (short) limit order. This leads to uncertainty in opening positions: in reality some of the profitable orders may not have been filled, while losing orders would always be executed.
Another limitation is that we do not model order queues and, consequently, cannot guarantee that our orders would have been filled had we submitted them live, even if both bid and ask volumes were present in the tick. This is crucial for high frequency trading (HFT), where thousands of trades are performed daily with tiny take-profits and stop-losses, but it has less impact for the trade intervals considered in the study. Finally, there is an assumption that our entering the market does not change its state significantly. We believe this is a valid assumption considering the liquidity of S&P E-mini futures.
Implications for practitioners
We provide a systematic approach to the evaluation of automated trading strategies. While large market participants have internal evaluation procedures, we believe that our research could support various existing pipelines. Considering the lack of code and data publishing in the field, we are confident that the demonstrated approach can be used towards improving the generalisability and reproducibility of research. Specific methods like extrema classification and 2-step feature extraction are a good baseline for the typical effect sizes observed in the field.
Future work
In the current study we have proposed an approach for extracting extrema from the time series and classifying them. Since the peaks are quite consistently present in the market, their characteristics might be used for assessing the market state in a particular time scale.
The next step would be to propose a more holistic scenario extraction. We would aim to define the scenarios using other market properties instead of the price. Also, the approach can be validated for trading trends -in this case one would aim to classify price level crossings with high precision. Stricter definition of the crossings in terms of the price movement is necessary for that.
In terms of improving the strategy, there are a couple of things that can be done. For instance, take-profit and stop-loss offsets might be linked to the volatility instead of being constant. Also, flat strategies usually work better at certain times of the day, so it would be wise to interrupt trading before US and EU session starts and ends, as well as before scheduled reports, news and impactful speeches. Additionally, all the parameters we have chosen can be looked into and optimized to the needs of the market participant.
In terms of the chosen model, it would be interesting to compare the CatBoost classifier to the DA-RNN [43] model, as the latter makes use of an attention-based architecture designed on the basis of the recent breakthrough in the area of natural language processing [44].
Finally, we see a gap in the available FLOSS (Free/Libre Open Source Software) backtesting tools. To the best of our knowledge, there is no publicly available backtesting engine taking into account bid and ask prices and order queues. While there are solutions with this functionality provided as parts of the proprietary trading platforms, they can only be used as a black box. An open source engine would contribute to transparency and has a potential to become the solution for both research and industry worlds.
Conclusion
Our work showcased an end-to-end approach to perform automated trading using price extrema. Whilst extrema have been discussed as potentially high performance means for trading decisions, there has been no work proposing means to automatically extract them from data and design a strategy. Our work demonstrated an automated pipeline using this approach, and our evaluation showed some interesting results. This paper has presented every single aspect of data processing, feature extraction, feature evaluation and selection, machine learning estimator optimisation and training, as well as details of the trading strategy. Moreover, we statistically assessed the findings. We rejected the null hypothesis answering RQ1 -our approach performs statistically better than the baseline. We did not observe any significant effect sizes for RQ2 and could not reject the null hypothesis. Hence, the use of the 2-step feature extraction does not improve the performance of the approach for the proposed feature space and the model. We hope that by providing every single step of the ATP, it will enable further research in this area and be useful to a varied audience. We conclude by providing samples of our code online [46].
Figure S4: SHAP summary plot of the model trained on ESH2019 contract, rebound 11 configuration. Each marker is a classified entry. X axis quantifies the contribution of the entries towards positive or negative class output. |
242532710 | s2orc/train | v2 | 2021-08-20T18:46:27.008Z | 2022-01-01T00:00:00.000Z | Comment on gmd-2020-439
SUMMARY:
This manuscript summarizes a mesoscale modeling study of a hailstorm using the GRAPES-Meso NWP model with a 2-moment bulk microphysics scheme (BMS). The authors run GRAPES to simulate a hail-producing squall line for a 24-h simulation at a 3-km horizontal grid spacing. Comparisons are made between model fields and observations for accumulated precipitation and radar reflectivity. Some other model fields are also shown and discussed briefly, including vertical motion, hydrometeor mixing ratios and hail microphysical process rates. The authors claim that the simulation appears to be reasonable and conclude that GRAPES-Meso with the 2-moment BMS is therefore capable of simulating hailstorm (and presumably of predicting hail).
To be frank, there is very little scientific value in this manuscript for the meteorological, NWP, or modeling community. There is very little depth in the analysis. There is no way of telling what the effects of the 2-moment BMS are since no comparisons to other schemes are made, nor any sensitivity tests conducted. The comparisons to observations are very superficial and show little more than that the model happened to produce a reasonable simulation for this single case. The authors do not even show accumulated hail from the model for comparison to the surface observations (Fig. 1). There is not really any knowledge demonstrated about hail or hail modeling in the manuscript. So, in my opinion there is no publishable material here. The revisions needed would be too great to turn this into a publishable paper.
SOME SPECIFIC COMMENTS:
The background description of natural hail growth is quite weak. For example, "conversion" of graupel to hail is discussed as though this were a natural process (it is not; it is a modeling concept). No mention of frozen raindrops as hail embryos is made.
The description of the 2-moment scheme is strange. For example, most of the equations given are "final" equations, without the original "base" equations (though references are given) -but what is the purpose of this? The reader is not going to try to code a BMS based on these equations. The functional form of the hydrometeor size distributions is not stated. There are also a few strange aspects to this scheme that I see -e.g., what is the basis for a fixed collection efficiency of 0.8 (line 98)? What is the physical basis and meaning of the conversion parameter A (line 103)? Is there no distinction between wet and dry growth of hail?
The comparison to observations is quite weak. For the observed precipitation (Fig. 4a), is this radar-based or gauge-based? Why are plots of model hail precipitation (similar to Fig. 4b) not shown (for comparison to Fig. 1)? [I recognize that hail mixing ratios at the surface are shown in Fig. 8.] For the radar reflectivity comparisons, this is a tempting and common thing to do, but there are subtleties in model reflectivity that must be understood (and should be discussed in this paper). For example, uncontrolled size sorting in 2-moment bulk schemes can lead to an artificially broad size distribution, which inflates the calculation of the 6th moment (reflectivity).
There is no discussion about the impact of the specific model configuration. It is well recognized that grid spacing of 3 km is quite coarse and insufficient to resolve the updrafts in severe convection. But updrafts are strongly linked to hail (in nature and models). So what does this imply as far as this study is concerned? Discussion is needed.
What should be taken away from the hail production rates shown in Fig. 10? There is a brief description of this figure in the text, but no discussion of what the reader should learn from this regarding the utility of this model to simulate hail. Is this better than a 1-moment BMS? What strikes me as strangest is that most of the hail growth comes from "autoconversion" of graupel to hail, rather than accretion of liquid water (this does not seem realistic). |
21043720 | s2orc/train | v2 | 2018-04-03T04:34:36.803Z | 2017-08-09T00:00:00.000Z | Sequential Venous Percutaneous Transluminal Angioplasty and Balloon Dilatation of the Interatrial Septum during Percutaneous Edge-to-Edge Mitral Valve Repair
Percutaneous edge-to-edge mitral valve repair (PMVR) is widely used for selected, high-risk patients with severe mitral valve regurgitation (MR). This report describes the case of an 81-year-old woman presenting with severe and highly symptomatic MR caused by a flail of the posterior mitral valve leaflet (PML). PMVR turned out to be challenging in this patient because of a stenosis and tortuosity of both iliac veins as well as sclerosis of the interatrial septum, precluding vascular and left atrial access by standard methods, respectively. We managed to achieve atrial access by venous percutaneous transluminal angioplasty (PTA) and balloon dilatation of the interatrial septum. Subsequently, we could advance the MitraClip® system to the left atrium, and deployment of the clip in the central segment of the mitral valve leaflets (A2/P2) resulted in a significant reduction of MR.
Introduction
Percutaneous edge-to-edge mitral valve repair (PMVR) has proven to be beneficial for patients with severe mitral valve regurgitation (MR), who are not eligible for conventional mitral valve repair. PMVR, however, can turn out to be challenging. For instance, anatomical variations of vascular access and sclerosis of the interatrial septum in some patients may represent obstacles, which render the procedure more challenging.
Case Report
We report the case of an 81-year-old woman, who was admitted to our hospital with progressive dyspnea for further clinical evaluation. Transesophageal echocardiography (TEE) revealed severe MR with normal left ventricular systolic function. An eccentric jet was caused by a flail of the posterior leaflet (PML) in segment P2 (Figures 1(a) and 1(b)). Coronary angiography showed no significant coronary artery disease (CAD). Our interdisciplinary heart team recommended PMVR because of the high risk of open surgery in this patient.
After establishing vascular access through the right femoral vein, we were not able to advance the transseptal needle through the introduced Preface® transseptal access sheath (Biosense Webster, CA) across an obstruction at the curvature of the tortuous iliac veins. Instead, we had to use a thin transseptal guidewire (SafeSept®, Pressure Products, CA), which was advanced through the transseptal access sheath. This SafeSept wire has a very sharp tip and requires 77% less force to cross the interatrial septum. Using this approach, we then switched to a more rigid transseptal guiding sheath. Furthermore, the left iliac vein had a similar obstruction precluding advancement of the MitraClip system. Thus, we carried out successive balloon dilatations of the right common iliac vein with gradually increasing balloon sizes up to 10 mm (Figure 1(g); Supplemental Movie 3). Deployment of the clip (Figures 1(h) and 1(i)) in the central segment of the mitral leaflets (A2/P2) resulted in a significant MR reduction (Figure 1(j)). We could discharge the patient from the hospital, with a significantly improved 6-minute walk test, a few days after PMVR.
Discussion
PMVR presents a novel and innovative method of interventional mitral valve repair for patients with MR who are not eligible for conventional heart surgery [1]. Less invasiveness and no need for extracorporeal circulation represent major advantages of PMVR over open heart surgery [2]. In some patients, however, anatomical circumstances may render PMVR difficult and in some cases even impossible. Besides difficult vascular access, the existence of an atrial septal defect (ASD), a sclerotic interatrial septum, or a relatively small left atrium with the additional presence of a prominent coumadin ridge, which can make steering towards the mitral valve plane difficult, can complicate and prolong the procedure. If a significant ASD is present, the position of the MitraClip steering guide may not be stable enough. Crossing the iliac veins with the MitraClip guide catheter can represent a challenge during PMVR. Recently, a case of venous strangulation during PMVR was presented, which was overcome by implantation of a stent [3]. In our case, we were able to manage this vascular obstruction by percutaneous transluminal angioplasty (PTA) using PTA balloons of gradually increasing sizes.
Multiple studies have shown that anticoagulation is needed after iliac venous stenting and that sometimes additional antiplatelet treatment is necessary for prevention of recurrent deep venous thrombosis [4]. In contrast, our approach has the advantage that no prolonged antithrombotic therapy is required. In addition to anatomical variation of the iliofemoral veins, thickening or lipomatous hypertrophy of the interatrial septum can represent a challenge for achieving left atrial access. There are different methods for transseptal puncture, for example, using radiofrequency or excimer laser catheters. Radiofrequency or laser energy can be helpful for transseptal puncture of a scarred, calcified, or patched atrial septum [5]. However, in our case we achieved left atrial access via a SafeSept transseptal guidewire, a LAMP rigid transseptal guiding sheath, and gradual dilatation of the interatrial septum with PTCA balloons. We performed the transseptal puncture and the whole PMVR procedure under the guidance of TEE and fluoroscopy. However, intracardiac echocardiography (ICE) can also be used for monitoring of the intervention [6].
In conclusion, balloon angioplasty may be necessary for successful positioning of the MitraClip system and should be kept in mind as an option for establishing both vascular and left atrial access during PMVR. |
210472810 | s2orc/train | v2 | 2020-01-09T09:10:54.701Z | 2020-01-01T00:00:00.000Z | Global Optimal Structured Embedding Learning for Remote Sensing Image Retrieval
A rich line of work focuses on designing elegant loss functions under the deep metric learning (DML) paradigm to learn a discriminative embedding space for remote sensing image retrieval (RSIR). Essentially, such an embedding space can efficiently distinguish deep feature descriptors. So far, most existing losses used in RSIR are based on triplets, which suffer from local optimization, slow convergence and insufficient use of the similarity structure in a mini-batch. In this paper, we present a novel DML method, named global optimal structured loss, to deal with the limitations of the triplet loss. To be specific, we use a softmax function rather than a hinge function in our novel loss to realize global optimization. In addition, we present a novel optimal structured loss, which globally learns an efficient deep embedding space with mined informative sample pairs, forcing the positive pairs within a given limit and pushing the negative ones far away from a given boundary. We have conducted extensive experiments on four public remote sensing datasets, and the results show that the proposed global optimal structured loss with the pairs mining scheme achieves state-of-the-art performance compared with the baselines.
Introduction
The rapid development of remote sensing technology in recent years has induced urgent demands for processing, analyzing and understanding high-resolution remote sensing images. The most fundamental and key task in remote sensing image analysis (RSIA) is to recognize, detect, classify and retrieve images belonging to multiple remote sensing categories, such as agricultural, airplane, forest and so on [1][2][3][4][5]. Among all these tasks, remote sensing image retrieval (RSIR) [2,[6][7][8] is the most challenging for analyzing remote sensing data effectively. The main target of RSIR is to search a given remote sensing dataset for a query and return the images with similar visual information. RSIR has become more and more attractive due to the explosive increase in the volume of high-quality remote sensing images in the last decades [2,5,8].
Compared with content-based image retrieval (CBIR), RSIR is more challenging, as there are vast geographic areas containing far-ranging semantic instances with subtle differences which are difficult to distinguish. Moreover, images which belong to the same visual category might vary in position.
Figure 1. The optimization process under the proposed global optimal structured loss. The circles with different colors denote samples with different labels. The left part is the original distribution of sample pairs. The blue circle with a small white circle in the center is the anchor, the green circle with a small black circle in the center is the hardest negative sample to the anchor, and their similarity is S_[-1]; the blue circle with a small purple circle in the center is the hardest positive sample to the anchor, and their similarity is S_[0]. We use a pairs mining strategy to sample more informative pairs for optimization. The black solid line is the negative border for negative pairs mining and the black dotted line is the positive border for positive pairs mining. The circles with arrows denote the mined informative samples and the arrows indicate the gradient direction. The right part is the distribution optimization. The blue solid line is the positive boundary used to limit positive pairs within a hypersphere. The blue dotted line is the negative boundary used to push negative pairs far away from the anchor.
As illustrated above, in our paper we make the following contributions to improve the performance of the RSIR task: (1) We propose to use a softmax function in our novel loss to solve the key challenge of local optima in most methods. This is efficient for realizing global optimization, which can be significant for enhancing the performance of RSIR. (2) We present a novel optimal structured loss to globally learn an efficient deep embedding space with mined informative sample pairs, forcing the positive pairs within a given limit and pushing the negative ones far away from a given boundary. During the training stage, we take the information of all these selected sample pairs and the difference between positive and negative pairs into consideration, making the intraclass samples more compact and the interclass ones more separated while preserving the similarity structure of samples. (3) To further reveal the effectiveness of the RSIR task under the DML paradigm, we perform the task of RSIR with various commonly used metric loss functions on the public remote sensing datasets. These loss functions aim at fine-tuning the pre-trained network to be more adaptive for a certain task. The results show that the proposed method achieves outstanding performance, which is reported in the experiments section. (4) To verify the superiority of our proposed optimal structured loss, we conduct experiments on multiple remote sensing datasets. The retrieval performance is boosted by approximately 5% on these public remote sensing datasets compared with the existing methods [28,[49][50][51], and this demonstrates that our proposed method achieves state-of-the-art results in the task of RSIR.
We would like to present the organization of our paper as follows: We describe the related work from the aspects of metric learning and methods used in RSIR in Section 2. We give a detailed interpretation of our proposed method and the framework of the RSIR with our method in Section 3. In Section 4, we give some details of our experiments and present their results and analysis. Lastly, we present the conclusions of our paper.
Related Work
In this section, we summarize various works related to DML and the task of RSIR. Firstly, we introduce work on clustering-based losses, pair-based structured losses and informative pairs mining strategies. Then, we provide an overview of the development of RSIR based on handcrafted and deep CNN features.
Deep Metric Learning
DML has been a long-standing research hotspot for improving the performance of image retrieval [42][43][44][45][46][52]. There are two different research directions in DML, namely clustering-based and pair-based structured losses. We introduce them in some detail as follows.
Clustering-Based Structured Loss
Clustering-based structured losses aim to learn a discriminative embedding space by optimizing a clustering metric and are applied in many fields of computer vision, such as face recognition [53,54] and fine-grained image retrieval (FGIR) [55,56]. The clustering loss [57] utilizes the structured prediction framework to realize clustering with a higher score for the ground truth than for other assignments. The quality of clustering is measured by normalized mutual information (NMI) [58]. The center loss [54] learns a center for each category as a complement to the softmax loss and obtains appreciable performance in face recognition. The triplet-center loss (TCL) [59] was proposed to learn a center for each category and to separate the cluster centers and their associated samples of different categories. To enhance the performance of FGIR, the centralized ranking loss (CRL) [55] was proposed, aiming to optimize the centers and to improve the compactness of intraclass samples and the separability of interclass samples. Later, the decorrelated global-aware centralized loss (DGCRL) [56] was proposed to optimize the center space by utilizing the Gram-Schmidt orthogonalization operation and to enhance the clustering result by combining it with the softmax loss. However, all these clustering-based structured losses are computationally costly and hard to optimize. Moreover, these losses fail to make full use of the sample relationships, which might contain meaningful information for learning a discriminative space.
Pair-Based Structured Loss
Since a host of structured losses [41][42][43][44][45][46][47] have proven effective in training networks to learn discriminative embedding features, we briefly review the development of pair-based structured losses.
Contrastive loss [41] builds positive and negative sample pairs according to their labels, as ((x_a, x_k), y_ak), and exploits these constructed pairs to learn a discriminative embedding space by minimizing the distance of positive sample pairs and increasing the distance of negative sample pairs beyond a given threshold m. The loss function is defined as follows:

L_contrastive = (1/Q) Σ_{(x_a, x_k)} [ y_ak · D_ak^2 + (1 − y_ak) · ([m − D_ak]_+)^2 ],    (1)

where Q is the volume of samples in the training set, y_ak = 1 when a sample pair (x_a, x_k) has the same label, and y_ak = 0 when a sample pair (x_a, x_k) has different labels. The parameter m is a margin used to limit the distance of negative sample pairs, D_ak indicates the Euclidean distance of a sample pair (x_a, x_k) and is defined as D_ak = ||f(x_a) − f(x_k)||_2, and f(·) denotes the deep feature extracted from the network.
[·]_+ is the hinge function, which limits the values to be positive.
From Equation (1), we can see that this loss function treats positive and negative pairs equally and fails to take into account the difference between positive and negative sample pairs. As it constructs all samples into pairs locally in the training set, it may fall into a local optimum and converge slowly.
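For illustration, the snippet below is a minimal PyTorch sketch of the contrastive loss in its common squared-hinge form (one standard formulation of Eq. (1)); the embedding dimension, batch size and margin are arbitrary.

```python
import torch

def contrastive_loss(f_a, f_k, y_ak, m=1.0):
    """f_a, f_k: (B, d) embeddings of paired samples; y_ak: (B,) with 1 = same label, 0 = different."""
    d_ak = torch.norm(f_a - f_k, dim=1)                        # Euclidean distance D_ak
    pos = y_ak * d_ak.pow(2)                                   # pull positive pairs together
    neg = (1 - y_ak) * torch.clamp(m - d_ak, min=0).pow(2)     # hinge: push negatives beyond m
    return (pos + neg).mean()

f_a, f_k = torch.randn(16, 128), torch.randn(16, 128)
y_ak = torch.randint(0, 2, (16,)).float()
print(contrastive_loss(f_a, f_k, y_ak))
```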
Triplet loss [42] utilizes abundant triplets to learn a discriminative embedding space that forces positive sample pairs to be closer than negative ones by a given margin m. Each triplet is made up of an anchor sample, a positive sample with the same label as the anchor, and a negative sample with a different label from the anchor. To be specific, we denote a triplet as (x_a, x_p, x_n), where x_a, x_p and x_n indicate the anchor, positive and negative sample, respectively. The loss is defined as:

L_triplet = (1/|T|) Σ_{(x_a, x_p, x_n) ∈ T} [ D_ap − D_an + m ]_+,    (2)

where T is the collection of triplets, x_a, x_p and x_n are the indices of the anchor, positive and negative samples, respectively, and |T| is the volume of the triplet set. D_ap = ||f(x_a) − f(x_p)||_2 and D_an = ||f(x_a) − f(x_n)||_2 denote the Euclidean distances of the positive and negative pairs, respectively, and f(·) denotes the deep feature extracted from the network.
[·]_+ is the hinge function, which limits the values to be positive.
We can see from Equation (2) that the triplet loss does not consider the difference between positive and negative sample pairs, which is important for identifying the pairs carrying more information. Although it takes the relationship between positive and negative pairs into consideration, the rate of convergence is still slow and it may get stuck in a local optimum, as this loss encodes the samples of a training set into a set of triplets and fails to make full use of the sample pairs inside the training set globally.
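Analogously, the sketch below implements the triplet loss of Eq. (2) in PyTorch; batch size, dimensions and margin are again arbitrary, and PyTorch's built-in TripletMarginLoss provides an equivalent off-the-shelf implementation.

```python
import torch

def triplet_loss(f_a, f_p, f_n, m=0.2):
    """f_a, f_p, f_n: (B, d) embeddings of anchors, positives and negatives."""
    d_ap = torch.norm(f_a - f_p, dim=1)
    d_an = torch.norm(f_a - f_n, dim=1)
    return torch.clamp(d_ap - d_an + m, min=0).mean()    # hinge [D_ap - D_an + m]_+

f_a, f_p, f_n = (torch.randn(16, 128) for _ in range(3))
print(triplet_loss(f_a, f_p, f_n))
```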
N-pairs loss [43] takes advantage of the structured information between positive and multiple negative sample pairs in the training mini-batch to learn an effective embedding space. This loss function enhances the triplet loss by training the network with more negative sample pairs, where the negative pairs are selected from all negative pairs of the other categories, i.e., one sample pair is selected randomly per category. The N-pairs loss is defined as:

L_N-pairs = (1/N) Σ_{a=1}^{N} log( 1 + Σ_{y_n ≠ y_a} exp(S_an − S_ap) ),    (3)

where Q is the number of categories in a training set, and {(x_a, x_p)}_{a=1}^{N} denote the N sample pairs selected from N different categories, i.e., x_a and x_p are the anchor and its positive sample for a certain category, respectively; {x_n | y_n ≠ y_a} denotes the negative samples for the current anchor, where y_n and y_a denote the labels of x_n and x_a. S_ap = ⟨f(x_a), f(x_p)⟩ and S_an = ⟨f(x_a), f(x_n)⟩ are the dot products of positive and negative pairs, respectively, and f(·) is the feature representation of an instance.
However, this loss fails to take into account the difference between negative and positive pairs and neglects some structured information inside the training set. Furthermore, it only selects one positive pair randomly per class, which could lose some significant information during training.
Lifted structured loss [44] was proposed to address the challenge of local encoding by making full use of the information among all the samples in a training batch. It aims to learn an effective embedding space by considering all negative sample pairs of an anchor, encouraging the distance of each positive pair to be as small as possible and forcing the distances of all negative pairs to be larger than a threshold m. The lifted structured loss is defined as:

L_lifted = (1/(2|P|)) Σ_{(x_a, x_p) ∈ P} ( [ D_ap + log( Σ_{(x_a, x_n) ∈ N} exp(m − D_an) + Σ_{(x_p, x_k) ∈ N} exp(m − D_pk) ) ]_+ )^2,    (4)

where x_a and x_p are the anchor and positive samples, respectively, x_n and x_k are both negative samples, P and N indicate the sets of positive and negative pairs, respectively, and |P| is the cardinality of P. D_ap is the Euclidean distance of a positive pair, and D_an and D_pk are the Euclidean distances of negative pairs. We can see from Equation (4) that the lifted structured loss makes full use of the relationship between positive and negative sample pairs by constructing the hardest triplet while taking all negative pairs into consideration. However, it fails to keep the structured distribution inside the training set and still fails to realize global optimization, as it is a form of hinge loss.
Ranked list loss [46] was proposed to restrict all positive samples to a given hypersphere of diameter α − m and to push the distances of negative sample pairs beyond a fixed threshold α. To be specific, this loss aims at learning a more discriminative embedding space in which the positive and negative sample sets are separated by a margin m, and it utilizes a weighting strategy to account for differences among the negative sample pairs. In its formulation, x_a, x_p and x_n denote the anchor, positive and negative samples, respectively, and Q is the volume of a training set. P_a and N_a are the sets of positive and negative pairs for an anchor x_a. D_ap and D_an are the Euclidean distances of positive and negative pairs, respectively, as described above. β is a parameter used to reflect the weighting degree of negative samples.
The ranked list loss has obtained appreciable performance in multiple image retrieval tasks. However, it does not take into account the relationship between positive and negative sample pairs, which is important for enhancing the robustness and distinctiveness of the network. Moreover, as it utilizes a hinge function for optimization, which easily leads to a local optimum, its performance still cannot meet our demands in RSIR.
To overcome the limitations of existing DML methods, we propose to exploit a softmax function instead of the commonly used hinge function in our loss function to realize global optimization. Furthermore, we make full use of the structured information and maintain the inner similarity structure by setting positive and negative boundaries for sample pairs during the training stage.
Informative Pairs Mining
During the training stage, there are vast numbers of less informative sample pairs, which might slow down convergence and result in a local optimum. It is therefore important to design a good pairs mining scheme for training efficiency. There are many excellent studies on informative pairs mining scheme design [43][44][45][46][53][60]. A semi-hard mining strategy was proposed in FaceNet [53] to sample a handful of triplets in which the negative pair is farther from the anchor than the positive one. A more effective pairs mining framework was proposed to select hard samples from the database for training [60]. Sohn et al. proposed hard negative class mining to collect more informative samples for training the network globally [43]. Song et al. proposed to select harder negative samples to optimize the lifted structured loss [44]. Wang et al. provided a simple pairs mining strategy which selects the sample pairs in violation of a distance restriction [46]. Wang et al. designed a more effective pairs mining scheme which takes the relationship between positive and negative sample pairs into consideration to obtain better performance [45]. In this paper, we propose to utilize the pairs mining scheme proposed in [45] to realize more informative sample pair mining and improve the performance of RSIR.
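As a rough illustration of this kind of mining, the sketch below keeps, for each anchor, only the negatives that are more similar than its hardest (least similar) positive minus a slack, and only the positives that are less similar than its hardest (most similar) negative plus a slack, in the spirit of the scheme of [45] as described above; the slack value and batch construction are assumptions of the example, not the exact rule of [45].

```python
import torch

def mine_pairs(sim, labels, eps=0.1):
    """Return boolean masks of informative positive and negative pairs for every anchor."""
    n = sim.size(0)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(n, dtype=torch.bool)
    pos_mask, neg_mask = same & ~eye, ~same
    keep_pos = torch.zeros_like(pos_mask)
    keep_neg = torch.zeros_like(neg_mask)
    for a in range(n):
        if pos_mask[a].any() and neg_mask[a].any():
            hardest_pos = sim[a][pos_mask[a]].min()    # least similar positive to the anchor
            hardest_neg = sim[a][neg_mask[a]].max()    # most similar negative to the anchor
            keep_neg[a] = neg_mask[a] & (sim[a] > hardest_pos - eps)
            keep_pos[a] = pos_mask[a] & (sim[a] < hardest_neg + eps)
    return keep_pos, keep_neg

emb = torch.nn.functional.normalize(torch.randn(32, 128), dim=1)
labels = torch.randint(0, 8, (32,))
sim = emb @ emb.t()                                    # inner-product similarity matrix S
keep_pos, keep_neg = mine_pairs(sim, labels)
print(int(keep_pos.sum()), "positive and", int(keep_neg.sum()), "negative pairs kept")
```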
The Development of RSIR Task
In the last few decades, the task of RSIR has received extensive attention from researchers, and extensive studies have spawned a whole range of elegant methods. Below we introduce methods for RSIR in terms of traditional handcrafted representations and deep representations. Moreover, we introduce some works related to RSIR under DML.
Early on, researchers tended to extract textural features for remote sensing image classification [11,61]. Datcu et al. presented a dedicated pipeline for the task of RSIR and proposed to utilize a Bayesian inference model to capture spatial information for feature extraction [62]. At the same time, Schroder et al. proposed to exploit Gibbs-Markov random fields (GMRF), which can be used to capture spatial information, to extract features [63]. Daschiel et al. suggested utilizing a hierarchical Bayesian model to extract feature descriptors, which are then clustered by a dyadic k-means method [64]. With the development of general image retrieval, Shyu et al. proposed a comprehensive framework, the geospatial information retrieval and indexing system (GeoIRIS), for RSIR based on CBIR [65]. This system can automatically extract features, mine visual content from remote sensing images and realize fast retrieval by indexing the database. The features are mainly patch-based, which helps to preserve local information. To enhance retrieval precision, they extract various visual features, including general features such as spectral and texture features and anthropogenic features such as linear and object features. However, the methods based on global visual features mentioned above find it hard to maintain invariance to occlusion and translation. With the introduction of SIFT descriptors [15], Yang et al. proposed to utilize BoW to encode SIFT features extracted from remote sensing images, and the experiments demonstrated that methods based on local features can be superior to global visual features [66]. Later works also tended to use local features to realize efficient retrieval [16,67]. More recently, some studies have utilized features extracted from remote sensing images to retrieve local climate zones [68,69]. However, these handcrafted features fail to extract richer information from remote sensing images because of their limited descriptive ability.
With the successful application of deep learning in general image retrieval tasks, deep features extracted from CNNs have gradually been exploited to achieve more appreciable performance in RSIR [10,70,71]. Bai et al. proposed to map deep features into a BoW space [70]. Li et al. proposed to combine handcrafted features with deep features to produce more effective features for RSIR [71]. Ge et al. proposed to combine and compress deep features extracted from pre-trained CNNs to enhance the descriptive power of the features [10]. All these methods have made great contributions to improving the performance of RSIR. However, they are mainly based on pre-trained networks, which might not be suitable for the task of RSIR. To further improve the performance, recent works concentrate on fine-tuning the pre-trained network for RSIR [32,49,50,72,73]. Li et al. proposed to fine-tune a pre-trained CNN, trained on remote sensing datasets, to learn more effective feature descriptors [73]. Li et al. combined a deep feature learning network and a deep hashing network into a novel deep hash neural network, trained in an end-to-end manner for RSIR [72]. Tang et al. proposed to utilize a deep BoW (DBOW) to learn deep features based on multiple patches in an unsupervised way [50]. Wei et al. presented a multi-task learning network connected with a novel attention model and proposed to utilize the center loss for network training [32]. Raffaele et al. proposed to perform VLAD aggregation on local deep features extracted from fine-tuned CNNs with two different attention mechanisms to eliminate the influence of unrelated background [49].
A growing number of works apply DML to remote sensing images to enhance the effectiveness of RSIR [30,33-37]. Roy et al. proposed a metric and hash-code learning network (MHCLN) that learns a semantic embedding space and produces hash codes at the same time [33], aiming at accurate and fast retrieval in the task of RSIR. Cao et al. presented a novel triplet deep metric learning network for RSIR, in which remote sensing images are embedded into a learned space where positive sample pairs are drawn closer and negative ones are pushed away from each other [34]. Subhanker et al. presented a novel hashing framework based on metric learning [35]. Most existing DML methods for RSIR rely on the triplet loss, which suffers from local optimization and inadequate use of sample pairs. In this paper, we investigate the effectiveness of RSIR when applying more advanced DML methods. Furthermore, we propose a more efficient loss function that learns a discriminative embedding space for remote sensing images and achieves strong performance on the task of RSIR.
The Proposed Approach
In this section, we describe our proposed method in detail. First, we give the problem definition for the task of RSIR. In Sections 3.2-3.4, we describe our proposed loss function and the optimization process in detail, and finally we present the overall retrieval framework based on the proposed loss.
Problem Definition
We denote the input images of a training batch as x = {x_1, ..., x_a, ..., x_Q}. There are C classes in the training set, and the labels of the Q input images are denoted as y = {y_1, ..., y_a, ..., y_Q}, where y_a ∈ {1, ..., c, ..., C}; each input image x_a has exactly one label y_a. The input images x are projected onto a d-dimensional embedding space by a deep neural network with batch normalization, denoted f(x, θ); f is the deep mapping function of the network and θ is the set of parameters of f to be optimized. In this paper, we use the inner product S_ak to measure the similarity of any two images (x_a, x_k) during both the training and testing phases, and we write this similarity as S_ak = ⟨f(x_a; θ), f(x_k; θ)⟩. Since every sample in a training batch is used as an anchor and its similarity to all other samples is computed, the similarities of a training batch form a Q × Q matrix S, whose element at position (a, k) is S_ak.
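To make the notation concrete, a minimal PyTorch sketch of the batch similarity computation is given below; the embedding network itself is a placeholder, and the L2 normalization of f(x, θ) is our assumption (consistent with similarities later being treated as lying between −1 and +1), not something stated explicitly at this point of the text.

```python
import torch
import torch.nn.functional as F

def batch_similarity_matrix(embeddings: torch.Tensor) -> torch.Tensor:
    """Compute the Q x Q inner-product similarity matrix S for a mini-batch.

    `embeddings` is the output f(x; theta) of the embedding network for Q images,
    with shape (Q, d). Rows are L2-normalized (an assumption made here) so that
    S[a, k] = <f(x_a), f(x_k)> lies in [-1, 1].
    """
    z = F.normalize(embeddings, p=2, dim=1)   # (Q, d), unit-length rows
    return z @ z.t()                          # (Q, Q) similarity matrix

# Example: Q = 40 images embedded into d = 512 dimensions (random stand-ins).
if __name__ == "__main__":
    fake_embeddings = torch.randn(40, 512)
    S = batch_similarity_matrix(fake_embeddings)
    print(S.shape)           # torch.Size([40, 40])
    print(S.diagonal()[:3])  # self-similarities, all ~1.0 after normalization
```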
Global Lifted Structured Loss
As described in Section 2.1.2, the lifted structured loss is trained on a set of triplets that is constructed dynamically by considering all sample pairs except the positive pair as negatives; it therefore takes all negative pairs but only one positive pair into consideration for each triplet. To address this limitation, a generalized loss function that considers all positive pairs in a training batch was proposed for person re-ID to learn a more discriminative embedding space [74]; it is given in Equation (6). There are two parts in this loss function. The distance between the two samples of a pair is denoted as D_ak = ||f(x_a, θ) − f(x_k, θ)||_2 and m is a margin. In our paper, we use the inner product to measure similarity. Note that the squared Euclidean distance can be converted to the inner product as D_ak^2 = A − 2S_ak (Equation (7)), where A is a constant. From Equation (7), the Euclidean distance and the inner product are inversely related. We therefore rewrite the generalized lifted structured loss in terms of the inner product, as given in Equation (8), where µ is a given margin. However, the generalized lifted structured loss still encodes pairs locally, which might lead to a local optimum. To break through this limitation, we use a softmax formulation to achieve global optimization. As the softmax loss is normally used for classification, we treat our task as the classification of positive versus negative similarities; the formulation is given in Equation (9). Since our target is to increase the similarities of positive pairs (i.e., draw positive pairs closer) and to reduce the similarities of negative pairs (i.e., push negative pairs farther apart), we can take the limit of the similarities of positive and negative pairs. Specifically, we assume that the positive and negative similarities (measured by the inner product) approach +1 and −1, respectively (i.e., the positive and negative Euclidean distances approach 0 and +∞), which means that the numerator in Equation (9) is a constant. We then define the probabilities of the positive and negative similarities for an anchor as R_{y_k = y_a} = A_1 / Σ_{y_k = y_a} e^{−S_ak} and R_{y_k ≠ y_a} = A_2 / Σ_{y_k ≠ y_a} e^{µ + S_ak}, where A_1 and A_2 are constants. Combining the softmax formulation with the generalized lifted structured loss yields the global lifted structured loss of Equation (10). This global lifted structured loss is likely to learn a discriminative embedding space globally. However, it still fails to eliminate the impact of less informative sample pairs and to keep the distribution of sample pairs inside the training batch. To achieve better performance in RSIR, we adopt an efficient pair mining strategy to select sample pairs with richer information and propose a global optimal structured loss that increases intraclass compactness while maintaining the distribution of the selected sample pairs during network training. The mining scheme and the global optimal structured loss are described in detail below.
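The inverse relation between Euclidean distance and inner product invoked above can be checked numerically; for L2-normalized embeddings the constant reduces to A = 2, i.e., D_ak² = 2 − 2S_ak. The snippet below is only a sanity check of that identity under the normalization assumption, not part of the training code.

```python
import torch
import torch.nn.functional as F

# Sanity check of D_ak^2 = A - 2 * S_ak for unit-normalized embeddings (A = 2).
a = F.normalize(torch.randn(512), dim=0)
k = F.normalize(torch.randn(512), dim=0)

euclidean_sq = torch.sum((a - k) ** 2)   # D_ak^2
inner_product = torch.dot(a, k)          # S_ak

print(torch.allclose(euclidean_sq, 2.0 - 2.0 * inner_product, atol=1e-6))  # True
```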
Global Optimal Structured Loss
For the task of RSIR, our target is to increase intraclass compactness and interclass sparsity. However, the global lifted structured loss described in Section 3.2 fails to keep the distribution of sample pairs inside the selected sample-pair set. In this paper, we therefore propose a novel global optimal structured loss to learn an efficient and discriminative embedding space. It limits sample pairs with the same class label (positive sample pairs) within a hypersphere with a diameter of (α − m); this fixed boundary is important for maintaining the similarity distribution of the selected positive pairs of each category. Simultaneously, all negative sample pairs are pushed beyond a fixed boundary α, so that positive and negative sample pairs are separated by a margin m.
We adopt the pair mining strategy described in [45], which exploits the hardest negative pair (the one with the largest similarity among all negative pairs) to mine informative positive pairs and, similarly, samples negative pairs with richer information by considering the hardest positive pair (the one with the smallest similarity among all positive pairs). In other words, for an anchor x_a, we sample the informative positive and negative pairs according to Equations (11) and (12), where ε = 0.1. The informative positive and negative pair sets are denoted as P_a and N_a, respectively. According to Equation (11), a positive pair (x_a, x_p) is selected as an element of P_a by comparing its similarity with the hardest negative similarity, and according to Equation (12), a negative pair (x_a, x_n) is selected as an element of N_a by comparing its similarity with the hardest positive similarity. ε is a hyper-parameter used to control the scope of the informative sample pairs.
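A minimal sketch of this mining step for a single anchor is shown below. It assumes the anchor's similarities to the rest of the batch are available as one row of S together with a boolean same-class mask; the threshold structure (hardest negative plus ε for positives, hardest positive minus ε for negatives) follows the verbal description of Equations (11) and (12) and the mining rule of [45], so the exact form should be checked against the original equations.

```python
import torch

def mine_informative_pairs(sim_row: torch.Tensor,
                           same_class: torch.Tensor,
                           anchor_idx: int,
                           eps: float = 0.1):
    """Select informative positive/negative indices for one anchor.

    sim_row    : (Q,) similarities S_a* of the anchor to every image in the batch.
    same_class : (Q,) boolean mask, True where y_k == y_a.
    Returns index tensors (P_a, N_a); a sketch of the mining rule described in
    the text (hardest-negative / hardest-positive thresholds with slack eps).
    """
    valid = torch.ones_like(same_class)
    valid[anchor_idx] = False                 # exclude the anchor itself

    pos_mask = same_class & valid
    neg_mask = (~same_class) & valid

    hardest_neg = sim_row[neg_mask].max()     # largest negative similarity
    hardest_pos = sim_row[pos_mask].min()     # smallest positive similarity

    # Positives that are not clearly easier than the hardest negative.
    P_a = torch.nonzero(pos_mask & (sim_row < hardest_neg + eps), as_tuple=False).flatten()
    # Negatives that are not clearly easier than the hardest positive.
    N_a = torch.nonzero(neg_mask & (sim_row > hardest_pos - eps), as_tuple=False).flatten()
    return P_a, N_a
```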
To pull the mined positive pairs as close as possible while keeping the similarity distribution of the sample pairs of each class (the positive pairs), we increase their similarities and force them to be larger than the positive boundary (α − m) by minimizing the positive part of our proposed loss function, defined in Equation (13). Similarly, to push the mined negative sample pairs far away from the positive ones and separate positive from negative sample pairs, we decrease the negative similarities and force them to be smaller than the negative boundary α by minimizing the negative part of our proposed loss function, defined in Equation (14). For our proposed global optimal structured loss, we integrate the two minimization objectives and optimize them jointly; since positive and negative sample pairs differ, we weight them with two different hyper-parameters β_1 and β_2. The resulting loss is given in Equation (15), where β_1 = 2 and β_2 = 50. This global optimal structured loss pays more attention to the positive and negative pairs that carry more information, which helps to further improve the performance and effectiveness of the RSIR task.
To make full use of the sample pairs in a mini-batch, we iteratively treat every image in the mini-batch as an anchor and the remaining images as the gallery, and define the loss for a mini-batch as in Equation (16). Once the loss function has been defined, the network parameters can be learned by back-propagation. We minimize L_GOS with gradient descent optimization by performing online iterative pair mining and loss calculation in matrix form. The loss of the deep features f(x, θ) of the training batch is computed with Equation (16), and its gradient with respect to f(x, θ) is given in Equation (17). In Equation (17), w^+_aj and w^−_aj can be regarded as the weights of the positive and negative similarities, respectively. The network parameter update is thus determined by both the positive and the negative similarities, where the loss on the positive (negative) similarities reflects intraclass compactness (interclass sparsity). The optimization process is summarized in Algorithm 1.

Algorithm 1:
For a = 1, ..., Q do
9: Construct the informative positive pair set P_a for anchor x_a as in Equation (11)
10: Construct the informative negative pair set N_a for anchor x_a as in Equation (12)
11: Calculate L_P as in Equation (13) for the sampled positive pairs
12: Calculate L_N as in Equation (14) for the sampled negative pairs
13: Calculate L_GOS(x_a) as in Equation (15) for the anchor x_a
14: end for
15: Calculate L_GOS(x) as in Equation (16) for the mini-batch
16: Backpropagate the gradient and update the network parameters of f(x, θ)
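Since Equations (13)-(16) are not reproduced above, the following PyTorch sketch only mirrors the verbal description: mined positive similarities are pushed above the boundary (α − m) and mined negative similarities below α, each aggregated with a smooth log-sum-exp term weighted by β1 and β2, and the per-anchor losses are averaged over the mini-batch. The function name and the exact aggregation are our own stand-ins and may differ from the loss actually defined in the paper.

```python
import torch

def global_optimal_structured_loss(S: torch.Tensor, labels: torch.Tensor,
                                   alpha: float = 0.6, m: float = 0.5,
                                   eps: float = 0.1,
                                   beta1: float = 2.0, beta2: float = 50.0) -> torch.Tensor:
    """Hypothetical per-batch loss following the verbal description in the text.

    S      : (Q, Q) similarity matrix of a mini-batch.
    labels : (Q,) class labels.
    Mined positives are pushed above (alpha - m), mined negatives below alpha,
    using log-sum-exp aggregation weighted by beta1 / beta2.
    """
    Q = S.size(0)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(Q, dtype=torch.bool, device=S.device)

    total, n_valid = S.new_zeros(()), 0
    for a in range(Q):
        pos_mask = same[a] & ~eye[a]
        neg_mask = ~same[a]
        if pos_mask.sum() == 0 or neg_mask.sum() == 0:
            continue

        hardest_neg = S[a][neg_mask].max()
        hardest_pos = S[a][pos_mask].min()
        S_ap = S[a][pos_mask & (S[a] < hardest_neg + eps)]   # mined positive similarities
        S_an = S[a][neg_mask & (S[a] > hardest_pos - eps)]   # mined negative similarities

        loss_p = S.new_zeros(())
        if S_ap.numel() > 0:   # push positives above the boundary (alpha - m)
            loss_p = (1.0 / beta1) * torch.log1p(
                torch.sum(torch.exp(-beta1 * (S_ap - (alpha - m)))))
        loss_n = S.new_zeros(())
        if S_an.numel() > 0:   # push negatives below the boundary alpha
            loss_n = (1.0 / beta2) * torch.log1p(
                torch.sum(torch.exp(beta2 * (S_an - alpha))))

        total = total + loss_p + loss_n
        n_valid += 1
    return total / max(n_valid, 1)
```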
RSIR Framework Based on Global Optimal Structured Loss
In this section, we illustrate the RSIR framework based on our proposed global optimal structured loss, which consists of a training stage and a testing stage. The framework is presented in Figure 2.
Figure 2. The RSIR framework based on the global optimal structured loss. The upper part denotes the training stage: we fine-tune the pre-trained network with our global optimal structured loss and use the fine-tuned network to extract more discriminative feature representations. The bottom part is the testing stage: the query image and the testing set are input into the fine-tuned network, and the top K similar images are returned.
During the training stage, we use our proposed method to fine-tune the pre-trained network, following the optimization process described in Section 3.4. The pre-trained network extracts deep features and generates a feature matrix for a training mini-batch. We then compute similarities on the feature matrix with the inner product to obtain a similarity matrix of size Q × Q, and use our proposed global optimal structured loss to optimize the embedding space by increasing the similarity of the positive sample pairs and reducing the similarity of the negative ones, both selected with the pair mining scheme. The optimized embedding space forces positive pairs to be compact within a fixed hypersphere and pushes pairs from different classes away from each other by a given margin. At the testing stage, we use the fine-tuned network to extract deep features, which are more discriminative. We compute similarities (inner products) on the feature matrix to obtain a similarity matrix for the test set. Finally, the top K most similar remote sensing images are returned for each query according to the similarity values.
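The testing stage described above reduces to a nearest-neighbour search in the learned embedding space. A compact sketch is given below; the model and the image tensors are placeholders, and the inner product on L2-normalized features is assumed as the similarity, matching the description of the testing stage.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def retrieve_top_k(model, query_images, gallery_images, k=20):
    """Return indices and scores of the top-k gallery images for each query.

    `model` is the fine-tuned embedding network; images are tensors of shape
    (N, 3, 224, 224). Inner product on L2-normalized features is the similarity.
    """
    model.eval()
    q = F.normalize(model(query_images), dim=1)     # (Nq, d) query embeddings
    g = F.normalize(model(gallery_images), dim=1)   # (Ng, d) gallery embeddings
    sim = q @ g.t()                                 # (Nq, Ng) similarity matrix
    topk_scores, topk_indices = sim.topk(k, dim=1)  # top-k most similar per query
    return topk_indices, topk_scores
```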
Experiments and Discussion
In this section, we present the implementation details of our experiments and verify the effectiveness of our proposed method on different remote sensing datasets.
Experimental Implementation
We perform the experiments on Ubuntu 16.04 with a single GTX 1080 Ti GPU and 64 GB RAM, and implement our method in PyTorch. The Inception network with batch normalization [75], pre-trained on ILSVRC 2012-CLS [76], serves as our initial network. During training, an FC layer is added on top of the initial network, behind the global pooling layer. We use Adam as the optimizer. The learning rate is set to 1e-5 for all experiments, and the training process converges after about 600 epochs. We use retrieval precision [50] to report the experimental results. The retrieval precision is defined as TP/R, where TP is the number of returned images belonging to the same category as the query q and R is the number of returned images (candidates). We use every image in the test set as a query and report the average value, AveP = (1/|Q|) Σ_q TP_q/R, where |Q| is the number of query images in the test set, R is the number of images returned per query, and TP_q is the number of true positive images for query q. In this paper, we return only the top 20 retrieved images (candidates), following the setting in DBOW [50].
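For clarity, the AveP measure defined above (the mean over all queries of TP/R, with R = 20 returned images) can be computed as in the following sketch; the label tensors and the ranked index matrix are assumed inputs produced by the retrieval step.

```python
import torch

def average_precision_at_r(topk_indices: torch.Tensor,
                           query_labels: torch.Tensor,
                           gallery_labels: torch.Tensor) -> float:
    """AveP as used in the text: mean over queries of TP / R.

    topk_indices : (|Q|, R) indices of the R returned gallery images per query.
    TP is the number of returned images sharing the query's category.
    """
    retrieved = gallery_labels[topk_indices]                   # (|Q|, R) labels
    correct = (retrieved == query_labels.unsqueeze(1)).float() # 1 where category matches
    per_query_precision = correct.mean(dim=1)                  # TP / R per query
    return per_query_precision.mean().item()
```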
Datasets and Training
Datasets. We perform our experiments on four different remote sensing databases: UCMerced Land Use [16,66], the Satellite Remote Sensing Image Database [77], the Google Image Dataset of SIRI-WHU [17,19,78], and NWPU-RESISC45 [1]. We introduce these benchmark databases as follows: UCMerced Land Use [16,66] was collected by a team at the University of California, Merced, from a large amount of imagery downloaded from the United States Geological Survey (USGS). This dataset is commonly used for retrieval and classification tasks in the field of RSIA. UCMerced Land Use includes 21 geographic categories with 100 remote sensing images per category; each image is 256 × 256 pixels with a 0.3 m spatial resolution. We denote this dataset as UCMD in the remaining parts of this section.
The Satellite Remote Sensing Image Database [77] contains 3,000 remote sensing images of 256 × 256 pixels, with a spatial resolution of 0.5 m per pixel. There are 20 manually labeled geographic categories, each containing 150 images. We denote this dataset as SATREM for convenience in the remainder of this section.
The Google Image Dataset of SIRI-WHU [17,19,78] contains 2,400 remote sensing images with a size of 200 × 200 pixels and a spatial resolution of 2 m per pixel. This dataset contains 12 geographic categories with 200 images per category. For convenience, we denote this dataset as SIRI in the experiments and discussion.
NWPU-RESISC45 [1] is a large-scale remote sensing dataset collected from Google Earth. It contains 31,500 remote sensing images in total, each of 256 × 256 pixels, with spatial resolutions varying from 30 m to 0.2 m. The dataset contains 45 geographic categories, each with 700 remote sensing images. To facilitate the discussion in the remaining parts of this section, we denote this dataset as NWPU.
Training setting. Following the data split protocol used in DBOW [50], we split each dataset into training and testing sets at a ratio of 4:1. All input images are cropped to 224 × 224. To avoid overfitting during training, data augmentation with random cropping and random horizontal mirroring is applied; at the testing stage, a single center crop is used. During training, we set the size of every mini-batch to B.
A mini-batch consists of a number of randomly selected geographic categories, from each of which we sample M random images for training. We set M = 5 in all experiments, following the work of Wang et al. [45]. According to the analysis in the ablation study, we set the hyper-parameters mentioned in Section 3 to β_1 = 2, β_2 = 50, ε = 0.1, α = 0.8, and m = 0.5 in the following experiments.
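The mini-batch construction described above (a number of random categories with M images each) can be implemented with a simple sampler such as the sketch below; the way the training set is indexed is an assumption for illustration.

```python
import random
from collections import defaultdict

def sample_balanced_batch(labels, batch_size=40, images_per_class=5):
    """Draw indices for one mini-batch: (batch_size // images_per_class) random
    categories, with `images_per_class` (M) random images from each category.

    `labels` is a list of class labels for the whole training set.
    """
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)

    n_classes = batch_size // images_per_class
    eligible = [c for c, idxs in by_class.items() if len(idxs) >= images_per_class]
    chosen_classes = random.sample(eligible, n_classes)

    batch = []
    for c in chosen_classes:
        batch.extend(random.sample(by_class[c], images_per_class))
    return batch

# Example: a batch of B = 40 indices, i.e., 8 categories x M = 5 images each.
# batch_indices = sample_balanced_batch(train_labels, batch_size=40, images_per_class=5)
```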
Comparison with the Baselines
Baselines. Tang et al. and Raffaele et al. successively performed comprehensive comparisons over multiple systems [49,50]. For convenience, we refer to the method proposed by Tang et al. as DBOW [50] and the method proposed by Raffaele et al. as ADLF [49]. Besides DBOW and ADLF, we select three other strong methods reported in DBOW and ADLF as baselines; the baselines are introduced in detail in Table 1. For DN7 [28] and DN8 [28], the results are obtained using the DN features extracted from the 7th and 8th fully connected layers, as in DBOW. For ResNet50, the result is obtained using VLAD encodings on top of ResNet50 [51]. We directly use the results reported in their works as references for comparison. To verify the superiority of our proposed global optimal structured loss, we conduct a set of experiments on the four remote sensing datasets and compare our proposed method with the baselines on the task of RSIR. (Table 1 lists, for each baseline, the feature type and dimensionality: convolutional features + VLAD with 1,500 dimensions; DBOW [50], convolutional features + BoW, 16,384 dimensions; ADLF [49], convolutional features + VLAD, 16,384 dimensions.) As mentioned in Section 3, we fine-tune the network with our proposed global optimal structured loss and use the features extracted from the fine-tuned network on the four remote sensing datasets for the RSIR task, comparing against the baselines mentioned above. We set the embedding size to 512 and the batch size to 40 in these experiments. Herein, we denote our proposed global optimal structured loss with the pair mining strategy as GOSLm. The results are presented in Table 2. From Table 2, our global optimal structured loss with the pair mining strategy obtains state-of-the-art results on the SIRI and NWPU datasets: the AveP (%) outperforms DBOW by 4% (from 92.6% to 96.6%) on SIRI and improves by 4.6% (from 85.7% to 90.3%) over ADLF on NWPU. On the UCMD and SATREM datasets, we achieve the second-best performance, with AveP of 85.8% and 91.1%, respectively. The best result on UCMD is obtained by ADLF with query expansion (QE) post-processing, but on the remaining three datasets our method achieves stronger performance than ADLF. DBOW obtains the best performance on SATREM; however, our proposed method outperforms DBOW on the remaining three datasets. Furthermore, it is worth noting that we conduct our experiments with raw feature representations, without any post-processing operations such as whitening, re-ranking, or QE. Our proposed method is thus highly effective in the field of RSIR and obtains state-of-the-art results on commonly used remote sensing datasets. To further investigate the effectiveness of our proposed method, we show the precisions of the different geographic categories of the four remote sensing datasets in Tables 3-6, with the best results highlighted in bold. We use the top 20 retrieved images to compute the precision for each geographic category. From Table 3, our method achieves a marked improvement in nearly half of the categories. Specifically, our proposed method makes the most prominent improvements on "Golf" and "Sparse", with increases of 7% (from 85% to 92%) and 12% (from 79% to 91%), respectively. Moreover, we also obtain smaller improvements on several categories.
Specifically, the proposed method increases the precision by 1% (from 94% to 95%) over DN7 on "Agriculture", by 3% (from 87% to 90%) over DBOW on "Baseball", by 2% (from 93% to 95%) over DBOW on "Storage", and by 1% (from 94% to 95%) over DBOW and ADLF on "Tennis". However, weaker performance is obtained on other categories, reported as follows. The precisions are 82%, 92%, 78%, 95%, 95%, 83%, 95%, 80%, 78%, and 91% on "Airplane", "Beach", "Buildings", "Chaparral", "Forest", "Freeway", "Harbor", "Intersection", "Overpass", and "Runway", respectively, which are around the average level. We also come second on "Mobile", "Parking", and "River", with precisions of 80%, 95%, and 86%, respectively. Our proposed method obtains its worst results on "Dense" and "Medium-density", with precisions of 55% and 59%, respectively. Further inspection of the retrieval results shows that our method confuses images belonging to "Dense" with those of "Medium-density", "Mobile", and "Buildings". The average of all category precisions on UCMD with our proposed method comes in second place, at 85.8%. From Table 4, our method outperforms the state-of-the-art methods on half of the categories of SATREM. In particular, our proposed method provides a large enhancement on the categories "Airplane", "Beach", "Chaparral", and "Ocean", with precisions of 100%, 98%, 100%, and 100%, respectively, which are increases of nearly 4% compared with the existing best results. We also obtain improvements on some other categories: the precisions are increased by 1% (from 97% to 98%) on "Artificial" and by 2% (from 96% to 98%) on "Forest". Moreover, we match the existing best results on the categories "Cloud", "Harbor", and "Runway", with precisions of 100%, 98%, and 97%, respectively. However, our method obtains weaker results on some other categories. We achieve the second-best results on "Agriculture", "Buildings", "Road", and "Storage", with precisions of 92%, 94%, 90%, and 99%, respectively, while the results on "Container", "Dense", "Factory", "Parking", and "Sparse" are around the average level, with precisions of 92%, 92%, 72%, 88%, and 78%. The worst result is obtained on "Medium-density", with a precision of 53%. Further analysis of the retrieval results shows that many incorrect images belonging to "Building", "Dense Residential", and "Factory" are retrieved for "Medium-density" queries. For the average precision over all categories of SATREM, we achieve a competitive result compared with the state of the art; our proposed method obtains the second-best result at 91.1%. The results in Table 5 show that our proposed method achieves state-of-the-art performance in almost all categories. Specifically, we achieve significant improvements over the existing best results on the categories "Harbor", "Overpass", and "Park", with gains of 9% (from 89% to 98%), 6% (from 94% to 100%), and 10% (from 90% to 100%), respectively.
We also increase the precision slightly, by 1% (from 99% to 100%) over DBOW on "Commercial", 2% (from 97% to 99%) over DBOW on "Idle", 2% (from 96% to 98%) over ADLF on "Industrial", 2% (from 93% to 95%) over DBOW on "Meadow", 1% (from 97% to 98%) over DBOW on "Residential", and 1% (from 99% to 100%) over ADLF on "Residential". However, we obtain weaker results on the categories "Pond" and "River", with precisions of 96% and 77%, which are around the average level. The final AveP over all images of SIRI is increased by approximately 4% (from 92.6% to 96.6%). The improvement achieved on the SIRI dataset demonstrates that our method is more effective than the state-of-the-art methods for the task of RSIR.
Comparison with Multiple DML Methods in the Field of RSIR
As described in Section 2.1.2, many elegant DML methods have been proposed and have achieved appreciable performance in general and fine-grained image retrieval. To verify the generalization ability of DML for the task of RSIR, we perform a set of experiments on the four datasets with the common DML methods N-pairs loss [43] and global lifted structured loss [74], our proposed global optimal structured loss, and the latter two losses combined with the pair mining scheme. For convenience, we denote the global lifted structured loss, the N-pairs loss, and our global optimal structured loss as GLSL, N-pairs, and GOSL, respectively, and use the subscript m to indicate that our mining scheme is employed. For all these DML methods, we set the embedding size to 512 and the batch size to B = 40 unless otherwise stated. For GLSL, we follow the experimental implementation and training settings of our proposed global optimal structured loss with the pair mining scheme, with the hyper-parameter set to µ = 0.5. GLSLm follows the same settings as GLSL, with the mining hyper-parameter set to ε = 0.1. For N-pairs, we also follow the experimental implementation and training settings of our proposed method, except that the batch size and the number of images sampled per category are set to B = 20 and M = 2. The AveP (%) results are presented in Table 7. From Table 7, the task of RSIR achieves appreciable performance on the public remote sensing datasets with common DML methods. First, we analyze the performance on the UCMD dataset. Our GOSLm achieves the best performance with AveP = 85.5%, outperforming GOSL, GLSLm, GLSL, and N-pairs by 0.7%, 1.5%, 3.2%, and 3.6%, respectively. Moreover, GLSL and our GOSL with the pair mining scheme increase the AveP by 1.7% and 0.7%, respectively, over their counterparts without the mining scheme. Second, we draw conclusions for the SATREM dataset from the results in Table 7. We achieve the best performance (AveP = 91.1%) with our GOSLm, which outperforms GLSLm and N-pairs by 3.9% and 5.8%, respectively. With the pair mining scheme, the performance of GLSL and GOSL is improved by a wide margin: GOSLm improves the AveP from 86.8% to 91.1% over GOSL, and GLSLm improves the AveP from 85.1% to 87.2% over GLSL. Third, we analyze the results on SIRI with the different DML methods. With the pair mining scheme, our GOSLm obtains the best performance with AveP = 96.6%, outperforming GOSL by 1.3%; the pair mining scheme also improves the performance of GLSL from 94.9% to 95.2%. Moreover, the AveP of our GOSLm is better than that of GLSLm and N-pairs. Finally, we analyze the results on NWPU reported in Table 7. We achieve the best performance with our proposed GOSLm, which is higher than GLSLm and N-pairs by 1.7% and 6.0%, respectively. Furthermore, GLSLm increases the AveP by 3.1% over GLSL, and the proposed GOSLm increases the AveP by 4.5% over GOSL. In brief, our proposed global optimal structured loss with the pair mining scheme achieves the best performance on the four popular remote sensing datasets; the proposed loss is more effective than the common DML methods, and the pair mining scheme helps to further boost the performance of DML methods.
To further study the efficiency of our proposed method, we use Recall@K [44] (K = 1, 2, 4, 8, 16, 32) to evaluate the performance of RSIR with these common DML methods and our proposed method. Recall@K is a common metric in retrieval tasks, defined as the average recall score over all query images in a test set. We perform the experiments on the four remote sensing datasets with the same settings as in the first part of this section; the results are reported in Tables 8-11. From Table 8, we achieve the best performance with our proposed GOSLm at Recall@K (K = 1, 2, 4, 8, 16, 32), with Recall@1 = 98.5%, Recall@2 = 98.8%, Recall@4 = 99.0%, Recall@8 = 99.0%, Recall@16 = 99.2%, and Recall@32 = 99.7%. It is worth noting that Recall@1 is the most important index for analyzing the effectiveness of the methods. The proposed GOSLm outperforms GOSL, GLSLm, GLSL, and N-pairs by 2.9%, 3.8%, 4.3%, and 3.2%, respectively, at Recall@1. The result of GOSLm is increased by 2.9% over GOSL at Recall@1, and GLSLm increases Recall@1 by 0.5% over GLSL. We can conclude that the global optimal structured loss with the pair mining scheme is superior to the other DML methods and that the pair mining scheme is significant for improving the retrieval performance on the UCMD dataset. From the results in Table 9, our proposed GOSLm achieves the best performance at Recall@K (K = 1, 2, 4, 8, 16, 32), with Recall@1 = 94.8%, Recall@2 = 97.0%, Recall@4 = 98.5%, Recall@8 = 99.3%, Recall@16 = 100%, and Recall@32 = 100%. The Recall@1 of GOSLm outperforms GOSL, GLSLm, GLSL, and N-pairs by 1.5%, 0.3%, 2.0%, and 1.2%, respectively. Moreover, the performance of GOSLm is increased by 1.5% over GOSL, and that of GLSLm by 1.7% over GLSL, at Recall@1. From these analyses, our proposed GOSLm shows great superiority and effectiveness on SATREM for the task of RSIR. From Table 10 we can conclude the following. We achieve the best results with our proposed GOSLm at Recall@K (K = 1, 2, 4, 8, 16, 32), with Recall@1 = 97.2%, Recall@2 = 97.5%, Recall@4 = 98.1%, Recall@8 = 98.7%, Recall@16 = 99.1%, and Recall@32 = 99.5%. The proposed GOSLm outperforms GOSL, GLSLm, GLSL, and N-pairs by 1.2%, 1.4%, 1.8%, and 2.2%, respectively, at Recall@1. We also observe that the methods with the mining scheme are helpful for improving RSIR performance: specifically, the Recall@1 of GOSLm and GLSLm is improved by 1.2% and 0.4% over GOSL and GLSL, respectively. We can conclude from these analyses that our proposed global optimal structured loss with the pair mining scheme is superior to the other DML methods and that the pair mining scheme helps to improve the retrieval performance on SIRI. From Table 11, the proposed GOSLm obtains the best results at Recall@K (K = 1, 2, 4, 8, 16, 32), with Recall@1 = 91.1%, Recall@2 = 94.3%, Recall@4 = 96.3%, Recall@8 = 97.6%, Recall@16 = 98.3%, and Recall@32 = 98.7%. The proposed GOSLm outperforms GOSL, GLSLm, GLSL, and N-pairs by 3.7%, 0.8%, 3.9%, and 3.8%, respectively, at Recall@1. We can also see that GOSL and GLSL are improved by 3.7% (from 87.4% to 91.1%) and 3.1% (from 87.2% to 90.3%), respectively, at Recall@1 when the pair mining scheme is used.
These analyses further demonstrate that our proposed global optimal structured loss with the pair mining scheme is more effective than the other DML methods and that the pair mining scheme plays a significant role in improving the retrieval performance on the NWPU dataset.
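For reference, the Recall@K values reported in Tables 8-11 can be computed from the ranked retrieval lists as in the sketch below; as with the AveP sketch, the label tensors and the index matrix are assumed inputs.

```python
import torch

def recall_at_k(topk_indices, query_labels, gallery_labels, ks=(1, 2, 4, 8, 16, 32)):
    """Recall@K as used above: fraction of queries whose top-K returned images
    contain at least one image of the query's category.

    topk_indices : (|Q|, max(ks)) ranked gallery indices per query.
    """
    retrieved = gallery_labels[topk_indices]          # (|Q|, max_k) labels
    hits = retrieved == query_labels.unsqueeze(1)     # True where the category matches
    return {k: hits[:, :k].any(dim=1).float().mean().item() for k in ks}
```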
We report the errors of omission and commission for several easy and hard retrieval cases on UCMD to further validate the effectiveness of our proposed method. We show the top-10 similar images returned by N-pairs, GLSLm, and our proposed GOSLm in Figure 3. For each retrieval case, the top, middle, and bottom rows denote the results obtained with our GOSLm, GLSLm, and N-pairs, respectively; returned images with green and red borders denote true and false retrieval results, respectively. From Figure 3, there is no omission or commission in the three easy retrieval cases for any of the three methods, which means that all three methods achieve excellent retrieval performance for the three easy categories (agricultural, storage tanks, and tennis court). However, on the three hard cases, GOSLm, GLSLm, and N-pairs perform worse, because the categories buildings, dense residential, and medium residential have very low interclass variability. On case 4, the errors of GOSLm are fewer than those of GLSLm and N-pairs. On case 5, the errors of GOSLm, GLSLm, and N-pairs are three, five, and five, respectively, showing that our proposed GOSLm outperforms GLSLm and N-pairs on the dense residential category. On case 6, the errors of GOSLm, GLSLm, and N-pairs are two, four, and five, respectively, which demonstrates that our proposed GOSLm is more effective than the other two DML methods.
In summary, our GOSLm achieves the best performance on the easy retrieval cases and exhibits great superiority in coping with the challenge of low interclass variability present in most categories of remote sensing images, compared with the other DML methods.
Figure 3. Six retrieval cases with top-10 returned results on UCMD. The left part represents three easy retrieval cases and the right part represents three hard retrieval cases. For each retrieval case, the top, middle and bottom rows denote the results obtained by using the methods of our GOSLm, GLSLm, and N-pairs. The green and red borders denote true and false retrieval results, respectively.
Ablation Study
In this section, we perform an ablation study on the remote sensing datasets. We analyze the hyper-parameters of our global optimal structured loss, the performance of our method with different embedding sizes, and the impact of the batch size on our proposed method. Details are given below.
Hyper-Parameter Analysis
We analyze the main parameters mentioned in Section 3 on the Google Image Dataset of SIRI-WHU [17,19,78] with the Inception network with batch normalization [75]. We set the embedding size to 512 and the batch size to 40 in these experiments, and we set ε = 0.1 (defined in Equations (11) and (12)) and β_1 = 2 and β_2 = 50 (the parameters in Equation (16)) by following the settings of [45]. We use the average value of precision (AveP) to measure the performance of RSIR, in the same way as DBOW.
The effectiveness of the fine-tuned network is crucial for extracting more discriminative features, which in turn is significant for obtaining better performance in the task of RSIR. In our proposed method, we use a fixed positive boundary (α − m) to restrict the positive pairs and a given negative boundary α to force the negative pairs farther away than this boundary; m is therefore a fixed margin separating the two boundaries. Different values of α and m lead to different retrieval results. To achieve the best performance in the RSIR task, we present our hyper-parameter analysis of α and m as follows.
As described in Section 3.4, the factor α is a hyper-parameter used to keep the negative pairs far away from the positive pairs. We examine different values of α in {0.5, 0.6, 0.7, 0.8, 0.9, 1.0} while fixing m = 0.5, and present the results in Table 12. From Table 12, the AveP increases monotonically as α grows up to 0.6 and decreases when α is larger than 0.6. We achieve the best result of 96.6% when α is 0.6, and therefore set α = 0.6 in the experiments and discussion section.
The factor m is used to separate positive sample pairs from negative ones. We study the impact of m by setting its value in {0.1, 0.2, 0.3, 0.4, 0.5, 0.6} while fixing α at 0.6; the results are shown in Table 13. From Table 13, the performance gradually increases while m is smaller than 0.5 and degrades when m is larger than 0.5. The best result of 96.6% is achieved at m = 0.5, so we select m = 0.5 for the following experiments according to the results in Table 13.
Impact of Embedding Size
Following the work of Wang et al. [45], the embedding size used during training has an important impact on retrieval performance. We compare the effectiveness of our proposed loss function on the UCMD, SATREM, SIRI, and NWPU datasets with embedding sizes in {64, 128, 256, 512, 1024}, with the batch size set to B = 40. The results are reported in Table 14, with the best results highlighted in bold. From Table 14, the performance on UCMD, SATREM, SIRI, and NWPU grows steadily as the embedding size increases up to 512 and drops at 1024. The best results on the four datasets are obtained with an embedding size of 512.
Impact of Batch Size
The batch size plays an important role in DML methods, as it determines the size of the problem processed at each iteration of the training phase. We perform a set of experiments on the UCMD, SATREM, SIRI, and NWPU datasets with the embedding size fixed at 512 and batch sizes of {10, 20, 40, 60, 100, 160}; the results are reported in Table 15. As the number of categories is limited in each dataset, the batch sizes of the four datasets are limited to 105, 100, 60, and 225, respectively; once the batch size is larger than this upper limit, the corresponding result is invalid. From Table 15, the batch size influences the four datasets to different degrees. The changes in performance remain within about 1% on UCMD and SIRI, whereas SATREM and NWPU are the most sensitive to the variation of the batch size, with performance changing from 86.5% to 91.1% and from 83.9% to 90.3%, respectively. We obtain the best performance on all four datasets with a batch size of 40.
The Retrieval Execution Complexity
In this section, we analyze the retrieval execution complexity of the retrieval system with our proposed method. We measure the time (in milliseconds) required for the retrieval process, which includes deep feature extraction and similarity matching. During deep feature extraction, it takes about 10 milliseconds to extract the deep features of a 224 × 224 image, which is faster than the fastest existing RSIR methods [49]. We report the results in Table 16 and compare the retrieval time (similarity matching) with that taken from ADLF [49]. From Table 16, as the size of the test database grows, more time is required for retrieval, and the same holds for the embedding size. Concretely, when the embedding size is 256, our retrieval execution time is lower than that of ADLF, the fastest existing method, by 1.36, 2.42, 9.64, 25.9, 45.64, and 73.63 milliseconds for DB sizes of 50, 100, 200, 300, 400, and 500, respectively. When the embedding size is 512, the retrieval execution time is lower than that of ADLF by 0.68, 2.97, 10.41, 15.24, 28.12, and 42.55 milliseconds for DB sizes of 50, 100, 200, 300, 400, and 500, respectively. We achieve the lowest retrieval execution times with an embedding size of 256, namely 0.28, 0.40, 0.66, 1.03, 1.49, and 2.31 milliseconds at DB sizes of 50, 100, 200, 300, 400, and 500, respectively. The embedding size affects the retrieval time by less than 2 milliseconds, which is small compared with DN7, DN8, DBOW, and ADLF. Based on the discussion above, our proposed method achieves state-of-the-art performance with lower retrieval time.
Conclusions
In this paper, we propose a novel global optimal structured loss under the DML paradigm for more effective remote sensing image retrieval. Our proposed global optimal structured loss aims to learn an effective embedding space in which positive pairs are limited within a given positive boundary, negative pairs are pushed away from a fixed negative boundary, and positive and negative pairs are separated by a fixed margin. To deal with the key issue of local optimization in most DML methods, we use a softmax function rather than a hinge function in our loss function to realize global optimization. To make full use of the sample pairs and to take the difference and relationship between positive and negative sample pairs into consideration, we use a superior pair mining strategy to mine more informative sample pairs within the confusion scope. This helps to eliminate the influence of less informative sample pairs, and the mined sample pairs are used to establish an elegant similarity structure for positive and negative sample pairs whose distribution is preserved during embedding space optimization. Furthermore, our proposed global optimal structured loss achieves state-of-the-art performance with the lowest retrieval time on four popular remote sensing datasets compared with the baselines.
Herein, we study the effectiveness of DML methods in the task of RSIR and concentrate on how to design a more elegant loss function for more effective embedding space learning. The experimental results show that our proposed method achieves state-of-the-art performance under the AveP and Recall@K metrics when compared with other common DML methods. We also improve the retrieval performance on SIRI and NWPU over the baselines by a large margin and refresh the state-of-the-art results. However, we only achieve the second-best performance on UCMD and SATREM. It is worth noting that we do not apply any post-processing operations or extra techniques, such as query expansion or attention mechanisms, to our proposed method. From the discussion presented, our method still fails to extract the most informative feature representations, which could be significant for improving retrieval performance. We plan to combine attention networks with DML methods and to use post-processing operations to further enhance the performance of RSIR in our future work.
Analysis of D-A locus of tRNA-linked short tandem repeats reveals transmission of Entamoeba histolytica and E. dispar among students in the Thai-Myanmar border region of northwest Thailand
Intestinal parasitic infections, including those caused by Entamoeba species, are a persistent problem in rural areas of Thailand. The aims of this study were to identify pathogenic Entamoeba species and to analyze their genotypic diversity. Stool samples were collected from 1,233 students of three schools located in the Thai-Myanmar border region of Tak Province, Thailand. The prevalence of Entamoeba infection was measured by polymerase chain reaction (PCR) using species-specific primers. Thirty-one (2.5%) positive cases were detected for E. histolytica, 55 (4.5%) for E. dispar, and 271 (22.0%) for E. coli. Positive samples for E. histolytica and E. dispar were obtained exclusively from a few school classes, whereas E. coli was detected in all grades. No infections caused by E. moshkovskii, E. nuttalli, E. chattoni, or E. polecki were detected in the students studied. The D-A locus of tRNA-linked short tandem repeats was analyzed in samples of E. histolytica (n = 13) and E. dispar (n = 47) to investigate their diversity and potential modes of transmission. Five genotypes of E. histolytica and 13 genotypes of E. dispar were identified. Sequences of the D-A locus were divergent, but several unique genotypes were significantly prevalent in a limited number of classes, indicating that intra-classroom transmission had occurred. As it is unlikely that infection would have been limited to particular school classes if the mode of transmission of E. histolytica and E. dispar had been the intake of contaminated drinking water or food, these results suggest a direct or indirect person-to-person mode of transmission within school classes. Positive rates for the three Entamoeba species were 2-fold higher in students who had siblings in the schools than in those without siblings, suggesting that transmission also occurred at home owing to close contact among siblings.
Introduction

The study was approved under IRB Nos. 236/54 and 246/61, and written informed consent was obtained from all participants or from their parents or guardians prior to stool sample collection.
Study area and collection of samples
A cross-sectional study was conducted at three schools (A to C) located in the Thai-Myanmar border region in Tha Song Yang, the northwestern-most district of Tak Province, Thailand, in July 2018 (Fig 1). School A has a kindergarten (2 grades), primary school (6 grades), and secondary school (3 grades). School B is located on a hill, and is a branch of school A, comprising a small-scale kindergarten and primary school. School C is a secondary school (6 grades).
Clean, wide-mouthed, screw-capped plastic containers and spatulas were distributed to the children (or their parents) with instructions for stool sample collection. All 1,788 students of the three schools, school A (n = 1,144), school B (n = 82), and school C (n = 562), were requested to submit stool samples. The next day, stool samples were collected, kept cool on ice, and transported to the laboratory at Chulalongkorn University, Bangkok. A total of 1,233 stool samples were obtained, accounting for 69% of all students of the three schools (70.6%, 53.7%, and 67.8% from schools A, B, and C, respectively). The main reasons for students not providing stool samples were no defecation on that morning or unwillingness to participate. Characteristics of the students who participated in this study are summarized in Table 1. Herein, individual classrooms of the kindergarten, primary, and secondary schools are referred to as 'Kin-', 'Pri-', and 'Sec-', respectively, followed by the grade and the specific room.

Samples containing quadrinucleate Entamoeba cysts were washed several times with distilled water and then cultured in Robinson's medium at 37˚C. Grown trophozoites were treated with a cocktail of antibiotics and were then cultured monoxenically with Crithidia fasciculata in YIMDHA-S medium supplemented with 15% adult bovine serum at 37˚C [22]. Finally, some of the isolates were cultured axenically in the medium.
Extraction of DNA and polymerase chain reaction (PCR) analysis
Genomic DNA was isolated from the stool samples using a QIAamp Fast DNA Stool Mini Kit (Qiagen), and genomic DNA from cultured trophozoites was isolated using a DNeasy Blood and Tissue Kit (Qiagen). PCR amplification of the partial 18S rRNA genes of E. histolytica, E. dispar, E. nuttalli, E. coli, and E. chattoni was performed using primers specific for each species [23-25]. PCR amplification of the 18S gene of E. moshkovskii was performed under the same conditions described in a previous study but with newly designed primers [26]. The sequences of the primers and the annealing temperatures used are shown in Table 2. Genomic DNA isolated from cultured trophozoites of E. histolytica HM-1:IMSS, E. dispar SAW1734RclAR, E. moshkovskii Laredo, and E. nuttalli P19-061405 was used as a positive control. For E. chattoni and E. polecki, genomic DNA extracted from cysts in fecal samples of macaques and pigs, respectively, was used as the positive control. The D-A locus of tRNA-STR was also amplified using primers common to E. histolytica and E. dispar [27].
Sequencing
PCR products were purified with either a QIAquick PCR purification kit (Qiagen) or QIAquick Gel Extraction Kit (Qiagen), and were sequenced using a BigDye Terminator v3.1 cycle sequencing kit (Applied Biosystems, Carlsbad, CA, USA) with an Applied Biosystems 3500 Genetic Analyzer (Applied Biosystems).
Data analysis
Categorical data were expressed as odds ratios (ORs) with 95% confidence intervals (CIs) and were compared between groups with the Chi-square test; a difference was considered statistically significant at p < 0.05. Data analysis was performed using Prism ver. 6.
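As an illustration of this analysis, the odds ratio with its 95% CI (Woolf method) and the Chi-square test for a 2 × 2 contingency table (for example, infection status versus having siblings in the same school) can be reproduced in Python as sketched below; the counts are placeholders, and the original analysis was performed in Prism.

```python
import math
from scipy.stats import chi2_contingency

# Hypothetical 2 x 2 table: rows = siblings yes/no, columns = infected yes/no.
a, b = 30, 370   # with siblings:    infected / not infected (placeholder counts)
c, d = 40, 793   # without siblings: infected / not infected (placeholder counts)

# Odds ratio and 95% confidence interval (Woolf / log-OR method).
odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

# Chi-square test on the same table; p < 0.05 taken as significant.
chi2, p_value, dof, expected = chi2_contingency([[a, b], [c, d]])

print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), p = {p_value:.4f}")
```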
Prevalence of intestinal parasites observed by microscopy
The prevalence of each parasite was measured by microscopy, and is summarized in Table 3.
Prevalence of Entamoeba species detected by PCR
PCR was performed to increase the sensitivity of detection and to distinguish between E. histolytica and morphologically similar species. Thirty-one (2.5%) of the samples were positive for E. histolytica, 55 (4.5%) were positive for E. dispar, and 271 (22.0%) were positive for E. coli ( Table 4). The prevalence of each species was the highest in school B and was the lowest in school C. The prevalence of E. histolytica and E. dispar was 2.5% and 4.8% in school A, 6.8% and 20.5% in school B, and 2.1% and 1.8% in school C, respectively. The prevalence of E. dispar significantly differed among the three schools (p < 0.0001 for school A vs B and B vs C; p = 0.0126 for school A vs C), whereas no significant difference was found among schools for the prevalence of E. histolytica (S1 Table).
Distribution of students positive for Entamoeba in each grade and class
The distribution of students in school A who were positive for Entamoeba is shown in Table 5. The 20 students who were positive for E. histolytica were exclusively from grade 4 of primary school through grade 2 of secondary school, with the greatest numbers of positives found in grades 5 (n = 7) and 6 (n = 10) of primary school. However, the distribution of students who were positive for E. histolytica differed among the classes of grades 5 and 6 (Fig 2A). Of the students who were positive, 5 (33.3%) were from class Pri-5c, 2 (8%) from Pri-5b, 9 (26.5%) from Pri-6a, and 1 (3.3%) from Pri-6c; no students from classes Pri-5a or Pri-6b were positive. E. dispar, in contrast, was detected in all grades, although the majority of positive students were from grades 4 and 5 (6 and 19 of 39, respectively) of primary school. Nine (32.1%) students from class Pri-5a and 10 (40%) students from Pri-5b were positive, but no positive result for E. dispar was detected among students from Pri-5c. E. coli was detected in all grades, with a prevalence ranging from 13.6% to 32.4%, and in all classes, with a prevalence varying from 5% to 48%.
In school B, we found that students who were positive for E. histolytica were exclusively from grades 1 and 3 of primary school (Table 6 and Fig 2B). Similarly, 8 of the 9 students who were positive for E. dispar were also from grades 1 to 3. However, students who were positive for E. coli were distributed across all grades, with prevalence varying from 25% to 100%.
In school C, 7 of the 8 students who were positive for E. histolytica were from grades 1 and 2 of secondary school (Table 7): one (3.4%) student from class Sec-1b, 2 (7.4%) from Sec-1c, 3 (23.1%) from Sec-2c, and 1 (5%) from Sec-2b were positive, while no student in classes Sec-1a or Sec-2a was positive (Fig 2C). All 7 students who were positive for E. dispar were from grade 4 of secondary school: 3 (9.7%) students from class Sec-4a, 3 (12.5%) from Sec-4b, and 1 (4.7%) from Sec-4c were positive, but no positives were detected in Sec-4d. In contrast, E. coli was detected in all grades, with the prevalence varying from 8.8% to 19.2%. Only class Sec-4c had no cases of E. coli infection; in the other classes, the prevalence ranged from 4% to 33.3%.
Isolation of E. histolytica and E. dispar
Successful in vitro culture of samples containing quadrinucleate Entamoeba cysts in Robinson's medium was achieved for 14 isolates. Of these, PCR identified E. histolytica in 5 isolates (school A, n = 2; school B, n = 3), while 9 isolates belonged to E. dispar (school A, n = 7; school B, n = 1; school C, n = 1). Genotypic analysis was subsequently performed. Three E. histolytica isolates (school A, n = 1; school B, n = 2) were established as axenic strains in YIMDHA-S medium.
Genotypic analysis of E. histolytica
Sequencing was used to identify polymorphisms of the tRNA-STR at the D-A locus. Sequences were successfully obtained for 13 (including 5 cultures) of the 31 E. histolytica samples and for 47 (including 9 cultures) of the 55 E. dispar samples. Five E. histolytica genotypes were identified (Fig 3). Three genotypes were identified in school A, 1 in school B, and 4 in school C. Of the 5 genotypes, 3 were common to schools A and C, and 2 were found exclusively in school B or school C (Fig 3). The most prevalent genotype (Eh2DA) was observed in 5 samples. The exclusive prevalence of genotype Eh5DA in school B was significant (p = 0.0047 for school A vs B and B vs C) (S2 Table). The prevalence in B-Pri-1d was also significantly higher than the total prevalence of the other classes of the three schools (p = 0.0050).
Genotypic analysis of E. dispar
Thirteen E. dispar genotypes were identified in 47 samples, indicating relatively greater genetic diversity (Fig 4). Eight genotypes were identified in school A, 3 in school B, and 4 in school C. Only two genotypes were common to different schools, and the 11 other genotypes were each unique to a single school. The most prevalent genotype, Ed4DA, was observed in 21 samples from school A, including all 7 positive samples from class Pri-5a and 7 of 8 samples from Pri-5b (Fig 5). The prevalence in Pri-5a was significantly higher than that in the remaining classes of school A (p = 0.0242) (S3 Table). The prevalence of Ed4DA in grade 5 (Pri-5a and Pri-5b) was also significantly higher than that in the remaining classes of primary school (p = 0.0204). These results indicated that transmission of Entamoeba occurred within classes and grades. Genotype Ed4DA was also prevalent in grades 4 (class Pri-4a) and 6 (class Pri-6c), whereas genotype Ed12DA was found exclusively in the lower grades of school A, including classes Kin-1b, Kin-2a, and Pri-2a. In contrast, the genotypes Ed3DA and Ed10DA were found exclusively in school A's secondary school. In school B, genotype Ed7DA was prevalent in grade 1 (class Pri-1d), whereas genotype Ed5DA was prevalent in grades 2 (class Pri-2d) and 3 (class Pri-3d). The prevalence of Ed7DA in Pri-1d was significantly higher than that in the remaining classes of school B (p = 0.0027), indicating that intra-class transmission occurred (S4 Table). The prevalence of Ed5DA in Pri-3d and Pri-2d was also significantly higher than that in the other classes of school B (p = 0.0027) (S5 Table). In class Sec-4b of school C, the 3 E. dispar samples had different genotypes: Ed2DA, Ed6DA, and Ed7DA.
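Comparisons of this kind (the prevalence of a given genotype in one class or grade versus all remaining classes) reduce to a 2x2 contingency table and can be evaluated with Fisher's exact test. The short Python sketch below illustrates the calculation; the counts used are hypothetical placeholders rather than the exact per-class denominators of this study.

from scipy.stats import fisher_exact

# Hypothetical counts: genotype-positive vs. genotype-negative students
# in the class of interest and in all remaining classes combined.
table = [[7, 21],     # class of interest: 7 positive, 21 negative
         [14, 1191]]  # remaining classes: 14 positive, 1191 negative

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4g}")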
Entamoeba prevalence and genotypes in siblings
Of the 1,233 students included in this study, 400 had siblings in the same schools. The positive rates for the three Entamoeba species among students who had siblings were about 2-fold higher than those among students who did not have siblings in their school (p = 0.0028 for E. dispar and p < 0.0001 for E. coli) (Table 8). Although the difference in E. histolytica infection prevalence was not statistically significant (p = 0.055), it approached significance. To study the transmission of Entamoeba within families, genotyping was used to identify siblings who were positive (Fig 5). Two pairs of siblings were among the 31 E. histolytica-positive children.
The Eh5DA genotype of E. histolytica was shared by one sibling pair (circled 1) from school B, whereas the genotypes of the other sibling pair, from school A, could not be analyzed. Moreover, siblings from six families were among the 55 E. dispar-positive children. Genotypes for E.
Discussion
This study revealed that the prevalence of intestinal parasites differed significantly among three schools located in the same district. School B, which is rural and small, had the highest prevalence, especially for soil-transmitted nematodes such as A. lumbricoides and T. trichiura. In Thailand, a high prevalence of intestinal parasitic infections such as those caused by nematodes has been reported in children of the Karen Hill tribe and in immigrants from Myanmar [2,28,29]. We observed a much higher prevalence of A. lumbricoides in school B than previously reported in Thailand [1][2][3]. Students who attended school B lived in close proximity to the school. Most of them lacked household latrines and drank untreated water from streams running through their villages. Therefore, the main reason for the high prevalence of parasitic infections in school B is most likely poor hygiene. Further studies will be needed to investigate whether the prevalence of this parasite is also relatively high within these students' families. The prevalence of E. histolytica/E. dispar and E. coli as detected by PCR was 6.9% and 22%, respectively, with a single mixed infection of E. histolytica and E. dispar detected. This was about twice as high as the prevalence observed by microscopy. Although concentration techniques may increase detection by microscopy, some studies have reported no difference for protozoa [3,29,30]. We showed that E. dispar, but not E. histolytica, was prevalent in the studied populations. Previous studies have reported that E. dispar was more prevalent than E. histolytica in hospitals in Bangkok [31,32] and that E. histolytica was prevalent in the Thai-Myanmar border region, such as in Phang-Nga Province [33].
Our main finding was that E. histolytica and E. dispar infections were distributed in a limited number of classes. In grade 5 of school A's primary school, the number of samples positive for E. histolytica was 0 (0%) in class Pri-5a, 2 (8%) in Pri-5b, and 5 (33.3%) in Pri-5c. In the same classes, the number of samples positive for E. dispar was 9 (32.1%), 10 (40%), and 0 (0%), respectively, suggesting that transmission occurred within classes. This also suggests that transmission of the two species must have occurred independently, even if both species share the same mode of transmission. It has previously been reported that the D-A locus of tRNA-STR is highly variable and is thus useful for Entamoeba fingerprinting [15,19,21,[34][35][36].
The present study showed that the prevalence of genotypes such as Ed4DA and Ed7DA was significantly higher in several classes, indicating that transmission occurred within the classroom. To our knowledge, this is the first report revealing the transmission of Entamoeba species within the classroom setting.
Concerning the high prevalence of Ed4DA in both Pri-5a and Pri-5b, in addition to intra-class transmission, the possibility that inter-class transmission between these two classes occurred could not be ruled out. However, because of the cross-sectional nature of the present study, it is difficult to prove whether transmission between these two classes occurred once (with subsequent intra-class transmission), a few times, frequently, or not at all (independent intra-class transmission). It is also reasonable to consider that contact between school children is longer and more extensive within the same classroom than between different classrooms. In fact, there was no E. dispar-positive case in A-Pri-5c, despite it being part of the same grade. Therefore, it is reasonable to infer that the incidence of intra-classroom transmission is higher than that of inter-classroom transmission. As different genotypes were found in the secondary schools, this also suggests that children in primary schools have more person-to-person contact than older students. This study also provides evidence of the considerable genetic diversity of E. dispar, and demonstrates that the mobility of the parasite is restricted to relatively small areas where transmission is maintained [19][20][21].
In general, the mode of transmission for Entamoeba is through ingestion of contaminated drinking water or food [37][38][39][40][41][42]. However, the supplied drinking water at school A was filtered, and for lunch, children either bought food that was sold at the school or were provided with side dishes that were cooked in a central kitchen at the school. If water or food was the source of infection, it is unlikely that infections would be limited to specific classes. Therefore, the most likely route of transmission would be through direct or indirect person-to-person contact during daily activities in classes and in grades. It is also improbable that Entamoeba was transmitted through sexual behavior in children. As such, we propose a possible mode of transmission that is similar to that of Enterobius vermicularis. Although the prevalence of E. vermicularis was not investigated in this study, a prevalence of 7.8% has previously been reported in 2 primary schools in Tak Province [43]. Further study is required to test this hypothesis through simultaneous detection of Entamoeba cysts and E. vermicularis eggs in samples of dirt collected from under the fingernails. Indeed, cleanliness of the fingernails has been shown to have a significant effect on the prevalence of intestinal parasitic infections in school children [44].
In the present study, students who had siblings in the same school showed a significantly higher prevalence of E. dispar and E. coli infections than students who did not have siblings in the school, although the higher prevalence of E. histolytica was not significant, suggesting that transmission of Entamoeba between siblings occurred at home. Indeed, the fact that four of six sets of siblings with E. histolytica and E. dispar infections had identical genotypes at the D-A locus further supports the importance of this transmission route. The prevalence of genotype Ed5DA in Pri-3d and Pri-2d of school B may be due to familial transmission. Because there were siblings infected with E. dispar showing an identical genotype, Ed4DA, in Pri-5a and Pri-5b, it is possible that the siblings carried the infection from home into two classrooms of the school. Another possibility is that one of the siblings was infected in a classroom, carried the infection home and transmitted it to siblings, and that transmission then extended to another classroom from home. There is also the possibility of transmission from school to home and vice versa [20]. It would be interesting and important to establish the direction of transmission. For further analysis, the following questions need to be answered: Were the children infected before they entered school? Are their families exposed to the parasite or infected? Were the children infected in the school? Initial infection might occur at home; however, because of the limitations of a cross-sectional design, it is difficult in this study to prove the direction of transmission or to ascertain how the infections are transmitted between home and school. Further studies would be required to answer these questions.
By contrast, we previously reported different E. dispar genotypes in a couple from a family in Nepal [21]. Different E. dispar genotypes at another locus have also been observed in children and their relatives in Mexico [20]. There may be differences in behavior between adults and children that affect transmission. It is probable that contact with siblings is closer than that with the parents at home. Further analysis of family members is required to confirm transmission within families, as the possibility of transmission by contaminated drinking water and foods in the homes could not be excluded.
Outbreaks of amebiasis have also been reported in institutions for individuals with mental disabilities [45][46][47][48][49]. Abnormal behaviors such as pica and fecal play (coprophilia) are suggested to be factors that promote the transmission of Entamoeba. However, the teachers reported that no such behavior was observed in the students from the schools studied.
A high prevalence of E. moshkovskii infection has recently been reported in rural communities of many countries. A prevalence of 18.2% was reported in Yemen [50], 15.9% in South Africa [30], 21.1% in preschoolers in Bangladesh [51], 12.3% in Malaysia [52], 61.8% in patients in Australia [53], and 13% among suspected or confirmed HIV-positive inpatients in Tanzania [54]. However, no E. moshkovskii infections were detected in this study or in our previous study in Nepal [21]. In the present study, we designed a new primer set for amplification of the E. moshkovskii 18S rRNA gene, which covered the sequences recently deposited in DNA databases. These sequences were from E. moshkovskii isolates from human (KP722601-KP722605), snake (MN536488), cockroach (MN535795, MN535796, MN536492), beetle (MN536495), and various water sediments (MN536493, MN536494, MN536496-MN536501). The primer set was shown to effectively amplify the partial 18S sequence of the Laredo strain (AF149906) (S1 Fig). Although E. moshkovskii has previously been detected in clinical samples, its prevalence in Thailand may be low, or the parasite may not be widely distributed there [31,32].
E. nuttalli and E. chattoni infections were also not detected. Macaques are the natural host of these parasites, and although we had previously reported E. chattoni infection in Nepal where macaques live in the same area as the study participants, few macaques are found in the studied area in Thailand [21]. Despite the presence of pigs that were kept near school B, E. polecki infection was not detected.
In conclusion, the mode of transmission of E. histolytica and E. dispar among school children in the Thai-Myanmar border region appears to be direct or indirect person-to-person contact within classes; transmission also appears to occur between siblings at home, where contact is more extensive. These findings suggest that specific measures are necessary to prevent transmission both in schools and at home.
253425560 | s2orc/train | v2 | 2022-11-10T15:25:04.404Z | 2022-11-10T00:00:00.000Z | Effects of HIV-related worries on fertility motivation moderated by living children among couples living with HIV: A dyadic analysis
Introduction HIV-related worries are a major barrier to achieving fertility goals for couples living with HIV (CLWH). We examined the moderating role of living children in the association between HIV-related worries and fertility motivation in CLWH, where fertility motivation comprises happiness, well-being, identity, and continuity. Methods The data of 322 reproductive-aged CLWH were collected for this cross-sectional study from a referral antiretroviral therapy clinic in Kunming, China between October and December 2020. Intra- and interpersonal mechanisms of the association between HIV-related worries and fertility motivation, moderated by the number of living children in husband-wife dyads, were analyzed with the actor-partner interdependence moderation model. Results High-level HIV-related worries of the wives and husbands were associated with the spouses' fertility motivation. Having at least one child helped to ameliorate the negative association between one's own HIV-related worries and fertility motivation. However, there was no evidence of such moderation in the spouse. Conclusion Whether the CLWH has at least one living child should be taken into account in counseling. Childless couples should be counseled on HIV-related worries, as those worries have a greater negative effect on fertility motivation than in couples who have a child.
Introduction
It was anticipated that by the end of 2021, fertility issues would affect 3.8% of reproductive-aged adults living with HIV worldwide (Global HIV and AIDS statistics-Fact sheet, 2022). This represents a growing public health need. A systematic review reported a 42.04% pooled prevalence of fertility desire among people living with HIV (Yan et al., 2021). Furthermore, fertility motivation is a critical contributor to the occurrence of fertility desire. Fertility motivation has been defined as the disposition to react positively or negatively to childbearing, a disposition that can change dramatically over time (Miller, 1994; Dyer et al., 2008). The findings of Miller et al. demonstrated that an increased positive motivation for childbearing enhances fertility desire, that is, the wish to have a child (Miller and Pasta, 1995). HIV-related worries are one of the most prevalent forms of psychological suffering among couples living with HIV (CLWH) (CBD et al., 2019). Health services that safely help CLWH to have a child are available in China, such as the National Free ART program and prevention of mother-to-child HIV transmission (PMTCT). Yet the effect of HIV-related worries on the fertility of CLWH remains an unsolved problem.
A previous study found that rural Malawians, especially HIV-positive men, want fewer children. Women fear the health repercussions of HIV-positive pregnancies and childbearing, while men consider childbearing fruitless because they anticipate their own early death and the deaths of their future offspring (Yeatman, 2009). Reaching a shared understanding regarding HIV and fertility is one of the most essential tasks HIV-positive couples face. Both HIV-related and reproductive factors are strongly associated with fertility outcomes such as motivation and behavior (Nattabi et al., 2009; Joseph Davey et al., 2018; Siegel et al., 2018; Yan et al., 2021). Recent research suggests that worry about the risk of HIV transmission and fertility issues are correlated (Haddad et al., 2017; Marston et al., 2017; Milford et al., 2021). HIV-related restrictions on a couple's fertility are concerning, given that many HIV-positive couples report experiencing such difficulties (Rogers et al., 2016).
Fertility motivation is a crucial step in the process of establishing a family plan for a couple living with HIV. HIV-related worries experienced early in reproductive decision-making can persist over the much longer life span now afforded by effective antiretroviral therapy (ART) (Faraji et al., 2021), which suggests that HIV-related fertility hurdles may have long-term repercussions in CLWH. Other kinds of worries, including concerns about antiretroviral drug toxicity (Joseph Davey et al., 2018) and about stigma or discrimination (Turan and Nyblade, 2013), are also positively linked to fertility behavior.
In a previous study in China, 66.9% of households affected by HIV had two to three children, while 21.4% had one child (Ji et al., 2007). Having living children is one way in which reproductive goals are realized. Various investigations have shown that reproductive desires are complicated and contradictory, reflecting conflicts between family and social expectations to have children and pressures to avoid HIV infection and reinfection (Finocchario-Kessler et al., 2010; Wekesa and Coast, 2014; Kimani et al., 2015). In the general population, having children is known to be a determinant of fertility motivation in the family (Irani and Khadivzadeh, 2018). However, although the number of living children is a unique measure of a couple's fertility (Moshoeshoe and Madiba, 2021), whether having a child can reduce the effect of HIV-related worries on fertility motivation has not been well examined. Fertility planning in CLWH needs to occur in the context of a dyad. Fertility motivation in CLWH is therefore a dyadic variable that should be measured for both members of the couple. Moreover, HIV-related worries, which are known to affect one's own fertility desire, may also influence that of the spouse (Cook et al., 2014).
This dyadic situation is further complicated by the fact that a stable couple shares the same set of children, whose presence may moderate the effect of HIV-related worries on fertility motivation. A statistical procedure that accounts for this non-independence is therefore essential. Kenny et al. devised the actor-partner interdependence moderation model (APIMoM) to address this problem (Acitelli et al., 2013; Davey et al., 2018; Stas et al., 2018). The APIMoM can simultaneously examine the relationships between variables on the husband's and the wife's side and investigate the influence of existing living children, which is a variable shared by the couple. In this study, we adapted the APIMoM to understand how the complex psychology of CLWH plays out within the couple. We aimed to examine the effects of HIV-related worries on fertility motivation in CLWH and, at the same time, to document the moderating effect of having at least one living child on the dyadic relationships. Such understanding will help health care providers of CLWH to fine-tune counseling plans to suit the needs of couples with and without a child.
Study design
This was a cross-sectional study of CLWH attending the ART clinic of a referral hospital in Kunming City, China.
Participants and data collection procedure
The principal investigator contacted the managers of the ART clinic to request permission for their patients to participate in this study. Our study comprised reproductive-aged PLWH in a stable, sexually active heterosexual relationship, with no more than one child, who had received ART for more than a year. The exclusion criteria were: (1) not speaking Chinese well; (2) having a chronic ailment; (3) infertility (e.g., history of hysterectomy, oophorectomy, or vasectomy). The consenting HIV-positive participant was the primary respondent. He or she was asked to allow the researcher to contact his or her spouse for an interview. Each couple that consented was then interviewed separately with the same questionnaire, either face-to-face in a private room at the hospital or via telephone. The one exception was that the spouse was not asked whether the couple had living children, to avoid the partners influencing each other's answers. Each interview lasted between 15 and 30 min.
Sample size estimation
Based on the A-priori Sample Size Calculator for Structural Equation Models (Christopher, 2010; A-priori Sample Size for Structural Equation Models References - Free Statistics Calculators, 2021), we determined the sample size needed for a structural equation model (SEM) with 39 observed variables and 12 latent variables, an anticipated effect size of 0.3, a type I error rate of 0.05, and a desired statistical power of 0.8. The calculation indicated that the minimum sample size necessary to detect the specified effect was 200. A sample size at or above this level should ensure that the discrepancy between the sample estimates and the population parameters is stable and modest.
Application of actor-partner interdependence moderation model in this study
The application of the APIMoM is illustrated in Figure 1. The husband and the wife comprise a dyadic unit. Each has his or her own pathway linking HIV-related worries to fertility motivation. Additionally, each acts as an "actor" whose worries cross-influence the fertility motivation of the spouse, or "partner." These direct and cross-partner influences are "moderated" (modified) by the number of living children.
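The logic of the actor, partner, and moderation terms can be sketched with two ordinary regressions on observed scale scores, one per spouse, as below; this Python sketch with hypothetical variable names is only an illustration, whereas the model reported in this paper was estimated jointly as a structural equation model with latent variables (see Statistical analysis procedure).

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dyad-level data set: one row per couple.
# w_worry / h_worry : wife's and husband's HIV-related worries scores
# w_motiv / h_motiv : wife's and husband's fertility motivation scores
# has_child         : 1 if the couple has at least one living child, else 0
df = pd.read_csv("dyads.csv")  # placeholder file name

# Wife's fertility motivation: actor effect (own worries), partner effect
# (husband's worries), and their interactions with the shared moderator.
wife_model = smf.ols("w_motiv ~ w_worry * has_child + h_worry * has_child",
                     data=df).fit()

# Husband's fertility motivation, specified symmetrically.
husband_model = smf.ols("h_motiv ~ h_worry * has_child + w_worry * has_child",
                        data=df).fit()

print(wife_model.summary())
print(husband_model.summary())

In the full APIMoM the two equations are estimated simultaneously with correlated residuals, which is one reason an SEM framework was used.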
HIV-related worries
The possible total scale scores ranged from 4 to 20, with higher scores reflecting greater levels of anxiety. Cronbach's alpha coefficients for the husband and wife versions of the scale in this study were 0.77 and 0.86, respectively.
Fertility motivation
The scale for fertility motivation was adapted from van Balen and Trimbos-Kemper (1995). The fertility subscale of the CLWH scale was used to evaluate fertility motivation (Supplementary Table 2). Fertility motivation consisted of 11 statements representing four components, recognized and labeled as happiness (three items), well-being (two items), identity (three items), and continuity (three items). On a 5-point Likert scale, responses ranged from 1 (strongly disagree) to 5 (strongly agree or definitely agree). The possible total scores ranged from 11 to 55, with higher scores indicating stronger fertility motivation. The Chinese version of the fertility motivation questionnaire was validated in a pilot study and is frequently used among HIV-positive individuals. A principal component analysis with varimax rotation of the basic structure generated four components with eigenvalues >0.5 that accounted for 85% of the variance. Cronbach's alpha values for the fertility motivation scale for husbands and wives in the reliability tests of this study were 0.81 and 0.96, respectively.
Number of living children
We asked the respondents whether the couple had any living children. The dummy variable was coded as 0 if the couple had no children and 1 otherwise (regardless of which parent was the primary respondent). For ethical reasons, we did not attempt to resolve any discrepancy in the answers, so as to avoid creating conflict within the couple.
Statistical analysis procedure
The APIMoM was fitted using the lavaan package in R (Rosseel, 2012). The main pathways were the associations between HIV-related worries and fertility motivation within each individual (actor effects) and across spouses (partner effects). The moderation terms included whether the CLWH had living children and whether HIV status was discordant. Structural equation modeling (SEM) was used to estimate these effects (Kenny and Ledermann, 2010).
To ensure that the SEM was valid, the Pearson correlation matrix of the HIV-related worries and fertility motivation scales was computed, and Cronbach's alpha values were calculated for each subdomain (HIV-related worries, well-being, happiness, identity, etc.). We first conducted confirmatory factor analyses (CFA) (Brown, 2006) and then fitted the SEM using maximum likelihood estimation (Lam and Maguire, 2012). Multiple fit indices, as proposed in the literature, were used to assess the overall model fit.
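For reference, Cronbach's alpha for any of these subdomains can be computed directly from the item responses as alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); a minimal Python sketch with a hypothetical item matrix is:

import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical responses to a 4-item worries subscale from 6 respondents.
responses = np.array([[4, 5, 4, 3],
                      [2, 2, 3, 2],
                      [5, 4, 5, 5],
                      [1, 2, 1, 2],
                      [3, 3, 4, 3],
                      [4, 4, 4, 5]])
print(round(cronbach_alpha(responses), 2))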
Characteristics of the couples living with HIV
Of the 322 study couples, 28.9% (93) were concordant HIV-positive, in 31.1% (100) only the husband was HIV-positive, and in 40.1% (129) only the wife was HIV-positive. The mean (SD) ages of husbands and wives were 37.3 (6.37) and 33.95 (5.22) years, respectively. Among husbands, the most frequent educational level was senior high school or less, whereas among wives the most common level was junior high school or less. The proportion with a graduate degree or higher (17.7% overall) was greater among husbands than among wives. When questioned about their job status, many wives in the study said they were unemployed. Most respondents were of Han ethnicity and lived in rural areas. Many couples (68%) reported the New Rural Cooperative Medical Insurance or the Urban Residents Basic Medical Insurance as their medical insurance provider. Table 1 provides more information about the characteristics of the sample population.
Pearson correlation and confirmatory factor analysis
The correlation of fertility motivation between husbands and wives was high (r = 0.65, p < 0.01). This correlation indicated sufficient overlap between husbands' and wives' fertility motivation scores to justify treating the dyad as the unit of analysis. Table 2 reports all other correlations between the variables. Pearson's correlation analysis showed significant correlations between HIV-related worries and fertility motivation in both husbands and wives (r = -0.18 to 0.45, p < 0.01). In this study, all scales had good Cronbach's alpha reliability (CR = 0.77-0.94) and acceptable average variance extracted discriminant validity coefficients (0.49-0.79).
Actor-partner interdependence moderation model results
One APIMoM was conducted to assess the main effect of HIV-related worries on the fertility motivation factors. The model had a good fit: chi-square/df = 2.08; CFI = 0.928; TLI = 0.918; RMSEA = 0.059; and SRMR = 0.067. The explained variance of fertility motivation through HIV-related worries was 37.9% for husbands and 51% for wives (Figure 2). All covariates were included in the model (Table 2). On each side of the couple, the latent variable fertility motivation was positively explained by happiness, well-being, identity, and continuity. HIV-related worries also had significant negative effects on the fertility motivation of the spouse, although to a smaller degree than on one's own fertility motivation. When we added "any living child" as the moderator, all covariates were again included in the model (Figure 2). Regarding actor effects, husbands' and wives' HIV-related worries were negatively associated with their own fertility motivation (beta = -0.287, p < 0.001 and beta = -0.495, p < 0.001, respectively). Concerning partner effects, husbands with more worried wives reported reduced fertility motivation (beta = -0.274, p < 0.001); this effect was larger in magnitude than the effect of the husband's HIV-related worries on the wife's fertility motivation (beta = -0.170, p < 0.001). Since the core APIMoM was saturated, its goodness-of-fit indices are not meaningful. Having at least one child positively moderated the negative effect of one's own HIV-related worries; this moderation was stronger on the husband's side (beta = 0.215, p < 0.001) than on the wife's side (beta = 0.136, p < 0.001). The moderator, however, had no significant effect on the relationship between a participant's HIV-related worries and his or her spouse's fertility motivation. In other words, the presence of at least one child moderated the negative effect of one's own HIV-related worries on one's own fertility motivation, but not the cross-partner effect on the spouse's fertility motivation.
Discussion
In this paper, our discussion centers on the following: (A) there was a high correlation of fertility motivation between husband and wife; (B) HIV-related worries have negative effects on fertility motivation not only in the person living with HIV but also in the spouse; and (C) having at least one child moderates this negative effect for the individual but not for the spouse.
Our data showed that a stable couple living with HIV behaves as a dyadic unit, as intra-couple fertility motivation had a high correlation coefficient. This is consistent with a prior study which found that CLWH often have a common motivation for children and that having a motivated spouse is the largest predictor of a participant's motivation to have more children (Pintye et al., 2015). Spéder and Kapitány (2009) found that happier men and women prefer having children sooner. The impact of happiness on childbearing intentions varies: Aassve et al. (2016) found that women's satisfaction seemed to play a larger role in the choice to have a second child. People who are optimistic and content with their life path and future prospects are more likely to fulfill their fertility goals, and a satisfying relationship supports the decision to have a child. Berninger et al. (2011) also observed that, in West Germany, the quality of the relationship relates to reproductive intention. Sebert Kuhlmann et al. (2019) indicated, however, that women who experience intimate partner abuse are less likely to want more children. These findings imply that fertility planning services should be provided to CLWH as a dyad rather than on an individual basis. The cross-partner effects of HIV-related worries on a spouse's fertility motivation can be explained by the interdependence within the CLWH; adverse effects on the spouse would in turn have an important impact on the individual's well-being. Again, this emphasizes the importance of couple-based counseling. Both the effect of HIV-related worries on one's own fertility motivation and the cross-partner effect on the spouse were stronger for the wife (beta = -0.495, p < 0.001 and beta = -0.274, p < 0.001, respectively) than for the husband (beta = -0.287, p < 0.001 and beta = -0.170, p < 0.001, respectively). Thus, wives need stronger psychological support than husbands to alleviate the effect of HIV-related worries on fertility motivation. The negative effect from wife to husband was stronger than that from husband to wife, possibly because reproductive physiology, especially pregnancy, and household welfare are mainly shouldered by the wife (Allendorf, 2010; Rahman et al., 2018; Lee et al., 2021).
Our last finding was that having at least one child can reduce (moderate) the unwanted effect of HIV-related worries on fertility motivation. This is consistent with Kipp et al. (2011), who proposed that having children may alleviate HIV-related worries during fertility decision-making and thereby increase the drive to have more children. It is possible that having children genuinely changes the link between HIV-related worries and fertility motivation, whereas childless couples may be less able to buffer these worries. Milford et al. (2021) reported comparable findings, namely that the ability of CLWH to share problem-solving skills helped them to cope with HIV-related worries. In addition, women with just one child are more adaptable in the face of HIV-related worries, since their fertility motivations are more likely to fluctuate (Mynarska and Rytel, 2020); this flexibility may help them adjust to HIV-related worries. Although HIV-related worries had substantial immediate impacts on fertility motivation, having at least one child partially alleviated the problem. Interestingly, we also examined the possibility of such moderation of the cross-partner adverse effect of HIV-related worries and found that it was not significant. This can be explained by the fact that the cross-partner effect was already small, although statistically significant. The implication is that CLWH without any children should receive more intensive counseling on HIV-related worries than couples with at least one child (Rogers et al., 2016).
Figure 2. Actor-partner interdependence moderation model of husband-wife dynamics (N = 322). wh1/ww1, wh2/ww2, wh3/ww3, wh4/ww4, Worries indicators; hh1/hw1, hh2/hw2, hh3/hw3, Happiness indicators; bh1/bw1, bh2/bw2, Well-being indicators; ih1/iw1, ih2/iw2, ih3/iw3, Identity indicators; ch1/cw1, ch2/cw2, ch3/cw3, Continuity indicators. The individual definitions of each abbreviation are given in Supplementary Tables 1, 2.
Limitations
The data were obtained from CLWH in which both husband and wife consented to the interviews. The high level of cross-spouse correlation and the effect of having at least one living child may therefore not generalize directly to CLWH with less marital harmony. Despite this limitation, the findings should be useful for fertility counseling of CLWH who have a good marital relationship and are ready to conceive.
Conclusion
The HIV-related worries of a person living with HIV negatively affect his or her own fertility motivation and that of the spouse. This effect is moderated by having at least one child. This information should be taken into account in fertility counseling for CLWH.
Data availability statement
All pertinent information is contained inside the text and its accompanying information files.
The datasets presented in this article are not readily available because according to the regulations of China CDC on HIV/AIDS patient management, patient information shall not be uploaded and disclosed. Requests to access the datasets should be directed to 568606564@qq.com
Ethics statement
The studies involving human participants were reviewed and approved by the Ethics Committee of Prince of Songkla University (REC-63-208-18-1) and the Research Ethics Review Committee of the Third People's Hospital (2020072001). In this study, pseudonyms were used to protect the identity of the participants. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
Author contributions YG and VC: conceptualization, methodology, project administration, and validation. YG, YD, JB, and WW: data curation. YG, VC, and JL: formal analysis. YG and JL: funding acquisition. YG, YD, JL, JB, and WW: investigation. YD, JL, LW, JG, WW, JC, ZX, JY, NL, and CL: resources. VC: supervision and writing-review and editing. YG: writing-original draft. All authors contributed to the article and approved the submitted version.
Funding
Funding was received from the Higher Education Research Promotion and Thailand's Education Hub for the Southern Region of ASEAN Countries Project Office of the Higher Education Commission (TEH-AC:016/2018). |
17184970 | s2orc/train | v2 | 2014-10-01T00:00:00.000Z | 2002-01-29T00:00:00.000Z | Supersymmetric Classical Mechanics: Free Case
We present a review of Supersymmetric Classical Mechanics in the context of a Lagrangian formalism with $N=1$ supersymmetry. We show that N=1 supersymmetry does not allow the introduction of a potential energy term depending on a single commuting supercoordinate, $\phi (t;\Theta)$.
I. INTRODUCTION
Supersymmetry (SUSY) in classical mechanics (CM) [1,2] in a non-relativistic scenario is investigated. SUSY first appeared in relativistic theories in terms of bosonic and fermionic fields 1 , and it was observed early on that it could accommodate a Grand-Unified Theory (GUT) for the four basic interactions of Nature (strong, weak, electromagnetic and gravitational). However, after a considerable number of works investigating SUSY in this context, confirmation of SUSY as a high-energy unification theory is still missing. Furthermore, there exist phenomenological applications of the N = 2 SUSY technique in quantum mechanics (QM) [3]. In the literature, there exist four excellent review articles about SUSY in quantum mechanics [4]. Recently, a general review of the SUSY QM algebra and of the procedure for building a SUSY Hamiltonian hierarchy, leading to a complete spectral resolution, was presented and explicitly applied to the Pöschl-Teller I potential [5].
We must say that, despite being introductory, this work is not a mere scientific exposition.
It is intended for students and teachers of science and technology. The prerequisites are differential and integral calculus of functions of two real variables, and classical mechanics.
Recently, two excellent introductory mini-courses on field theory were given with the aim of presenting its fundamentals, including the idea of SUSY, with emphasis on basic concepts and a pedagogical introduction to weak-scale supersymmetry phenomenology; the reader may consult them for different approaches and viewpoints [6].
Considering two ordinary real variables x and y, it is well known that they obey the commutative property, xy = yx. However, if $\tilde{x}$ and $\tilde{y}$ are real Grassmann variables, they instead anticommute, $\tilde{x}\tilde{y} = -\tilde{y}\tilde{x}$.
1 A bosonic field (associated with particles of integer or zero spin) is one particular case obeying Bose-Einstein statistics, and a fermionic field (associated with particles of half-integer spin) is one that obeys Fermi-Dirac statistics.
In this work, we adopt a didactic approach to transformations in superspace, showing the infinitesimal transformation laws of the supercoordinate and of its components, referred to as the even and odd coordinates, in the one-dimensional space-time D = (0+1) = 1.
We will see that by making an infinitesimal variation in the even coordinate we generate the odd coordinate, and vice-versa. This is done here using the right derivative rule. We will stress the property of supersymmetry that the action is invariant under translation transformations in superspace (δS = 0), noting that the same does not occur for the Lagrangian (δL ≠ 0).
In the construction of a SUSY theory with N > 1, referred to as extended SUSY, for each spatial commuting coordinate, representing the degrees of freedom of the system, we associate one anticommuting variable, known as a Grassmann variable.
However, we consider only N = 1 SUSY for a non-relativistic point particle, which is described by the introduction of only one real Grassmann variable Θ in the configuration space, while all the dynamics is put in the time t. In this case, we have two degrees of freedom. The generalized anticommuting coordinate (an odd quantity) will be represented by ψ(t). The new real coordinate defined in superspace will be called the supercoordinate.
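Explicitly, with the factor of i included so that the supercoordinate is real, one writes

$$\phi(t;\Theta) = q(t) + i\,\Theta\,\psi(t),$$

where q(t) is the even (commuting) coordinate and ψ(t) is the odd (Grassmann) coordinate; this is the form used in the component expressions quoted later in the text.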
Note that the first term is exactly the ordinary real commuting coordinate q(t), and the next term must be linear in Θ, because Θ² = 0. In this case, the time-dependent part multiplying Θ is necessarily a Grassmann variable ψ(t), which requires the introduction of i to guarantee that the supercoordinate φ(t; Θ) is real 2 .
We would like to point out, for readers who know field theory but have never seen the supersymmetric formalism in the context of classical mechanics, that in this work we present the ingredients for implementing N=1 SUSY, namely, superspace, supertranslation, supercoordinate, the SUSY covariant derivative and the super-action. Indeed, the steps adopted in this approach are the same as those used in the supersymmetrization of quantum field theories in the four-dimensional space-time of special relativity (D = (3+1), three position coordinates and one time coordinate).
This work is organized as follows: in section II we construct a finite supercoordinate transformation and the infinitesimal transformations of the supercoordinate and its components via translation in superspace. In section III, we investigate the superparticle using the Lagrangian formalism in superspace, noting that N=1 SUSY does not allow the introduction of a potential term for a single supercoordinate, and we indicate the quantization procedure. In section IV, we present the conclusion.
II. TRANSLATIONS IN SUPERSPACE
We will consider N=1 supersymmetry, i.e. SUSY with only one anticommuting variable. Supersymmetry in classical mechanics unifies the even coordinate q(t) and the odd coordinate ψ(t) in a superspace characterized by the introduction of a Grassmann variable Θ, which is not measurable [1,2,7]. The superspace is thus parametrized by the pair (t; Θ), where t and Θ act, respectively, as even and odd elements of the Grassmann algebra.
The anticommuting coordinate Θ parametrizes the points of superspace, but all the dynamics is put in the time coordinate t. SUSY in classical mechanics is generated by a translation transformation in superspace, in which Θ and ε are real Grassmann parameters. The star (complex-conjugation) operation on a product of two anticommuting Grassmann variables shows that such a product is purely imaginary, and for this reason a factor i = √−1 must be inserted in the time translation (2) so that time remains real. SUSY is implemented so as to maintain the line element, where once again an i is introduced for the line element to be real.
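For reference, with one common choice for the sign of the imaginary unit, the supertranslation described above reads

$$\Theta \;\to\; \Theta' = \Theta + \epsilon, \qquad t \;\to\; t' = t + i\,\epsilon\,\Theta,$$

with ε a constant real Grassmann parameter; the product εΘ is imaginary under the star operation, so the factor of i keeps t' real.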
The supercoordinate for N = 1 is expanded in a Taylor series in terms of the even q(t) and odd ψ(t) coordinates,

$$\phi(t;\Theta) = q(t) + i\,\Theta\,\psi(t). \qquad (5)$$

Now, we need to define the derivative rule with respect to a Grassmann variable.
Here we use the right derivative rule: considering f(Θ₁, Θ₂) a function of two anticommuting variables, the rule is

$$\delta f = \frac{\partial f}{\partial \Theta_1}\,\delta\Theta_1 + \frac{\partial f}{\partial \Theta_2}\,\delta\Theta_2,$$

where δΘ₁ and δΘ₂ appear on the right side of the partial derivatives.
An infinitesimal transformation of the supercoordinate that obeys the SUSY transformation law given by (2) results in Eq. (8). On the other hand, making an infinitesimal variation of (5), i.e. δφ(t; Θ) = δq(t) + iΘδψ(t), and comparing with (8), we obtain the SUSY transformation laws for the components of the supercoordinate. Therefore, making a variation in the even component we obtain the odd component and vice versa, i.e. SUSY mixes the even and odd coordinates. Note, from (5) and (7), that the infinitesimal SUSY transformation law can be written in terms of the supercoordinate φ(t; Θ) itself, as in Eq. (9), where ∂_Θ ≡ ∂/∂Θ and ∂_t ≡ ∂/∂t. Therefore any coordinate which obeys equation (9) will be interpreted as a supercoordinate 4 . The differential operator Q appearing there, called the supercharge, is a representation of the translation generator in superspace. In fact, a finite translation can easily be obtained from (9), in a form analogous to translations in ordinary space, with the operator Q playing a role similar to that of the linear momentum operator.
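For completeness, with the supertranslation written as t → t + iεΘ, Θ → Θ + ε, the component transformation laws that follow from the expansion (5) are

$$\delta q(t) = i\,\epsilon\,\psi(t), \qquad \delta\psi(t) = -\,\epsilon\,\dot{q}(t),$$

so that the even and odd coordinates are indeed mixed. In terms of left Grassmann derivatives (the signs change with the right-derivative convention used in the text), the generator may be represented as

$$Q = \partial_\Theta + i\,\Theta\,\partial_t, \qquad \delta\phi = \epsilon\,Q\,\phi, \qquad Q^2 = i\,\partial_t,$$

the last relation being the statement that two successive SUSY transformations produce a time translation.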
III. COVARIANT DERIVATIVE AND THE LAGRANGIAN
Now we build a covariant derivative (with respect to Θ) which preserves the supersymmetry of the super-action; i.e., we will see that the simple derivative with respect to Θ, ∂_Θφ, does not itself transform like a supercoordinate, so it is necessary to construct a covariant derivative.
SUSY possesses a peculiar characteristic in this respect. Since the anticommuting parameter ε is a constant, SUSY here is a global symmetry, and in general it is local symmetries that require covariant derivatives; for example, the U(1) gauge theory with local symmetry requires them. Nevertheless, because ∂_Θφ(t; Θ) is not a supercoordinate, SUSY also requires a covariant derivative in order to write the super-action in a consistent form. To prove this, one uses (10) to obtain the corresponding variations; on the other hand, making an infinitesimal variation of the partial time derivative, one finds that ∂_tφ does obey the SUSY transformation law and is therefore a supercoordinate. The covariant derivative of supersymmetric classical mechanics is constructed so that it anticommutes with Q, i.e. [D_Θ, Q]₊ = 0, and it is easy to verify that such a representation of the covariant derivative exists.
Another interesting property, which occurs when the SUSY generator Q is realized in terms of the Grassmann coordinates in the configuration representation [3,4], is that the anticommutator of the operator Q with itself gives the SUSY Hamiltonian, i.e. two successive SUSY transformations give the Hamiltonian. This is an algebra of left supertranslations and time translations; the corresponding right supertranslations satisfy an analogous algebra.
Before constructing the Lagrangian for the superpoint particle, we introduce the Berezin integrals [7] for an anticommuting variable, defined by ∫dΘ = 0 and ∫dΘ Θ = 1. Now we are in a position to analyse the free superpoint particle in one dimension and to construct a manifestly supersymmetric action. We will see that SUSY is a symmetry of the super-action but does not leave the Lagrangian invariant. A super-action for the free superpoint particle can be written as a double integral over t and Θ 5 . Indeed, after integrating over the variable Θ, we obtain the Lagrangian for the superpoint particle, in which the first term is the kinetic energy associated with the even coordinate (with the mass of the particle set to unity). The second term is a kinetic-energy piece associated with the odd coordinate (the particle's Grassmann degree of freedom); it is dictated by SUSY and is new for a particle without potential energy. The Lagrangian is not invariant, because its variation results in a total derivative and consequently is not zero; this can be obtained from δS with the help of D_Θ|_{Θ=0} = Q|_{Θ=0}. Because the variation of the Lagrangian is a total time derivative, we obtain δS = 0, i.e. the super-action is invariant under the N=1 SUSY transformation.
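In explicit form (again with left Grassmann derivatives; overall factors and signs depend on the convention), one such representation and the resulting free action are

$$D_\Theta = \partial_\Theta - i\,\Theta\,\partial_t, \qquad \{D_\Theta,\, Q\} = 0,$$

$$S = \frac{i}{2}\int dt\, d\Theta\; \big(D_\Theta\phi\big)\,\partial_t\phi \;\;\Longrightarrow\;\; L = \frac{1}{2}\,\dot{q}^{\,2} + \frac{i}{2}\,\psi\,\dot{\psi},$$

where the Berezin rules ∫dΘ = 0 and ∫dΘ Θ = 1 have been used; the first term of L is the usual kinetic energy of the even coordinate (unit mass) and the second is the kinetic term of the odd coordinate.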
Note that for N = 1 SUSY with only one coordinate φ, we cannot introduce a potential term V(φ) in the super-action, because it leads to non-invariance, i.e. δS ≠ 0.
There are two further inconsistency problems. First, the super-action S is an even element of the Grassmann algebra, and for this reason any additional term must also be an even element of this algebra. Analysing the terms present in the super-action, we see that the line element contains one dΘ and one dt, which are respectively odd and even.
Since the supercoordinate is even, a potential V(φ) would also be even; combined with the odd measure dt dΘ it would give an odd contribution, rendering the super-action odd, which is not admissible. The other inconsistency problem follows from dimensional analysis.
In natural units the super-action must be dimensionless. In such units, the time t and the even component q(t) of the supercoordinate have dimension [mass]⁻¹. Starting from the supertranslation, one then sees that Θ has dimension [mass]^{-1/2}. Because of this, introducing a potential term V(φ) would yield a super-action with inconsistent dimensions.
The canonical momentum conjugate to the supercoordinate leads, through its definition, to the Poisson brackets of the theory. We cannot implement canonical quantization directly, because there are constraints: a primary constraint obtained from the definition of the canonical momentum, and a secondary one obtained from the consistency condition. In this case we must construct the modified Poisson brackets known as Dirac brackets. These aspects have been considered in the quantization of the superpoint particle with extended N = 2 SUSY and are outside the scope of this work [9].
We conclude this section by noting that another manifestly supersymmetric action can be constructed for the N = 1 case using the generator of right supertranslations, D_Θ. It is left as an exercise for the reader to carry out the integral over Θ and recover the N = 1 SUSY Lagrangian for this case.
IV. CONCLUSION
After introducing a real anticommuting Grassmann variable, we considered a translation in superspace and implemented the transformation laws of the supercoordinate and its components. We showed that an infinitesimal variation of the even coordinate generates the odd coordinate and vice-versa, characterizing N=1 SUSY. We introduced a covariant derivative in order to write the super-action in a consistent way. We verified that an interesting property occurs when the SUSY generator Q is realized in terms of the Grassmann coordinates: the anticommutator of Q with itself results in the Hamiltonian, i.e. two successive SUSY transformations generate the Hamiltonian. If the reader considers two successive supertranslations, he or she will obtain exactly the Hamiltonian, i.e. D²_Θ = H. In the original works on supersymmetry in classical mechanics [1,2], the respective authors do not justify why, in the case of N=1 SUSY, it is not allowed to introduce a potential term in the Lagrangian. Therefore, the main purpose of this work has been to analyse this question in the context of a Lagrangian formalism in superspace with N=1 SUSY. In summary, from the fact that the super-action must be even while the line element dtdΘ in its construction is odd, we showed that it is not possible to introduce a potential energy term V(φ), because such a term would lead to a super-action with inconsistent dimensions and, moreover, the super-action itself would become odd. Therefore, when we have only one supercoordinate φ, N = 1 SUSY exists only for a free superpoint particle. The equations of motion for the superpoint particle with N=1 SUSY are first order for the odd coordinate (dψ/dt = 0) and second order for the even coordinate (d²q/dt² = 0).
In conclusion, we stress that the super-action must always be even, while the superspace Lagrangian may be odd. Nonetheless, the same analysis can be carried out for the case of N = 2 SUSY, where a potential term may be introduced in the super-action: in that case, considering a single supercoordinate φ of commuting nature, the introduction of a potential term in the Lagrangian is allowed [9,10]. Alternatively, one can introduce an odd supercoordinate of anticommuting nature, Ψ(Θ; t) = ψ(t) + q(t)Θ, so that N = 1 SUSY is ensured, and the main consequence is that one obtains the anharmonic oscillator potential.
119509420 | s2orc/train | v2 | 2017-09-06T01:15:29.000Z | 2017-05-04T00:00:00.000Z | Andreev bound states versus Majorana bound states in quantum dot-nanowire-superconductor hybrid structures: Trivial versus topological zero-bias conductance peaks
Motivated by an important recent experiment [Deng et al., Science 354, 1557 (2016)], we theoretically consider the interplay between Andreev bound states(ABSs) and Majorana bound states(MBSs) in quantum dot-nanowire semiconductor systems with proximity-induced superconductivity(SC), spin-orbit coupling and Zeeman splitting. The dot induces ABSs in the SC nanowire which show complex behavior as a function of Zeeman splitting and chemical potential, and the specific question is whether two such ABSs can come together forming a topological MBS. We consider physical situations involving the dot being non-SC, SC, or partially SC. We find that the ABSs indeed tend to coalesce together producing near-zero-energy midgap states as Zeeman splitting and/or chemical potential are increased, but this mostly happens in the non-topological regime although there are situations where the ABSs could come together forming a topological MBS. The two scenarios(two ABSs forming a near-zero-energy non-topological ABS or a zero-energy topological MBS) are difficult to distinguish by tunneling conductance spectroscopy due to essentially the same signatures. Theoretically we distinguish them by knowing the critical Zeeman splitting for the topological quantum phase transition or by calculating the topological visibility. We find that the"sticking together"propensity of ABSs to produce a zero-energy midgap state is generic in class D systems, and by itself says nothing about the topological nature of the underlying SC nanowire. One must use caution in interpreting tunneling conductance measurements where the midgap sticking-together behavior of ABSs cannot be construed as definitive evidence for topological SC with non-Abelian MBSs. We also suggest some experimental techniques for distinguishing between trivial and topological ZBCPs.
I. INTRODUCTION
The great current interest [1][2][3][4][5][6][7] in Majorana zero modes (MZMs) or Majorana fermions focusing on semiconductor-superconductor hybrid structures [8][9][10][11] arises mainly from the significant experimental progress [12][13][14][15][16][17][18][19][20] made in the subject during the last five years. In particular, proximity-induced superconductivity in spin-orbit-coupled semiconductor nanowires can become topological with localized MZMs in the wire if the system has a sufficiently large Zeeman spin splitting overcoming the induced superconducting gap. Such MZMs, being zero-energy midgap states, should produce quantized zero-bias conductance peaks (ZBCPs) associated with perfect Andreev reflection in tunneling measurements [21][22][23][24]. Indeed, experimentally many groups have observed such zero-bias conductance peaks in tunneling measurements on nanowire-superconductor hybrid structures although the predicted precise and robust quantization (with a conductance value 2e²/h) has been elusive. Many reasons have been provided to explain the lack of precise ZBCP quantization [25][26][27], but alternative scenarios, not connected with MZMs, for the emergence of the ZBCP have also been discussed in the literature [28][29][30][31][32]. Whether the experimentally observed ZBCPs in semiconductor-superconductor hybrid structures arise from MZMs or not remains a central question in spite of numerous publications and great experimental progress in the subject during the 2012-2017 five-year period.
A key experimental paper by Deng et al. has recently appeared in the context of ZBCPs in semiconductor-superconductor hybrid systems [20], which forms the entire motivation for the current theoretical work. In their work, Deng et al. studied tunneling transport through a hybrid system composed of a quantum dot-nanowire-superconductor, where no superconductivity (SC) is induced in the quantum dot (i.e., the superconductivity is induced only in the nanowire). In Fig. 1, we provide a schematic of the experimental system, where the dot simply introduces a confining potential at one end of the nanowire which is covered by the superconductor to induce the proximity effect. Such a quantum dot may naturally be expected to arise because of the Fermi energy mismatch of the lead and the semiconductor much in the way a Schottky barrier arises in semiconductors. Reducing the potential barrier at the lead-semiconductor interface to produce a strong conductance signature likely requires the creation of a quantum dot as shown in Fig. 1. Thus a quantum dot might be rather generic in conductance measurements, i.e., one may not have to introduce a real quantum dot in the system although such a dot did exist in the set-up of Ref. [20]. The quantum dot may introduce Andreev bound states (ABSs) in the nanowire, and the specific issue studied in depth by Deng et al. is to investigate how these Andreev bound states behave as one tunes the Zeeman spin splitting and the chemical potential in the nanowire by applying a magnetic field and a gate potential respectively. It is also possible that the ABSs in the Deng et al. experiment arise from some other potential fluctuations in the nanowire itself which is akin to having quantum dots inside the nanowire arising from uncontrolled potential fluctuations associated with impurities or inhomogeneities. (We consider both cases, the dot being outside or inside the nanowire, in this work.) The particular experimental discovery made by Deng et al., which we theoretically examine in depth, is that Andreev bound states may sometimes come together with increasing Zeeman splitting (i.e., with increasing magnetic field) to coalesce and form zero-energy states which then remain zero-energy states over a large range of the applied magnetic field, producing impressive ZBCPs with relatively large conductance values ∼ 0.5e²/h. Deng et al. speculate that the resulting ZBCP formed by the coalescing ABSs is a direct signature of MZMs, or in other words, the ABSs are transmuting into MZMs as they coalesce and stick together at zero energy. It is interesting and important to note that the sticking together property of the ABSs at zero energy depends crucially on the gate voltage in the Deng et al. experiment, and for some gate voltage, the ABSs repel away from each other without coalescing at zero energy and at still other gate voltages, the ABSs may come together at some specific magnetic field, but then they separate out again with increasing magnetic field producing a beating pattern in the conductance around zero bias. Our goal in the current work is to provide a detailed description of what may be transpiring in the Deng et al. experiment within a minimal model of the dot-nanowire-superconductor structure elucidating the underlying physics of ABS versus MZM in this system.
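At its core, the minimal model invoked here is the standard one-dimensional Rashba nanowire BdG Hamiltonian. The short Python sketch below (a bare-bones illustration with illustrative parameter values, ignoring the quantum dot, the lead, finite temperature, dissipation and the self-energy renormalization included in the full calculations) shows how the lowest BdG eigenvalue of a finite wire drops toward zero once the Zeeman splitting exceeds the critical value V_Zc = sqrt(mu^2 + Delta^2).

import numpy as np

# Pauli matrices; kron ordering below is (particle-hole tau) x (spin sigma).
s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0])

def lowest_bdg_energy(mu, Vz, Delta=0.5, t=25.0, alpha=2.5, N=200):
    """Lowest |E| of a finite Rashba-nanowire BdG chain (energies in meV,
    lattice constant set to 1); basis (c_up, c_dn, c_dn^dag, -c_up^dag)."""
    onsite = (2 * t - mu) * np.kron(sz, s0) + Vz * np.kron(s0, sx) \
             + Delta * np.kron(sx, s0)
    hop = -t * np.kron(sz, s0) - 0.5j * alpha * np.kron(sz, sy)
    H = np.zeros((4 * N, 4 * N), dtype=complex)
    for j in range(N):
        H[4*j:4*j+4, 4*j:4*j+4] = onsite
        if j < N - 1:
            H[4*(j+1):4*(j+1)+4, 4*j:4*j+4] = hop
            H[4*j:4*j+4, 4*(j+1):4*(j+1)+4] = hop.conj().T
    return np.min(np.abs(np.linalg.eigvalsh(H)))

mu, Delta = 0.0, 0.5
for Vz in np.linspace(0.0, 2.0, 9):
    e0 = lowest_bdg_energy(mu, Vz, Delta)
    phase = "topological" if Vz > np.hypot(mu, Delta) else "trivial"
    print(f"V_Z = {Vz:4.2f} meV  lowest |E| = {e0:7.4f} meV  ({phase})")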
In addition, we consider situations where the quantum dot is, in fact, partially (or completely) inside the nanowire (i.e., the dot itself is totally or partially superconducting due to the proximity effect), which may be distinct from the situation in the Deng et al. experiment [20] where the quantum dot is not likely to be proximitized by the superconductor although any potential inhomogeneity inside the wire would act like a quantum dot in general for our purpose. Specific details of how the ABSs arise in the nanowire are not important for our theory as most of the important new qualitative features we find are generic as long as ABSs are present in the nanowire.
It may be important here to precisely state what we mean by a "quantum dot" in the context of our theory and calculations. The "quantum dot" for us is simply a potential fluctuation somewhere in or near the wire which produces Andreev bound states in the system.

FIG. 1. Schematic of the quantum dot-nanowire-superconductor set-up considered in this work (cf. Ref. [20]). A semiconductor (SM) nanowire is mostly covered by a parent s-wave superconductor (SC). One fraction of the nanowire is not covered by the superconductor and is subject to a confinement potential. This part (encircled by the red dashed line) between the lead and the superconducting nanowire is called the quantum dot (QD) in this paper. Figs. 2-9 are results based on this configuration. Later we also consider situations where a part or the whole of the dot is covered by the superconductor, making the whole hybrid structure superconducting. Note that the quantum dot here is strongly coupled to the nanowire and may not exhibit any Coulomb blockade behavior.

This "quantum dot", being strongly coupled to the nanowire (perhaps even being completely inside the nanowire or arising from the Schottky barrier at the tunnel junction),
does not have to manifest any Coulomb blockade as ordinary isolated quantum dots do. In fact, our theory does not include any Coulomb blockade effects because the physics of ABS transmuting into MZM or not is independent of Coulomb blockade physics (although the actual conductance values may very well depend on the Coulomb energy of the dot). The situation of interest to us is when the confined states in the dot extend into the nanowire (or are entirely inside the nanowire) so that they become Andreev bound states. In situations like this, perhaps the expression "quantum dot" is slightly misleading (since there may or may not be any Coulomb blockade here), but we use this expression anyway since it is convenient to describe the physics of Andreev bound states being discussed in our work.
It may be useful to provide a succinct summary of our main findings already in this introduction before providing the details of our theory and numerics. We show our most important findings in Fig. 2 (all obtained by assuming the dot leading to ABSs to be entirely outside the nanowire), where we show our calculated differential conductance in the dot-nanowire-superconductor system as a function of Zeeman splitting energy (V_Z) and the source-drain voltage (V) in the nanowire for a fixed chemical potential in each panel (which, however, varies from one panel to the next). The four panels indicate the four distinct generic results which may arise depending on the values of chemical potential and Zeeman splitting (with all other parameters, e.g., bulk superconducting gap, spin-orbit coupling, tunnel barrier, temperature, dissipative broadening, etc., being fixed throughout the four panels).

FIG. 2. Differential conductance through four nanowire systems with the same dissipation Γ = 0.01 meV, temperature T = 0.02 meV, and tunnel barrier of height 10 meV and width 20 nm. (a): a simple nanowire without quantum dot at chemical potential µ = 0 meV. A ZBCP from the MZM forms after the TQPT at V_Z = 1.5 meV, since the SC pairing at low bias is renormalized to 1.5 meV due to the self-energy term. (b): a hybrid structure with µ = 4.5 meV. Two ABSs come together at V_Z ∼ 1.5 meV and remain stuck at zero energy up to V_Z = 2.5 meV (and beyond) although the system is non-topological (V_Z < µ). (c): a hybrid structure with µ = 3.8 meV. Two ABSs come together at V_Z ∼ 1.5 meV then split at a somewhat higher Zeeman field, but come together again at V_Z ∼ 3 meV. Again, this is all in the non-topological regime since V_Z < µ. (d): a hybrid structure with µ = 2.0 meV. Two ABSs first stick together at V_Z ∼ 1.8 meV, which is in the trivial regime (i.e. V_Z < µ), but then the ZBCP continues all the way to the topological regime (V_Z > 2.5 meV, marked by the yellow vertical line), with the ZBCP value remaining ∼ e^2/h throughout. Note that nothing special happens to the ZBCP feature across the yellow line indicating the TQPT. Calculations here include self-energy renormalization by the parent superconductor, thus renormalizing the bare induced gap so that V_Z = 1.5 meV is the TQPT point rather than 0.9 meV as it would be without any renormalization. Panels (e)-(h) correspond respectively to panels (a)-(d), showing "waterfall" diagrams of the conductance against bias voltage for various Zeeman splittings; each line corresponds to a 0.1 meV shift in V_Z, increasing vertically upward. Similarly, panels (i)-(l) correspond respectively to panels (a)-(d), showing the calculated zero-bias conductance in each case as a function of Zeeman splitting.

We start by reminding the reader that the topological quantum critical point separating trivial and topological phases in this system is given by the critical Zeeman splitting V_Zc = √(µ² + ∆²), where µ and ∆ are respectively the chemical potential and the proximity-induced superconducting gap in the nanowire [8][9][10]; thus V_Z < µ automatically implies a trivial phase where no MZM can exist. Fig. 2(a) shows the well-studied result of the ZBCP arising from the MZM as the system enters the topological superconducting phase, with the topological quantum phase transition point being at V_Zc = 1.5 meV with the chemical potential being zero, µ = 0. (We note that our calculations include the self-energy renormalization effect by the parent superconductor which renormalizes the superconducting gap, as discussed in Sec.
III of the manuscript.) This result is obtained without any quantum dot (or ABS) being present, and is the generic, well-known theoretical result for the simple nanowire in the presence of induced superconductivity, Zeeman splitting, and spin-orbit coupling as predicted in Refs. [8][9][10][11]. We provide this well-known pure-nanowire (with no dot, and consequently, no ABS) result only for the sake of comparison with the other three panels of Fig. 2 where ABS physics is present because of the presence of the quantum dot. In Fig. 2(b), the chemical potential is increased to µ = 4.5 meV with the non-superconducting quantum dot being present at the end of the nanowire. Here, the two ABSs come together around V_Z = 1.5 meV and remain stuck to zero energy up to V_Z = 2.5 meV (and beyond) although the system is nontopological throughout the figure (as should be obvious from the fact that V_Z < µ throughout). Thus, ABSs coalescing and sticking at zero energy for a finite range of magnetic field is not necessarily connected with MZMs or topological superconductivity. It should be noted that the ZBCP value in Fig. 2(b) is close to 2e^2/h, but this has nothing to do with the MZM quantization, and we find that the ZBCP arising from coalescing ABSs could have any non-universal value. In Fig. 2(c), we change the chemical potential to µ = 3.8 meV, resulting in the two ABSs coming together at V_Z ∼ 1.5 meV, and then splitting at a somewhat higher magnetic field, but coming together again at V_Z ∼ 3.0 meV with the ZBCP value varying from e^2/h to 1.5e^2/h. Again, this is all in the nontopological regime since V_Z < µ throughout the figure. Finally, in Fig. 2(d) we show the result for µ = 2 meV, where the two ABSs first stick together at V_Z ∼ 1.8 meV, which is in the trivial regime (i.e. V_Z < µ), but then the ZBCP continues all the way to the topological regime (V_Z > 2.5 meV, marked by the yellow vertical line), with the ZBCP value remaining > e^2/h throughout. Interestingly, although there is a topological quantum phase transition (TQPT) in Fig. 2(d) at the yellow line, nothing remarkable happens in the ZBCP; it behaves essentially the same in the trivial and the topological regime! We note that the specific value of the ZBCP in each panel depends on parameters such as temperature and tunnel barrier, and can be varied quite a bit, but their relative values are meaningful and show that the ZBCP in the trivial and the topological regime may have comparable strength, and no significance can be attached (with respect to the existence or not of MZMs in the system) based just on the existence of zero-bias peaks and their conductance values. Thus, a stable zero-bias conductance peak is necessary for MZMs, but the reverse is untrue: the existence of a stable ZBCP does not by itself imply the existence of MZMs. Note that we are employing the simplest possible model with no disorder at all, and as such our findings are completely different from the disorder-induced class D peak discussed in Refs. [29][30][31]. This is consistent with the semiconductor nanowire in Ref. [20] being ballistic or disorder-free, and hence the ABS-MZM physics being discussed in our work has nothing whatsoever to do with the physics of 'class D peaks' discussed in Refs. [29][30][31], where disorder plays the key role in producing effectively an antilocalization zero-bias peak.
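As a concrete illustration of how the panels of Fig. 2 are classified, the short script below (a minimal sketch, not part of the original analysis) evaluates the critical Zeeman splitting V_Zc = √(µ² + ∆²) for the four chemical potentials used in Fig. 2, assuming the renormalized induced gap ∆ = λ = 1.5 meV quoted above.

```python
# Minimal numerical check: locate the TQPT criterion V_Zc = sqrt(mu^2 + Delta^2)
# for the chemical potentials used in Fig. 2, assuming the renormalized induced
# gap Delta = lambda = 1.5 meV quoted in the caption.
import math

delta = 1.5  # meV (assumption: induced gap renormalized to the coupling lambda)
for panel, mu in [("(a)", 0.0), ("(b)", 4.5), ("(c)", 3.8), ("(d)", 2.0)]:
    v_zc = math.hypot(mu, delta)
    print(f"panel {panel}: mu = {mu} meV  ->  V_Zc = {v_zc:.2f} meV")
# (a) 1.50, (b) 4.74, (c) 4.09, (d) 2.50 meV: the zero-bias features in (b) and
# (c) therefore lie entirely below the TQPT (trivial ABS), while in (d) the ZBCP
# crosses from the trivial to the topological side at V_Z = 2.5 meV.
```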
For the sake of completeness, we also show in Figs. 2(e)-(h) the conductance line cuts ("waterfalls") and in Figs. 2(i)-(l) the zero-bias conductance corresponding to panels (a)-(d); the TQPT in panels (d), (h), and (l) does not manifest itself in any striking way for it to be discerned without already knowing its existence a priori. We emphasize that Figs. 2(e) and (f) look essentially identical qualitatively although the ZBCP in Fig. 2(e) arises from the MZM and in Fig. 2(f) from coalesced ABSs. Similarly, the dependence of the zero-bias conductance on V_Z could be quite similar in these two cases too (Figs. 2(i) and (j)).
We note that the results of Fig. 2 are produced for a nominal temperature ∼ 200 mK which is higher than the fridge temperature (∼ 40 mK) where typical experiments are done. The main reason for this is that finite temperature smoothens fine structures in the calculated conductance spectra arising from energy levels in the nanowire which are typically not seen experimentally. Having a finite temperature does not in any way affect the existence or not of the zero mode or any of our conclusions. We add that the electron temperature in semiconductor nanowires is typically much larger (> 100 mK) than the fridge temperature, and T = 0.02 meV may not be completely inappropriate even for the realistic system although our reason for including this finite T is purely theoretical.
The importance of our results as summarized in Fig. 2 is obvious. In particular, the coalescing of ABSs and their sticking together near zero energy with a fairly strong ZBCP is generic (as we will explain in Sec. III) in the trivial regime of the magnetic field and chemical potential, and equally importantly, there is no special feature in the ZBCP itself for one to discern whether such a coalesced ZBCP is in the topological or trivial regime just based on tunneling conductance measurements. The generic occurrence of almost-zero-energy modes has previously been attributed to quantum dots in symmetry class D [33] in the presence of random disorder, whereas our theory by contrast is manifestly in the clean, disorder-free limit. In fact, as our Fig. 2(d) indicates, the ZBCP may very well form in the trivial regime and continue unchanged into the topological regime with nothing remarkable happening to it as the magnetic field sweeps through the topological quantum phase transition! Experimental tunneling spectroscopy, by itself, might find it difficult to distinguish MZMs from accidental zero-energy ABSs just based on the observation of the ZBCP (even when the ZBCP conductance ∼ 2e^2/h), since experimentally one simply does not know where the topological quantum phase transition point is in the realistic nanowires. The good thing is that our results indicate that it is possible that some of the Deng et al. ZBCPs [20] may be topological, but it is also possible that all of them are trivial ZBCPs. We simply do not know based just on the tunneling conductance measurements that have been performed so far.
We mention that there have been earlier indications that ABSs (or in general, low-energy fermionic subgap states) may manifest ZBCP features indistinguishable from MZM-induced zero-bias peak behavior [32][33][34][35][36][37]. In particular, it was shown by a number of authors that the presence of a smoothly varying potential background in the nanowire could produce multiple MZMs along the wire (and not just the two pristine MZMs localized at the wire ends), which could lead in some situations to trivial ZBCPs in tunneling measurements mimicking MZM-induced ZBCPs [34][35][36]. The fact that small quantum dot systems could have ABS-induced ZBCPs was experimentally established by Lee et al. [32]. Our work, however, specifically addresses the quantum dot-nanowire-superconductor system, showing that the recent observation by Deng et al. of ABSs coalescing together near zero energy and then remaining stuck at zero energy for a finite range of magnetic field by itself cannot be construed as evidence for ABSs combining to form MZMs; the ZBCP in such situations may very well arise from accidental coalesced ABSs which happen to beat or stay near zero energy. Clearly, more work is necessary in distinguishing ABS-induced trivial zero modes from MZMs in nanowire-superconductor hybrid structures. There has been other recent theoretical work [38][39][40][41] on trying to understand the Deng et al. experimental work of Ref. [20] using alternative approaches assuming that the experimental ZBCPs form in the topological regime (i.e., lying above the TQPT point in the magnetic field).
We emphasize that although our initial goal motivating this work was to understand the experiment of Ref. [20], where transmutation of ABS into MZM is claimed in disorder-free ballistic nanowires, we have stumbled upon a generic result of substantial importance in the current search for topological Majorana modes in nanowires (and perhaps in other solid state systems too, where ABSs may arise). This generic result is that the combined effect of spin-orbit coupling and spin splitting could lead to subgap Andreev bound states generically sticking around midgap in a superconductor, and these nontopological 'almost-zero' energy modes are virtually indistinguishable from topological Majorana zero modes using tunneling spectroscopy. Our result implies that considerable caution is now necessary in searching for MZMs in nanowires since the mere observation of ZBCPs even in clean systems is insufficient evidence for the existence of MZMs.
The paper is organized as follows: In Sec. II we give the minimal theory describing the quantum dot-nanowire-superconductor hybrid structures. In Sec. III, we introduce the numerical method and calculate the tunneling differential conductance in simple and hybrid structure systems. In Sec. IV, analytical low-energy spectra of hybrid structures are calculated to provide insightful information about the corresponding zero-bias conductance behavior. In Sec. V, we consider the effect of strongly changing the dot confinement on the zero-bias behavior of the ABS, contrasting it with that of the MZM, providing one possible experimental avenue for distinguishing between trivial and topological ZBCPs. In Sec. VI, we calculate the differential conductance for hybrid structures where the quantum dot has partial or complete induced superconductivity (i.e., it is a strongly coupled part of the nanowire itself). In Sec. VII we discuss how our quantum dot-induced ABS results connect with the corresponding results in the case of smooth confinement at the wire ends and can be understood using the reflection matrix theory. Sec. VIII concludes our work with a summary and open questions. A number of appendices provide complementary detailed technical results not covered in the main text of the paper.
II. MINIMAL THEORY
We calculate the differential tunnel conductance G = dI/dV through a junction of a normal lead and the quantum dot-nanowire-superconductor hybrid structure, as shown in the schematic Fig. 1. We use the following Bogoliubov-de Gennes (BdG) Hamiltonian as the non-interacting low-energy effective theory for the nanowire [8][9][10]

H_NW = (−ħ²∂_x²/2m* − iα_R ∂_x σ_y − µ) τ_z + V_Z σ_x + ∆_0 τ_x ,   (1)

where σ_µ (τ_µ) are Pauli matrices in spin (particle-hole) space, m* is the effective mass, α_R the spin-orbit coupling, V_Z the Zeeman spin splitting energy, and ∆_0 the induced superconducting gap. In some discussions and calculated results we also replace the superconducting pairing term by a more complex self-energy term to mimic renormalization effects by the parent superconductor [42], which will be elaborated later. The normal lead by definition does not have induced SC, thus the lead Hamiltonian is

H_lead = (−ħ²∂_x²/2m* − iα_R ∂_x σ_y − µ + E_lead) τ_z + V_Z σ_x ,   (2)

where an additional on-site energy E_lead is added representing a gate voltage. The quantum dot Hamiltonian is

H_QD = (−ħ²∂_x²/2m* − iα_R ∂_x σ_y − µ + V(x)) τ_z + V_Z σ_x ,   (3)

where V(x) = V_D cos(3πx/2l) is the confinement potential. (We have ensured that other models for the confinement potential defining the dot do not modify our results qualitatively.) The quantum dot size l is only a fraction of the total nanowire length L. The quantum dot is non-SC at this stage although later (in Sec. III D and Sec. VI) we consider situations where the dot could have partial or complete induced superconductivity similar to the nanowire. Whether the quantum dot exists or not, there is always a barrier potential between the lead and the hybrid nanowire system. Multi-sub-band effects are introduced by constructing a second nanowire with a different chemical potential. An infinitesimal amount of dissipation iΓ is also added into the nanowire Hamiltonian Eq. (1) for the sake of smoothening the conductance profile without affecting any other aspects of the results [26,27]. We emphasize that there is no disorder in our model, distinguishing it qualitatively from earlier work [29][30][31] where class D zero-bias peaks in this context arise from disorder effects. Given this quantum dot-nanowire model, our goal is to calculate the low-lying energy spectrum and the differential conductance of the system varying the chemical potential and the Zeeman splitting in order to see how any dot-induced ABSs behave. The specific goal is to see if we can qualitatively reproduce the key features of the Deng et al. experiment in a generic manner without fine-tuning parameters. Our goal is not to demand a quantitative agreement with the experimental data since too many experimental parameters are unknown (confinement potential, chemical potential, tunnel barrier, superconductor-semiconductor coupling, spin-orbit coupling, effective mass, Landé g-factor, etc.), but we do want to see whether ABSs coalesce generically and whether such coalescence around zero energy automatically implies a transmutation of ABSs into MZMs.
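For readers who wish to reproduce this kind of calculation, the sketch below shows one way the continuum nanowire model of Eq. (1) can be discretized onto a one-dimensional lattice; it is not the authors' code, and the lattice spacing and the (particle-hole) x (spin) basis ordering are illustrative assumptions, with m*, α_R and ∆_0 taken from the values quoted in Sec. III.

```python
# Minimal sketch (not the authors' code) of a finite-difference discretization of
# the BdG nanowire Hamiltonian of Eq. (1).  Lattice constant and basis ordering
# are assumptions; m*, alpha_R, Delta_0 follow the values quoted in Sec. III.
import numpy as np

s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def nanowire_bdg(n_sites=130, a=10.0, m_eff=0.015, alpha=50.0,
                 mu=0.0, vz=0.0, delta=0.9):
    """Dense BdG matrix (meV) of an n_sites chain; a in nm, alpha in meV nm."""
    t = 38.1 / m_eff / a**2            # hbar^2/(2 m a^2); hbar^2/2m_e = 38.1 meV nm^2
    so = alpha / (2.0 * a)             # spin-orbit hopping amplitude
    onsite = ((2 * t - mu) * np.kron(sz, s0)      # kinetic + chemical potential
              + vz * np.kron(s0, sx)              # Zeeman term V_Z sigma_x
              + delta * np.kron(sx, s0))          # pairing Delta_0 tau_x
    hop = -t * np.kron(sz, s0) - 1j * so * np.kron(sz, sy)
    H = np.zeros((4 * n_sites, 4 * n_sites), dtype=complex)
    for i in range(n_sites):
        H[4*i:4*i+4, 4*i:4*i+4] = onsite
        if i + 1 < n_sites:
            H[4*i:4*i+4, 4*(i+1):4*(i+1)+4] = hop
            H[4*(i+1):4*(i+1)+4, 4*i:4*i+4] = hop.conj().T
    return H

# lowest-|E| BdG eigenvalues of a 1.3 um wire below and above the bare-gap TQPT
for vz in (0.5, 1.5):
    ev = np.linalg.eigvalsh(nanowire_bdg(vz=vz))
    print(f"V_Z = {vz} meV :", np.sort(np.abs(ev))[:2].round(4))
```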
III. NUMERICAL RESULTS FOR TUNNEL CONDUCTANCE
The goal of our current work is to understand the interplay between Andreev and Majorana bound states in quantum dot-nanowire-superconductor hybrid structures, and to answer the specific question whether two Andreev bound states can coalesce forming a zero-energy bound state leading to a stable ZBCP in the tunnel conductance (as observed in Ref. [20]). This motivates all the calculations in this section. In Sec. III A, we calculate the differential conductance of a set of nanowires without any quantum dot for the sake of making a comparison with the situation where ABS physics is dominant due to the presence of the quantum dot. (We emphasize, as mentioned already in Sec. I, that our "quantum dot" is simply a prescription for introducing ABSs into the physics of the hybrid structure, and is not connected with Coulomb blockade or any other physics one associates with isolated quantum dots.) In Sec. III B, the differential conductance of quantum dot-nanowire-superconductor hybrid structures is calculated as a function of Zeeman field or chemical potential for various parameter regimes. Near-zero-bias peaks similar to the Deng et al. experimental data are obtained, and the topology and quantization properties of these peaks are carefully studied. In Sec. III C, the topological visibility [26,27] is calculated for both Andreev- and Majorana-induced ZBCPs as a theoretical tool for discerning the two cases, i.e., to explicitly check whether a zero-energy state is trivial or topological. Of course, in our simulations, we explicitly know the location of the TQPT and can read off the topological or trivial nature of a particular ZBCP simply by knowing the Zeeman field, the chemical potential, and the induced gap. The topological visibility calculation provides an additional check, which simply verifies that a ZBCP arising below (above) the TQPT is a trivial ABS (topological MZM), as expected.
For clarification we first provide definitions of two frequently used terms in the rest of this paper: simple nanowire and hybrid structure. A simple nanowire, which by definition does not have any ABS, is defined as a semiconductor nanowire without a quantum dot (i.e., the usual system already extensively studied in the literature [25][26][27]). A hybrid structure, the opposite of a simple nanowire, may have ABSs in it, and is defined as a quantum dot-nanowire-superconductor system. The hybrid structure qualitatively mimics the Deng et al. system of Ref. [20] (see Fig. 1). For results presented in this section the chemical potential and the on-site energy are uniform throughout the nanowire since the quantum dot is explicitly outside the nanowire, with the dot being non-SC whereas the nanowire is SC (due to the proximity effect). Note that although the dot is considered outside the nanowire, any bound state wavefunction in the dot may extend well inside the nanowire (thus making it an ABS) depending on system parameters.
The differential conductance is calculated using the S matrix method, which is a universal method in mesoscopic physics. Numerical implementation of the S matrix method is carried out in this section through KWANT [43], which is a Python package for calculating the S matrix of scattering regions in tight-binding models. The model defined in Sec. II is particularly well-suited to the KWANT methodology of calculating the S matrix. We discretize Eqs. (1)-(3) into a one-dimensional lattice chain and extract the differential conductance from the corresponding S matrix [44,45]. Since the calculational technique is well established, here we focus on presenting and discussing our results, referring the reader to the literature for the details [26, 27, 43-45]. The new aspect of our work is to introduce the quantum dot (see Fig. 1) in the problem and calculate the S matrix exactly for the combined dot-nanowire system.
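For reference, the standard relation (a textbook result, not a formula specific to this paper) used to extract the differential conductance from the S matrix of a normal lead attached to a superconducting scattering region is reproduced below; r_ee and r_he denote the normal and Andreev reflection blocks and N_e is the number of propagating electron channels in the lead.

```latex
% Standard BTK-type relation between the reflection blocks of the S matrix and
% the differential conductance of a normal-lead / superconducting-region junction:
G(V) \;=\; \frac{dI}{dV}
     \;=\; \frac{e^{2}}{h}\left[\,N_{e}
           \;-\; \mathrm{Tr}\!\left(r_{ee}^{\dagger}r_{ee}\right)
           \;+\; \mathrm{Tr}\!\left(r_{he}^{\dagger}r_{he}\right)\right]_{E=eV}
```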
For the results presented in this section we choose the following representative parameter values for the quantum dot-nanowire system. The effective mass is chosen to be m* = 0.015 m_e, along with an induced superconducting gap ∆_0 = 0.9 meV (we present some results for a smaller SC gap later), nanowire length L ≈ 1.3 µm, Zeeman energy V_Z [meV] = 1.2 B [T], where B in Tesla is the applied magnetic field, and spin-orbit coupling α_R = 0.5 eVÅ [27]. (Note that this induced bare gap will be renormalized by self-energy corrections.) The gate voltage in the lead is E_lead = −25 meV. The confinement potential in the quantum dot has a strength V_D = 4 meV and length l = 0.3 µm. (We have varied the dot parameters to ensure that our qualitative results are generic, i.e., the qualitative physics discussed in our work does not arise from some special choice of the dot confinement details.) The default value of the barrier between the lead and the nanowire has height E_barrier = 10 meV and width l_barrier = 20 nm. Note that there is nothing special about these numbers and no attempt is made to get any quantitative agreement with any experimental data since the applicable parameters (even quantities as basic as the effective mass and the g-factor) for the realistic experimental systems are unknown. Our goal here is a thorough qualitative understanding and not quantitative numerical agreement with experimental data. We also leave out disorder and/or soft gap effects since these are not central to our study of ABS versus MZM physics in hybrid systems. Introducing these effects is straightforward, but the results become much less transparent.
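The sketch below illustrates how such a calculation can be set up in KWANT using the representative parameter values quoted above; it is a simplified stand-in for the actual calculation (a constant pairing instead of the full self-energy, a single band, and an assumed lattice spacing), so the numbers it produces should not be compared quantitatively with the figures.

```python
# Hedged KWANT-based sketch of the dot-nanowire-superconductor conductance
# calculation described in the text.  NOT the authors' code: the lattice spacing,
# the layout of barrier/dot/wire on the chain, and the constant pairing are
# simplifying assumptions; quoted parameter values follow this section.
import numpy as np
import kwant

s0, sx = np.eye(2), np.array([[0, 1], [1, 0]], dtype=complex)
sy, sz = np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0]).astype(complex)
tzs0, t0sx = np.kron(sz, s0), np.kron(s0, sx)
txs0, tzsy = np.kron(sx, s0), np.kron(sz, sy)

a = 10.0                                  # nm, lattice constant (assumption)
t = 38.1 / 0.015 / a**2                   # hopping from m* = 0.015 m_e, in meV
so = 50.0 / (2 * a)                       # alpha_R = 0.5 eV A = 50 meV nm
mu, vz, delta0 = 4.5, 2.0, 0.9            # meV (one representative point)
e_lead, v_dot, e_bar = -25.0, 4.0, 10.0   # meV
n_bar, n_dot, n_wire = 2, 30, 130         # 20 nm barrier, 0.3 um dot, 1.3 um wire

lat = kwant.lattice.chain(a, norbs=4)
syst = kwant.Builder()
hop = -t * tzs0 - 1j * so * tzsy

def onsite(i):
    """On-site BdG matrix along barrier -> non-SC dot -> proximitized wire."""
    if i < n_bar:
        v, gap = e_bar, 0.0
    elif i < n_bar + n_dot:
        x = (i - n_bar) * a
        v, gap = v_dot * np.cos(3 * np.pi * x / (2 * n_dot * a)), 0.0
    else:
        v, gap = 0.0, delta0
    return (2 * t - mu + v) * tzs0 + vz * t0sx + gap * txs0

for i in range(n_bar + n_dot + n_wire):
    syst[lat(i)] = onsite(i)
syst[lat.neighbors()] = hop

lead = kwant.Builder(kwant.TranslationalSymmetry((-a,)), conservation_law=-tzs0)
lead[lat(0)] = (2 * t - mu + e_lead) * tzs0 + vz * t0sx
lead[lat.neighbors()] = hop
syst.attach_lead(lead)
syst = syst.finalized()

def conductance(energy):
    """G in units of e^2/h: N_e - R_ee + R_he (cf. the S-matrix relation above)."""
    sm = kwant.smatrix(syst, energy)
    n_e = sm.submatrix((0, 0), (0, 0)).shape[0]
    return n_e - sm.transmission((0, 0), (0, 0)) + sm.transmission((0, 1), (0, 0))

print([round(conductance(e), 3) for e in (0.0, 0.2, 0.5)])
```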
A. Simple nanowire
We first focus on simple nanowires without any quantum dots. There are no ABSs in this case by construction, and any ZBCP can only arise from MZMs in our model. The corresponding conductance has been well studied [25][26][27]. However, we still present our numerical simulations for such simple nanowire systems for two reasons. First, we will compare Andreev- and Majorana-induced conductances later in this paper, and therefore it is important to have the pure MZM results in simple nanowires (i.e., without any ABS) for our specific parameter values. Second, the proximity effect (with or without self-energy effects) discussed in the simple model is generic and is applicable to the situation with a quantum dot. The conductance of three simple nanowire systems is shown in Fig. 3. All of them use a one-band model with chemical potential µ = 0 meV. The difference lies in the way of introducing the proximity SC effect. In the first case (Fig. 3(a)), a phenomenological constant s-wave SC pairing is introduced and thus its Hamiltonian is exactly the minimal model defined by Eq. (1). In the other two cases (Figs. 3(b) and (c)), degrees of freedom in the SC are microscopically integrated out, giving rise to a self-energy term in the semiconductor nanowire [42,46,47]

Σ(ω) = −λ (ω + ∆_0 τ_x)/√(∆_0² − ω²) ,   (4)

where λ has the dimension of energy and is proportional to the tunnel coupling between the parent superconductor and the semiconductor nanowire. We choose λ = 1.5 meV throughout this work. In the low-energy limit ω → 0, the self-energy term goes to the simple form of s-wave SC pairing but with a renormalized SC pairing amplitude −λτ_x. Therefore in both cases, the Hamiltonian becomes energy-dependent, including the substrate-induced self-energy term

H_eff(ω) = (−ħ²∂_x²/2m* − iα_R ∂_x σ_y − µ) τ_z + V_Z σ_x + Σ(ω) .   (5)

In the third case (Fig. 3(c)), not only is a self-energy term introduced, the bulk SC gap also has a V_Z-dependence, i.e., ∆_0 in Eq. (4) is replaced by

∆(V_Z) = ∆_0 √(1 − (V_Z/V_Zc)²) ,   (6)

where V_Zc represents the critical magnetic field beyond which the bulk superconductivity is destroyed (∆ = 0 for V_Z > V_Zc). (We introduce such a field-dependent SC gap since this appears to be the case often in the nanowire experiments.) Then the Hamiltonian becomes

H_eff(ω, V_Z) = (−ħ²∂_x²/2m* − iα_R ∂_x σ_y − µ) τ_z + V_Z σ_x + Σ(ω)|_{∆_0 → ∆(V_Z)} .   (7)

Our reason for introducing a self-energy in the problem is to include the renormalization effects by the parent superconductor to some degree [48]. This is not essential for studying the ABS-MZM story in itself, but the calculated transport properties agree better with experiment in the presence of the self-energy corrections. In spite of the three different ways of introducing the proximity SC effect, the calculated differential conductance at low energies (small bias voltage) shows universal behavior for the simple nanowire: a ZBCP forms right after gap closing, indicating the TQPT. This ZBCP is obviously associated with the MZMs at the ends of the nanowire. The ZBCP is quantized at 2e^2/h because of the nanowire being in a topological superconducting phase and is robust against variations in the tunnel barrier, chemical potential and other parameters at zero temperature. Here in our simulation, however, the peak value is slightly below the quantized value 2e^2/h because a small amount of dissipation (Γ = 0.01 meV) has been added for data smoothening. Although the three results are universal and identical in Fig. 3 for the low-energy regime near the ZBCP, in the high-energy regime (large bias voltage) the conductance shows qualitative differences with or without self-energy. In addition, the TQPT point may shift due to self-energy corrections as the induced SC gap is renormalized by the tunnel coupling λ in Eq. (4).
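A minimal sketch of how the proximity self-energy and the field-dependent parent gap can be implemented numerically is given below; the functional forms follow our reading of Eqs. (4) and (6), and the critical field V_Zc used for the gap collapse is an illustrative value, not one quoted in the text.

```python
# Sketch of the (reconstructed) self-energy proximity term and field-dependent
# parent gap used for Figs. 3(b),(c); functional forms are our reading of
# Eqs. (4) and (6), and V_Zc = 4 meV here is purely illustrative.
import numpy as np

tau_x = np.kron(np.array([[0, 1], [1, 0]], dtype=complex), np.eye(2))

def parent_gap(vz, delta0=0.9, vzc=4.0):
    """Bulk gap (meV) collapsing at the critical field V_Zc."""
    return delta0 * np.sqrt(max(1.0 - (vz / vzc) ** 2, 0.0))

def self_energy(omega, vz, lam=1.5, delta0=0.9, vzc=4.0):
    """4x4 matrix Sigma(omega) = -lam (omega + Delta(V_Z) tau_x)/sqrt(Delta^2 - omega^2)."""
    d = parent_gap(vz, delta0, vzc)
    return -lam * (omega * np.eye(4) + d * tau_x) / np.sqrt(d**2 - omega**2 + 0j)

# The omega -> 0 limit reduces to a constant pairing of strength lam, as stated above:
print(np.round(self_energy(0.0, vz=0.0), 3))
```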
The calculated conductance in Fig. 3(a) has clear patterns at large bias voltage, while in Figs. 3(b) and (c), the calculated conductance at eV > ∆ is smooth and featureless. This featureless conductance can be understood by the smearing of the spectral function due to nanowire electrons tunneling into the quasiparticle continuum in the parent superconductor. Thus, the continuum (i.e., electron-hole) behavior above the SC gap is different in Fig. 3 with and without self-energy, although the below-gap behavior near zero energy is essentially the same in all three approximations (except for a shift of the TQPT to a higher critical V_Z due to the self-energy renormalization). We note, however, that in Fig. 3(a) there is some evidence for the MZM-overlap-induced ZBCP oscillations [49,50] at the highest magnetic field values (V_Z > 2.5 meV), which is more obvious in Fig. 3(g) at the highest V_Z values.

FIG. 3. Differential conductance of simple nanowires with the proximity effect introduced in three different ways: (a) a constant s-wave pairing, (b) a self-energy term with a constant parent SC gap, and (c) a self-energy with the parent SC pairing decreasing with Zeeman field. Note that in panels (b) and (c) the self-energy effect renormalizes the induced gap to the tunnel coupling value λ = 1.5 meV so that the TQPT is at V_Z = 1.5 meV (and not the bare gap value 0.9 meV). In panel (a), by contrast, there is no self-energy correction, and hence the TQPT is at V_Z = 0.9 meV. In panels (d)-(f) we show the "waterfalls" corresponding to panels (a)-(c), respectively. In panels (g)-(i) we plot the zero-bias conductance as a function of V_Z corresponding to panels (a)-(c), respectively. We note that although we only show µ = 0 results here for the simple nanowire, the corresponding results for all finite µ look identical to the results shown here except for the TQPT point shifting to larger values of V_Z, consistent with the well-known theory (i.e., the TQPT being given by √(∆² + µ²)).

The edge of the
quasiparticle continuum in Fig. 3(b) stays at a fixed bias voltage due to the constant ∆_0, while the edge shrinks in Fig. 3(c) due to a decrease of the field-dependent SC gap ∆(V_Z) as a function of Zeeman field. In the Deng et al. experimental data [20], we clearly see the quasiparticle continuum edge shrinking with Zeeman field and the conductance is featureless outside the SC gap, which leads us to believe that a self-energy term for describing the proximity superconducting effect and a V_Z-dependent bulk SC gap ∆(V_Z) are necessary physical ingredients for correctly describing the higher-energy features. Thus in all the calculations in the rest of the main paper, the proximity effect will be introduced by a self-energy term and the SC bulk gap will be ∆(V_Z), unless explicitly stated otherwise. Here for the simple nanowire case, we only show the conductance of one-band models, while relegating the corresponding conductance of two-sub-band models to Appendix A for completeness. We note that both the self-energy effect and the two-sub-band effect are necessary only for the qualitative agreement between our conductance calculations and the experimental data away from the midgap zero-energy regime. If we are only interested in the zero-energy behavior of ABS and MZM, the minimal model of Eq. (1) without any self-energy or two-sub-band effect is perfectly adequate.
B. Quantum dot-nanowire-superconductor hybrid structures
In nanowire tunneling experiments quantum dot physics is quite generic, and it may appear at the interface between the nanowire and the lead due to Schottky barrier effects as mentioned in Sec. I, since all that is needed is a small potential confinement region in between the lead and the wire which is non-SC. In our model, the only role played by the quantum dot potential is to introduce ABSs in the nanowire, and hence, if an experiment observes an in-gap ABS in the superconducting nanowire, we model that by a "quantum dot" strongly coupled to the nanowire. In this subsection, we calculate the differential conductance of generic hybrid structures, for which the Hamiltonian is a combination of the quantum dot Eq. (3) and the nanowire Eq. (7). Only the one-band model with the self-energy is presented in the main text, while two-sub-band models and constant s-wave proximity pairing cases are discussed in Appendix B. We also present the energy spectra of hybrid structures with or without Zeeman spin splitting and spin-orbit coupling in Appendix C. In the main text of this section, we mainly show our calculated tunneling conductance results.
Scan of Zeeman field
The calculated differential conductance through the dot-nanowire hybrid structure as a function of Zeeman field at various fixed chemical potentials (µ = 3.0, 3.8, 4.5 meV) is shown in Fig. 4. Finite temperature T = 0.02 meV is introduced by a convolution between the zero-temperature conductance and the derivative of the Fermi-Dirac distribution, G(V, T) = ∫ dE G(E, T = 0) [−∂f(E − eV, T)/∂E]. In each panel of Fig. 4, a pair of ABS-induced conductance peaks at positive and negative bias voltage tend to come close to each other when the Zeeman field is turned on. At finite Zeeman field (∼ 1.5 meV), these two ABS peaks either cross zero bias and beat (Figs. 4(a) and (b)) or stick with each other near zero energy (Fig. 4(c)), all of which are similar to the observations in the Deng et al. experiment [20]. However these near-zero-energy peaks, especially the ZBCP formed by the sticking of two ABSs, are all topologically trivial ABS peaks in Fig. 4 because V_Z < √(µ² + ∆²), with the Zeeman splitting explicitly being less than the critical value necessary for the TQPT. We emphasize that experimentally the TQPT critical field is unknown, whereas in our theory we know it by definition. If we did not know the TQPT point, there would be no way to discern (just by looking at these conductance plots) whether the ZBCP in Fig. 4 arises from trivial or topological physics! The generic beating or accidental sticking behavior from the coalesced ABS pair is the consequence of the renormalization of the bound states in the quantum dot in proximity with the nanowire in the presence of Zeeman splitting and spin-orbit coupling, which has little to do with topology and Majorana physics. More detailed discussion of this point will be presented in Sec. IV. All we emphasize here is that the coalescence of ABS pairs into a ZBCP (as in Fig. 4(c)) cannot be construed as ABSs merging into MZMs without additional supporting evidence. In Figs. 4(d)-(f) we provide further details by showing "waterfall" patterns of conductance for increasing V_Z corresponding to the results in Figs. 4(a)-(c), respectively, whereas in Figs. 4(g)-(i) we show the calculated zero-bias conductance as a function of V_Z for the results in Figs. 4(a)-(c), respectively.
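The finite-temperature convolution quoted above is straightforward to implement numerically; the sketch below (with an illustrative Lorentzian zero-temperature trace, not data from the paper) shows one way to do it on a uniform energy grid.

```python
# Minimal sketch of the finite-temperature convolution quoted above: the T = 0
# conductance trace is smeared with the derivative of the Fermi function.
# Energies and T are in meV; G0 is assumed to be sampled on a uniform grid.
import numpy as np

def thermal_broadening(energies, g_zero_t, temperature):
    """Return G(V, T) = Int dE G(E, 0) [-df/dE](E - eV) on the same grid."""
    de = energies[1] - energies[0]
    g_t = np.empty_like(g_zero_t)
    for i, v in enumerate(energies):
        x = (energies - v) / temperature
        kernel = 1.0 / (4.0 * temperature * np.cosh(x / 2.0) ** 2)  # -df/dE
        g_t[i] = np.sum(g_zero_t * kernel) * de
    return g_t

# toy usage: a sharp zero-temperature resonance at zero bias, broadened at T = 0.02 meV
e = np.linspace(-0.5, 0.5, 2001)
g0 = 2.0 / (1.0 + (e / 0.005) ** 2)          # illustrative Lorentzian peak
print(thermal_broadening(e, g0, 0.02).max())
```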
Scan of chemical potential
The calculated differential tunnel conductance through the dot-nanowire hybrid structure as a function of chemical potential at various Zeeman fields at T = 0.02 meV is shown in Fig. 5. In Figs. 5(a) and (b), the ABS-induced conductance peaks repel away from each other without coalescing at zero energy. In Fig. 5(c) the ABS peaks come together at some specific magnetic field, and beat with increasing chemical potential. In Fig. 5(d) the ABS peaks beat and stick with each other. All these features are similar to the observations in the Deng et al. experiment, although the relevant variable in the experiment is a gate voltage whose direct relationship to the chemical potential in the wire (our variable in Fig. 5) is unknown, precluding any kind of direct comparison with experiment [20]. But all of these near-zero-energy peaks are topologically trivial in our results of Fig. 5. Again, the sticking together of ABSs at zero energy producing impressive ZBCPs is not sufficient to conclude that topological MZMs have formed. In Fig. 5, all the results are nontopological! We note that the ABSs sticking to almost zero energy and producing trivial ZBCPs generically happens only for larger values of the chemical potential (as should be obvious from Figs. 4 and 5), with the ABSs tending to repel away from each other or not quite stick to zero (e.g., Figs. 5(a) and (b)) for µ < ∆. We find this to be a general trend. Unfortunately, the chemical potential is not known in the experimental samples.
Generic near-zero-bias conductance features independent of the choice of parameters
In the previous subsections, we showed how topologically trivial ABSs could induce near-zero-bias conductance peaks that are quite similar to MZM-induced ZBCPs. The most important results among them are also summarized in the introduction (Fig. 2). In order to show that all these results are generic, and not dependent on the particular choice of parameters, we here present another set of differential conductance plots (Fig. 6) with different choices of the parent superconducting bulk gap ∆_0 and the coupling λ between the semiconductor nanowire and the proximitizing superconductor. In the previous discussions, the default values are ∆_0 = 0.9 meV and λ = 1.5 meV.

FIG. 4. (color online). The calculated differential conductance through the dot-nanowire hybrid structure as a function of Zeeman field at various fixed chemical potentials (µ = 3.0, 3.8, 4.5 meV) at T = 0.02 meV. In all three panels (a)-(c), a pair of ABS conductance peaks at positive and negative bias voltage tend to come close to each other when the Zeeman field is turned on. At finite Zeeman field (∼ 1.5 meV), in (a) and (b), these two ABS peaks cross zero bias and beat, while in (c) they stick with each other. However these near-zero-energy peaks, especially the ZBCP formed by the sticking of two ABSs in (c), are all topologically trivial ABS peaks because V_Z < √(µ² + ∆²), with the Zeeman splitting explicitly being less than the critical value necessary for the TQPT. In panels (d)-(f) we show "waterfall" plots of conductance line cuts for different V_Z (increasing vertically upward by 0.1 meV for each line) corresponding to panels (a)-(c) respectively, whereas in panels (g)-(i) we show the calculated zero-bias conductance in each case corresponding to panels (a)-(c) respectively. Note that these results include the self-energy renormalization correction for the proximity effect.

Here in Fig. 6, the upper panels use ∆_0 = 0.4 meV and λ = 1.5 meV, while the lower panels
use ∆_0 = 0.2 meV and λ = 1.0 meV. Apart from these different parameters, all other ingredients are kept exactly the same as those in Fig. 2 so as to allow a direct comparison. If we compare Fig. 2(a)-(d) with Fig. 6(a)-(d), we find that the edge of the quasiparticle continuum is determined by the value of ∆_0, while the near-zero-bias conductance behavior looks exactly the same, independent of ∆_0, because the low-energy induced gap is the coupling λ, not the bare bulk gap ∆_0, as discussed below Eq. (4). Thus in Fig. 6(e)-(h), the low-energy conductance behavior is changed by a different choice of λ (e.g., the critical Zeeman field for the formation of the MZM-induced ZBCP in Fig. 6(e) is smaller than that in Fig. 6(a)). However, this kind of variation of the near-zero-bias ABS-induced conductance peaks due to the change of ∆_0 and λ is perturbative, as can be seen by comparing Fig. 6(f)-(h) with the corresponding panels of Fig. 2. The way to understand this observation is that ABSs are bound states localized in the quantum dot, with some wavefunction leakage into the proximitized nanowire, and thus the effect of the superconducting gap on the ABSs is only perturbative. Thus, the ABS-induced ZBCP physics is independent of the SC gap size as long as the gap is not so small as to be comparable with the energy resolution in the experiment (or numerics). We expect this physics to arise whenever there are ABSs in the system in the presence of spin-orbit coupling and Zeeman splitting, independent of the SC gap size and other details (except that the chemical potential should not be too small). More detailed discussion on this perturbative effect will be presented in Sec. IV.

FIG. 5. (color online). Calculated differential conductance through the hybrid structure as a function of chemical potential at various Zeeman fields at T = 0.02 meV. In (a) and (b), the ABS conductance peaks repel away from each other without coalescing at zero energy. In (c) the ABS peaks come together at some specific magnetic field, and beat with increasing chemical potential. In (d) ABS peaks beat and stick with each other. However, all of these near-zero-energy peaks are topologically trivial because V_Z < √(µ² + ∆²). In panels (e)-(h) we show the calculated zero-bias conductance corresponding respectively to panels (a)-(d) as a function of chemical potential at fixed V_Z. Note that the TQPT happens here at low V_Z < 2.0 meV (not shown).
FIG. 6. (color online). Differential conductance for simple nanowires and hybrid structures with different SC gap parameters. In the upper panels, (a)-(d), the parent superconducting gap is ∆0 = 0.4 meV, and the coupling between the nanowire and the parent superconductor is λ = 1.5 meV. In the lower panels, (e)-(h), ∆0 = 0.2 meV, and λ = 1.0 meV. These plots should be directly compared with Fig. 2, which shows that all the ABS-induced near-zero-bias conductance features are generic. (In the other figures in this paper, ∆0 = 0.9 meV and λ = 1.5 meV.)
Continuous crossover from ABS to MZM-induced ZBCP
As already mentioned in the introduction, a topologically trivial ABS-induced near-zero-bias conductance peak can continue all the way to the topologically nontrivial MZM-induced zero-bias conductance peak, with nothing remarkable happening at the TQPT point (Fig. 2(d)). The ABS to MZM transition is in fact a smooth crossover, not that different from what would happen to the MZM itself if one starts from a very short wire with strongly overlapping end-MZMs and then crosses over to exponentially protected well-separated MZMs in the long-wire limit simply by increasing the wire length. Here we provide a zoom-in plot of Fig. 2(d) focusing on the vicinity of the TQPT, in order to see explicitly how ABSs and MZMs interact with each other around the TQPT. Fig. 7 shows the conductance for a hybrid structure with chemical potential µ = 2 meV as a function of Zeeman field and bias voltage. The critical Zeeman field is V_Zc = 2.5 meV, as indicated by the vertical yellow line, to the left (right) of which the hybrid structure is in the topologically trivial (nontrivial) regime. When V_Z < 2.5 meV, there is an ABS near zero bias, while when V_Z > 2.5 meV, the MZM-induced ZBCP forms and stays over a large range of Zeeman field. We want to emphasize that the ABS-induced peaks and the MZM-induced peaks are uncorrelated with each other; they do not transmute into each other by any means. This statement is supported by the observation that the near-zero-energy ABS below the formation of the MZM-induced ZBCP in Fig. 7 still exists at finite energy in the topological regime, and it affects the MZMs by squeezing the width of the ZBCP and lowering its peak value when their energy separation is small (near V_Z ∼ 2.7 meV). Put another way, those ABSs forming the near-zero-bias conductance peaks never transmute into the MZMs; they exist on their own and may affect the MZMs at some point. All that happens in Fig. 7 is that the ABS is near zero energy below the TQPT, and once the MZM forms above the TQPT, the ABS moves away from zero energy producing some level repulsion physics with the MZM above the TQPT. We emphasize that there is neither an ABS-MZM transition nor an ABS-MZM transmutation. We note, however, that the level repulsion pushing the ABS away from zero energy in Fig. 7 actually happens a finite field above the TQPT, reflecting the crossover nature of the ABS-MZM 'transition'.
Effect of tunnel barrier
It has been well established that a zero-temperature ZBCP from MZM has a robust quantized peak value 2e 2 /h against the variation of tunnel barrier. For peaks from ABSs, however, such robustness is absent, and there is no generic value for the height of ABS peaks -they range from 0 to 4e 2 /h [21]. We have checked explicitly that we can get any conductance value associated with the ABS-associated ZBCP by tuning various parameters. In particular, a ZBCP conductance around 2e 2 /h is quite common from the non-topological ZBCP arising from coalesced ABSs through fine-tuned barrier strength. This dependence of ABS-induced ZBCP on the tunnel barrier strength can be used to check the robustness of any experimentally observed ZBCP. If the ZBCP height is immune to variations in the tunnel barrier, the likelihood is high that the corresponding ZBCPs are induced by topological MZMs.
C. Topological visibility
Based on our numerical simulations, we conclude that it is difficult to differentiate between Majorana- and Andreev-induced ZBCPs by merely looking at the differential conductance, e.g., Figs. 3(c) and 4(c) both show ZBCPs approaching 2e^2/h at large Zeeman field. Whether the ZBCPs are topological or not is determined by calculating whether V_Z is larger or smaller than the critical value for the TQPT, i.e., V_Zc = √(µ² + ∆²). We can also use another complementary quantity called the topological visibility [26] to measure the topology of ZBCPs, discerning topological MZM-induced ZBCPs from trivial ABS-induced ZBCPs. The topological visibility (TV) is defined as the determinant of the reflection matrix, TV = Det(r), where the reflection matrix r contains both the normal and the Andreev reflections from the nanowire at zero-bias voltage. The topological visibility is a generalization of the topological invariant (Q) defined by the S matrix at zero-bias voltage, which is Q = Det(r) = sgn(Det(r)). The topological invariant takes only binary values ±1 due to the assumption of particle-hole symmetry and unitarity of the reflection matrix [26,51]. However, for a finite-length nanowire, the topological invariant always takes the trivial value, i.e., Q = +1, because the Majorana splitting from MZM overlap makes the conductance at zero bias always zero even when the topological criterion is satisfied. When an infinitesimal amount of dissipation is added into the nanowire, leading to a finite value of the zero-bias conductance, Det(r) can take any real value between −1 and +1 because the unitarity condition is no longer satisfied. Thus TV = Det(r), as a generalization of the topological invariant, varies between −1 and +1. When the value of TV is close to −1, the system is regarded as topologically nontrivial; otherwise, the system is more topologically trivial. The TV of a simple nanowire and of a hybrid structure are shown in Fig. 8(a), whose corresponding conductances are shown in Figs. 3(c) and 4(c), respectively. At small Zeeman field, the TVs in both cases are close to 1, indicating trivial phases. At large Zeeman field, the TV of the simple nanowire goes down to negative values approaching −1, while that of the hybrid structure also goes down but still remains around zero. This fact indicates that although a pair of ABSs coalesce forming a ZBCP, this peak is topologically trivial, while the Majorana-induced ZBCP is topological. Thus merely getting a ZBCP with a value close to 2e^2/h does not necessarily mean the system enters the topological regime. Unfortunately, there is no direct method to measure the topological visibility experimentally. In Fig. 8(b), we show the calculated TV corresponding to Fig. 2(d) where the TQPT is at V_Zc = 2.5 meV (the vertical yellow line). We notice that the TV starts to dive to more negative values for V_Z > 2.5 meV, consistent with the TQPT separating a trivial ZBCP below from a topological one above V_Z = 2.5 meV, but the result is not absolutely definitive because of the presence of dissipation, finite temperature, gap closing, and Majorana overlap. These problems may exist in the experimental systems too, masking the TQPT and making it difficult to distinguish trivial and topological regimes. More details on the role of topological visibility in this context can be found in Ref. [26], and we do not show any more TV results in the current paper except to make one remark.
The calculated TV is approaching −1 (or not) whenever the corresponding Zeeman energy for the zero mode is above (below) the critical TQPT value V Zc , thus distinguishing (theoretically) the MZM and ABS zero modes. For our purpose, any apparent zero mode (or almost-zero mode) below (above) the TQPT point (which is exactly known in our theory, but not in the experiment) is considered to be an ABS (MZM) by definition.
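In a numerical simulation the topological visibility defined above can be read off directly from the scattering matrix; the sketch below assumes a finalized single-lead KWANT system like the one constructed in the conductance sketch of Sec. III, in which case the whole S matrix is the reflection matrix.

```python
# Hedged sketch: extracting TV = Det(r) from a KWANT scattering matrix for a
# single normal lead.  `syst` is assumed to be a finalized BdG system built as
# in the earlier conductance sketch; this is not the authors' implementation.
import numpy as np
import kwant

def topological_visibility(syst, energy=0.0):
    smatrix = kwant.smatrix(syst, energy)
    r = smatrix.submatrix(0, 0)     # lead-0 -> lead-0 block = full reflection matrix
    return np.linalg.det(r).real    # ~ -1 (topological) ... +1 (trivial)
```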
We note in this context that the trivial ZBCP in Fig. 4(b) and 4(c) may persist to large Zeeman splittings (as in Fig. 2(d)) going beyond the TQPT point (V Z > 3.8, 4.5 meV in Fig. 4(b) and 4(c) respectively), and then the coalesced ABSs have eventually become MZMs at large enough magnetic field values (see, e.g., Fig. 2(d)). Unfortunately, there is no way to know about such a trivial to topological crossover by looking simply at the ZBCP (without knowing the precise TQPT point), and hence experimentally, one cannot tell whether a coalesced ZBCP is trivial or topological by studying only the ZBCP. One way to distinguish is perhaps careful experimentation varying many experimental parameters (e.g., magnetic field, chemical potential, tunnel barrier, SC gap) to test the stability of the absolute value of the ZBCP against such perturbations. The MZM-induced topological ZBCP should manifest the universal strength of 2e 2 /h whereas the trivial ABS-induced ZBCP will have non-universal behavior. Another issue which may become important in the experimental context [20] is that the bulk SC gap may collapse in the high magnetic field regime where one expects the MZM to manifest itself.
D. Proximitized quantum dot
All the calculations in the previous subsections are based on hybrid structures with the quantum dot outside the nanowire, i.e., there is no induced superconductivity in the quantum dot at all. In real experimental situations, however, it is possible that unintentional quantum dots may appear inside the SC nanowire, or the parent superconductor may partially or completely proximitize the quantum dot. Another way of saying this is that ABSs may arise in the nanowire from unknown origins where no obvious quantum dots are present. (Such a possibility can never be ruled out, although whether it actually happens in a particular experimental system or not would depend on unknown and uncontrolled microscopic details.) We now consider hybrid structures with the quantum dot completely proximitized and calculate the corresponding differential conductance. The calculated differential conductance is shown in Fig. 9.

FIG. 9. (a) and (b): Differential conductance as a function of Zeeman splitting at fixed chemical potential. In (a), the critical Zeeman field is at V_Zc = √(∆² + µ²) = 2.5 meV (marked by a yellow vertical line), beyond which the system enters the topological regime. Here ∆ = λ = 1.5 meV due to the self-energy renormalizing the SC pairing. In (b), the critical Zeeman field is outside the range of V_Z, thus the zero-energy peak here is trivial. (c) and (d): Differential conductance as a function of chemical potential at fixed Zeeman field V_Z = 0 and V_Z = 2.0 meV.

Both Figs. 9(a) and 9(b) show the differential conductance as a function of Zeeman splitting. In Fig. 9(a), the critical Zeeman field is at V_Zc = √(∆² + µ²) = 2.5 meV (marked by a yellow vertical line), beyond which the system enters the topological regime. By contrast, in Fig. 9(b), the critical Zeeman field is outside the range of V_Z, thus the zero-bias peak is trivial. But there is no way to differentiate between the two situations by just looking at the ZBCPs. Another intriguing phenomenon in Figs. 9(a) and (b) is that the positions of the pair of ABSs at zero Zeeman field are much closer to the induced SC gap than in situations where the quantum dot is not proximitized, as shown in Fig. 4. This is because the SC pairing for ABSs in the fully proximitized dot is larger than the renormalized SC pairing in the unproximitized dot. Thus the gap in the former case is larger and closer to the induced SC gap in the nanowire. Thus the position of ABS peaks at zero Zeeman field can be regarded as a clue to the degree of proximitization of the quantum dot. Such a feature is also manifest in Fig. 9(c), where we show the conductance as a function of chemical potential at zero Zeeman field. In contrast with Fig. 5(a), now the peaks from ABSs are closer to the induced SC gap. Fig. 9(d) shows the differential conductance as a function of chemical potential at a finite Zeeman field V_Z = 2 meV, where peaks from the two ABSs inside the SC gap are close to zero energy. We believe that in most experimental situations the ABSs arise from strongly-coupled "effective" quantum dots within the nanowire (or from "dots" present at the Schottky barrier between the semiconductor nanowire and the normal metallic lead). More detailed results and discussion on hybrid structures with a proximitized quantum dot are provided in Sec. VI.
IV. SELF-ENERGY MODEL OF QUANTUM DOT
So far we have numerically calculated differential conductance through various nanowire systems, showing either ZBCPs or near-zero-bias peaks. Some of these conductance plots, e.g. Figs. 4 and 5, are quite similar (essentially identical qualitatively) to those in the Deng et al. experiment [20], but this is only suggestive as we have no way of quantitatively simulating the experimental devices because of many unknown parameters (not the least of which are the detailed quantum dot characteristics). We have shown explicitly that ABSs could come together and remain stuck at zero energy in the quantum dot-nanowire hybrid system producing trivial ZBCPs which perfectly mimic the topological ZBCPs associated with MZMs in simple nanowires. This tendency seems to be quite generic at higher chemical potentials whereas at lower chemical potentials the ABSs seem to simply repel each other. This section is devoted to understanding the relevant physics leading to the conductance patterns discussed above. We calculate analytically the energy spectra of hybrid structures, especially focusing on low-energy states, which can provide insightful information about the corresponding zero-bias conductance behavior. We mention that superconductivity, spin-orbit coupling, and Zeeman splitting are all essential ingredients for the ABS physics being discussed here. Thus, the zero-sticking property of trivial ABSs (as a function of magnetic field) is a generic feature of class D superconductors, even without any disorder.
With no loss of generality, we focus on a single hybrid structure with chemical potential µ = 3.0 meV using the minimal model of a constant s-wave pairing potential in this section, since the low-energy behavior is not affected by the way the proximity SC is introduced. The basic idea here is to see how a self-energy theory of quantum dot bound states, taking explicitly (but perturbatively) into account the SC nanowire as well as Zeeman splitting and spin-orbit coupling, leads naturally to ABS-sticking near zero energy independent of the trivial or topological regime one is considering. In other words, the tendency of ABSs coalescing near zero energy is a generic property of class D superconductors and has nothing whatsoever to do with MZMs or the TQPT. This is consistent with a previous analysis [33] of so-called Y-shaped resonances that were proposed to occur in generic quantum dots coupled to SCs on the grounds of random matrix theory, where the system of interest was random (i.e., had disorder in it, in sharp contrast to our disorder-free consideration). The focus of our work here is to expand on the likelihood of this occurrence in a spin-orbit coupled nanowire system in general, even without any disorder. The resulting ZBCP may arise from an MZM in the topological regime or an ABS in the trivial regime, controlled entirely by the magnetic field where it happens (i.e., whether this field is above or below the critical Zeeman field for the TQPT). What we find is (and show in Sec. III in depth) that the trivial ABSs could stick to zero energy for a large range of magnetic field without being repelled away, thus mimicking the expected zero mode behavior of topological MZMs.
A. Exact results from diagonalization
First, we look at the isolated quantum dot system whose Hamiltonian is H_QD as shown in Eq. (3).

FIG. 10. (color online). The spectrum of the isolated quantum dot. The blue curve is the spectrum of the bound state whose energy crosses the Fermi level as a function of Zeeman field. The black curve is its particle-hole partner, which is redundant in this case since there is no SC pairing in the isolated quantum dot. Green curves are spectra of other bound states that are always well above or below the Fermi level.

The spectrum is shown in Fig. 10(a), where the blue curve denotes the bound state whose energy crosses the Fermi level
as a function of Zeeman field, the black curve denotes its particle-hole partner, which is redundant in this case since there is no SC pairing in the isolated quantum dot, and the green curves are spectra of other bound states that are always well above or below the Fermi level. As the Zeeman field increases, the bound-state eigenenergy crosses the Fermi level, but it is then repelled by neighboring energy states due to spin-orbit coupling, which leads to the beating shape in the spectrum. Note that the theory must explicitly consider both Zeeman and spin-orbit coupling effects. We provide more details on the directly calculated energy spectra of the hybrid system in Appendix C.
Second, we include the superconducting nanowire and couple it with the quantum dot; together they constitute the hybrid structure. The system is now class D (but with no disorder): it has superconductivity, Zeeman splitting and spin-orbit coupling. The total Hamiltonian is a combination of Eqs. (1) and (3),

H = H_QD + H_NW + H_t ,   H_t = t Σ_σ (ĉ†_σ f̂_σ + h.c.) ,   (9)

where H_QD is the isolated quantum dot, H_NW is the superconducting nanowire and H_t is the coupling between them. ĉ_σ annihilates an electron at the end of the nanowire adjacent to the dot and f̂†_σ creates an electron at the end of the dot adjacent to the nanowire. By diagonalizing the total Hamiltonian, we obtain the spectrum shown in Fig. 10(b), where the blue curves are particle-hole pairs that cross the Fermi level, while the green curves are states well above or below the Fermi level. By focusing on the spectra near the Fermi level in Figs. 10(a) and (b), we see that the effect of the nanowire on the bound states of the quantum dot is that it shifts the spectrum and changes the spectrum curvature. The strong similarity between the spectrum of the hybrid structure in Fig. 10(b) and the differential conductance in Fig. 4(a) indicates that the energy spectrum provides a good perspective on understanding the behavior of the differential conductance, which is in general true at low temperature since the low-temperature transport is dominated by contributions from the low-energy states.
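The exact-diagonalization step described above can be mimicked with the self-contained sketch below, which follows the lowest BdG eigenvalues of a discretized (non-SC dot) + (proximitized wire) chain as the Zeeman splitting is varied; the chain lengths, lattice spacing and the constant pairing (instead of the full self-energy) are simplifying assumptions, while µ = 3.0 meV matches the value used in this section.

```python
# Sketch of the exact-diagonalization calculation: lowest BdG eigenvalues of a
# dot + proximitized-wire chain versus Zeeman splitting.  Lattice details and
# the constant pairing are assumptions; mu = 3.0 meV as in this section.
import numpy as np

s0, sx = np.eye(2), np.array([[0, 1], [1, 0]], dtype=complex)
sy, sz = np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0]).astype(complex)

def dot_wire_bdg(vz, n_dot=30, n_wire=130, a=10.0, mu=3.0,
                 delta0=0.9, v_dot=4.0, alpha=50.0, m_eff=0.015):
    t = 38.1 / m_eff / a**2
    so = alpha / (2.0 * a)
    hop = -t * np.kron(sz, s0) - 1j * so * np.kron(sz, sy)
    n = n_dot + n_wire
    H = np.zeros((4 * n, 4 * n), dtype=complex)
    for i in range(n):
        if i < n_dot:                                    # dot: confinement, no pairing
            v, gap = v_dot * np.cos(3 * np.pi * i / (2 * n_dot)), 0.0
        else:                                            # wire: proximity-induced gap
            v, gap = 0.0, delta0
        H[4*i:4*i+4, 4*i:4*i+4] = ((2*t - mu + v) * np.kron(sz, s0)
                                   + vz * np.kron(s0, sx)
                                   + gap * np.kron(sx, s0))
        if i + 1 < n:
            H[4*i:4*i+4, 4*(i+1):4*(i+1)+4] = hop
            H[4*(i+1):4*(i+1)+4, 4*i:4*i+4] = hop.conj().T
    return H

for vz in np.linspace(0.0, 3.0, 7):
    low = np.sort(np.abs(np.linalg.eigvalsh(dot_wire_bdg(vz))))[:2]
    print(f"V_Z = {vz:.1f} meV : lowest |E| = {low.round(4)} meV")
```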
B. Approximate results from self-energy theory
The numerical results in the previous subsection show graphically that coupling to the nanowire has a perturbative effect on the energy spectrum of the isolated quantum dot. We now calculate the analytic form of the perturbed spectrum in the quantum dot using an effective theory that includes the perturbative corrections to the quantum dot spectra arising from the superconducting nanowire. The total Hamiltonian is still Eq. (9). We first project H_QD onto the subspace spanned by the bound state crossing the Fermi level and its redundant hole partner, obtaining the effective Hamiltonian of Eq. (10), where the γ's are Pauli matrices acting on the projected two-dimensional subspace and E^eff_QD = E_QD − ∆µ, with E_QD the bare energy of the bound state in the isolated quantum dot that crosses the Fermi level and ∆µ the renormalization of the chemical potential due to projecting out all the other states. Then we integrate out the degrees of freedom in the nanowire, leading to an energy-dependent self-energy term Σ(ω) in the isolated quantum dot [Eq. (11)], where u, u† represent the hopping between the nanowire and the quantum dot. Similarly, we project this self-energy term onto the two-dimensional subspace of the quantum dot and obtain F(ω) = P̂ Σ(ω) P̂ [Eq. (12)], where P̂ denotes the projection operator. Thus the approximate energy spectrum of the hybrid structure near the Fermi level is given by the roots of Eq. (13), det[ω − H^eff_QD − F(ω)] = 0. The spectrum obtained from this effective theory is shown in Fig. 10(c), where the blue circles represent the exact spectra from diagonalizing the total Hamiltonian as in the previous subsection, while the red line is the spectrum obtained from the projected effective theory with an appropriate choice of ∆µ in Eq. (10). The excellent agreement between the exact-diagonalization results and the effective-theory results demonstrates that the proximity effect of the SC nanowire on the quantum dot bound states is a perturbative renormalization.
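A rough numerical sketch of this projection procedure is given below. It uses the standard embedding form of the self-energy, Σ(ω) = u†(ω + i0⁺ − H_NW)⁻¹u, consistent with the description above, and then scans ω for the roots of det[ω − H_eff − F(ω)]. The small random matrices are placeholders only, standing in for the microscopic dot and wire Hamiltonians and their coupling; the projector built from the two dot eigenstates closest to zero energy plays the role of the bound state and its partner.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n):
    """Placeholder Hermitian matrix standing in for a microscopic Hamiltonian block."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

n_dot, n_wire = 6, 40
H_QD = random_hermitian(n_dot)      # stand-in for the dot Hamiltonian, Eq. (3)
H_NW = random_hermitian(n_wire)     # stand-in for the proximitized-wire BdG Hamiltonian
u = np.zeros((n_wire, n_dot), dtype=complex)
u[0, -1] = 0.2                      # weak dot-wire coupling across the shared boundary

# two-dimensional subspace: the dot eigenstate closest to zero energy and its partner state
e_dot, v_dot = np.linalg.eigh(H_QD)
P = v_dot[:, np.argsort(np.abs(e_dot))[:2]]   # n_dot x 2, columns span the low-energy subspace
H_eff = P.conj().T @ H_QD @ P                 # projected dot Hamiltonian (cf. Eq. (10))

def F(omega, eta=1e-4):
    """Projected self-energy P^dag u^dag (omega + i*eta - H_NW)^{-1} u P (cf. Eqs. (11)-(12))."""
    g = np.linalg.inv((omega + 1j * eta) * np.eye(n_wire) - H_NW)
    return P.conj().T @ (u.conj().T @ g @ u) @ P

def det_condition(omega):
    """|det[omega - H_eff - F(omega)]|: its zeros give the renormalized bound-state energies."""
    return abs(np.linalg.det(omega * np.eye(2) - H_eff - F(omega)))

omegas = np.linspace(-3.0, 3.0, 3001)
vals = np.array([det_condition(w) for w in omegas])
is_min = (vals[1:-1] < vals[:-2]) & (vals[1:-1] < vals[2:])
print("approximate renormalized bound-state energies:", omegas[1:-1][is_min])
```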
We can take one more step and obtain an analytic expression for the ABS spectra using the low-energy assumption ω → 0. In the nanowire, particle-hole symmetry constrains the form of the projected self-energy F(ω) to the Pauli decomposition F(ω) = f_0(ω)γ_0 + f_x(ω)γ_x + f_z(ω)γ_z (Appendix D), where f_0 and f_x are odd functions of ω and f_z is an even function of ω. Here the f's are expanded up to their leading order for small ω, with β_0 and β_x denoting the leading-order (linear-in-ω) coefficients of f_0 and f_x. The leading-order solution is then given by the approximate root of Eq. (13), quoted as Eq. (14). This result indicates that the proximity effect of the nanowire is twofold: it shifts the projected spectrum of the isolated quantum dot, and it reduces the curvature (i.e., enhances the effective mass, if we focus on the parabolic part) of the spectrum, since numerics show β_0 ≫ 1 with β_x much smaller.
Our finding is that the near-zero conductance peaks in hybrid structures are mainly contributed by the ABSs associated with the quantum dot. These ABSs can be regarded as bound states in the quantum dot perturbatively renormalized by the nanowire. The ABS spectra show parabolic shapes as a function of Zeeman field, with renormalized effective mass and chemical potential. When the parabolic spectrum crosses the Fermi level, the spectrum together with its particle-hole partner manifests a beating pattern around midgap, and if the beating amplitude is small, the resulting ABS will appear to be stuck at zero energy, manifesting a generic ZBCP that has nothing to do with MZMs; it is simply a low-energy fermionic bound state in the SC gap. For the approximately zero-energy ABSs, the renormalized effective mass is accidentally huge and the renormalized chemical potential shifts the ABS close enough to zero energy. How close is "close enough" depends entirely on the energy resolution of the experiment: all these apparent zero-energy trivial ABS modes beat around midgap, and only when this beating happens to be smaller than the resolution does the mode appear stuck at zero energy. In particular, when broadening effects from finite temperature and/or intrinsic dissipation are larger than the beating amplitude, near-zero peaks appear stuck at zero energy because the energy resolution is not fine enough to resolve the beating pattern. This makes it essentially impossible to obtain a simple analytic form for the range of magnetic field (i.e., the range of V_Z) over which the trivial ABSs remain close to zero: even in our simple perturbative model this range is a complicated multidimensional function of all the parameters of the hybrid system (chemical potential, magnetic field, induced gap, quantum dot confinement details, experimental resolution around zero bias, temperature, broadening, etc.).
We emphasize that all four ingredients are essential in the perturbative theory: quantum dot, superconducting nanowire, spin-orbit coupling, and Zeeman splitting. What is not necessary, however, is topological p-wave superconductivity or Majorana modes. Generically, the ABSs in class D superconducting nanowires may be attracted to the midgap, and once they coalesce there, they have a tendency to stick to zero energy. The fact that class D superconductors generically allow trivial zero-energy states can also be seen from the known level statistics, whose probability distribution has no repulsion from zero energy [52], in contrast to the other symmetry classes of superconductors. What we show in our analysis is that this tendency of class D peaks to stick to zero energy can occur for simple ABSs arising from single quantum dots; there is no need to invoke disorder as the origin of such class D peaks [29-31,53]. Moreover, such zero-bias sticking can survive over a large range of magnetic field variation. The disorder-free nature of our theory distinguishes it from earlier work on class D zero-bias peaks, which are caused by disorder-induced quantum interference [29-31,53].
Specifically, the ingredients discussed in the previous paragraph produce localized ABSs in symmetry class D with a large weight at the wire end. Superconductivity provides particle-hole symmetry and Zeeman splitting breaks time-reversal symmetry, placing the system in symmetry class D. Spin-orbit coupling is needed to break spin conservation, without which the system would become two copies of a different symmetry class. Class D is important for inducing the energy-level repulsion that pushes the lowest pair of ABSs towards zero energy [54]. As seen from Eq. (14), the self-energy from the superconducting nanowire, which is in symmetry class D, generates the eigenstate repulsion that pushes the ABSs towards zero energy. The tendency of ABSs to stick near zero as the Zeeman field is varied in class D is analogous to the Y-type resonance discussed in the context of superconducting quantum dots [33].
A definitive prediction of the arguments in the previous paragraph is that the combination of spin-orbit coupling and Zeeman splitting is required to create states that stick to zero energy, which only occurs in symmetry class D. This can be checked explicitly by obtaining the corresponding low-energy spectra in the hybrid quantum dot-nanowire system without Zeeman splitting or without spin-orbit coupling, respectively. We carry out these direct numerical simulations and show the corresponding results in Appendix C, where it can be clearly seen that only the situations with superconductivity, spin-orbit coupling, and Zeeman splitting all being finite allow for the possibility of zero-sticking (and beating) of the ABS. Thus, the same ingredients which lead to the existence of MZMs in nanowires (superconductivity, spin-orbit coupling, and Zeeman splitting) also lead to Andreev bound states sometimes producing almost-zero-energy midgap states. This is a most unfortunate situation indeed. It means that confirming the presence of Majoranas through transport measurements may be more complicated than simply observing a robust zero-bias peak. While a ZBCP is indeed necessary, it is by no means sufficient, even if the ZBCP value agrees with the expected quantized conductance of 2e²/h. It will also be necessary to vary the tunneling through the quantum dot, reducing it to a quantum point contact that can explicitly be verified to carry a single spin-polarized channel in the normal state [55]. In addition, it must be ensured that the ZBCP quantization is indeed robust against variations in various system parameters such as tunnel barrier, magnetic field, and chemical potential. In particular, varying the quantum dot confinement through a tunable external gate voltage and checking the stability of the ZBCP may be essential to ensure that the relevant ZBCP indeed arises from MZMs and not ABSs. This is considered in the next section.
V. DISTINGUISHING BETWEEN TRIVIAL AND TOPOLOGICAL ZERO MODES
In the previous sections we showed numerically that the differential conductance from MZMs and from near-zero-energy ABSs may be strikingly similar, making them hard to distinguish. Although theoretically one can look at topological criteria or the TV to distinguish between the two cases, quantities like the chemical potential and the TV are generally not known in a real experimental setup. So, in order to distinguish ZBCPs arising from topological and nontopological situations, we discuss an alternative, (in principle) experimentally accessible method: examining how the zero modes are affected by changing the depth of the quantum dot confinement potential. We mentioned before that the generic existence of trivial ABS-induced zero modes is qualitatively independent of the quantum dot confinement details, but now we are asking a different question. We focus on a fixed hybrid structure with ABS- (or MZM-) induced zero modes, and ask how this specific zero mode and the near-zero-bias differential conductance (comparing the ABS and MZM cases) react to a change in the depth V_D of the quantum dot confinement potential, keeping everything else exactly the same.
A. Energy spectra for hybrid structures with ABS and MZM-induced zero modes
We show our numerical results in Fig. 11. Figure 11(a) is the calculated spectrum as a function of chemical potential at fixed V_Z = 2.0 meV and V_D = 4 meV, with topological MZM- (trivial ABS-) induced zero modes in the small (large) chemical potential regime. Now we ask how this spectrum evolves if we vary only V_D, keeping everything else exactly the same. Figure 11(b) presents the MZM spectrum (i.e., at small chemical potential) as a function of dot depth, showing that it is robust against changes of the dot depth. By contrast, Fig. 11(c) shows the ABS spectrum (i.e., at large chemical potential) as a function of the dot potential depth, clearly showing that the ABS "zero mode" is not stable and oscillates (or splits) as a function of the dot potential. Put another way, the fact that we see near-zero-energy ABSs is quite accidental for any particular values of Zeeman splitting and chemical potential: it only happens when the dot depth is fine-tuned to some value, e.g., V_D = 4 meV, so that the energy splitting of the ABS zero mode happens to be smaller than the resolution. So varying the dot depth (e.g., by experimentally changing a gate potential) provides a stability test distinguishing topological MZMs from nontopological ABSs. Note that it is possible (even likely) that the original ABS-induced ZBCP will split as the dot potential changes while a new trivial zero mode appears elsewhere, but the stability (or not) of a specific ZBCP against gate potentials could be a powerful experimental technique for distinguishing trivial and topological ZBCPs. Of course, experimentally tuning the dot potential by an external gate may turn out to be difficult in realistic situations, but modes which are unstable to variations in gate potentials are likely to be trivial ABS-induced ZBCPs.
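This stability test is easy to automate numerically: fix the Zeeman field and chemical potential, sweep only the dot depth, and track the lowest BdG eigenvalue. A sketch using the illustrative bdg_hamiltonian helper defined in the earlier snippet (all parameter values again being placeholders) is:

```python
import numpy as np
# assumes the illustrative bdg_hamiltonian(...) defined in the earlier sketch

def lowest_energy_vs_dot_depth(V_z, mu, depths):
    """Lowest |E| of the hybrid system as the dot depth V_D is varied, everything else fixed."""
    result = []
    for V_D in depths:
        E = np.linalg.eigvalsh(bdg_hamiltonian(V_z=V_z, mu=mu, V_dot=V_D))
        result.append(np.min(np.abs(E)))
    return np.array(result)

depths = np.linspace(2.0, 6.0, 41)   # meV, swept around a fine-tuned value

# illustrative trivial regime (V_z below the critical field) and topological regime
e_trivial = lowest_energy_vs_dot_depth(V_z=2.0, mu=4.5, depths=depths)
e_topo = lowest_energy_vs_dot_depth(V_z=2.0, mu=0.5, depths=depths)

# per Sec. V, the trivial ABS is expected to split much more strongly under this sweep
print("max low-energy splitting, trivial ABS regime :", e_trivial.max())
print("max low-energy splitting, topological regime :", e_topo.max())
```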
B. Conductance for hybrid structures with ABS and MZM-induced zero modes
We also show the calculated differential conductance through the hybrid structure as a function of the quantum dot depth and the bias voltage in Fig. 12. The conductance color plots in the upper panels (a)-(c) are for topological nanowires, i.e., V_Z > V_Zc = √(µ² + ∆²), and thus all the zero-bias or near-zero-bias conductance peaks are MZM-induced. Such ZBCPs are stable against variation of the quantum dot depth. As the Zeeman field increases, the ZBCPs split and form Majorana oscillations as a function of the dot depth. By contrast, the conductance color plots in the lower panels (d)-(f) are for topologically trivial nanowires (V_Z < µ), and thus all the near-zero-bias conductance peaks are ABS-induced. These nontopological near-zero-bias peaks also show beating patterns as a function of the dot depth, quite similar to the Majorana oscillation patterns, although their origin is nontopological. The crucial difference between the two situations is that the ABS-induced oscillations are not guaranteed to cross zero bias as parameters are varied (e.g., with increasing chemical potential, as shown in panels (e) and (f)), while for the MZM-induced oscillations, although the oscillation amplitude increases with nanowire parameters (e.g., the Zeeman field), the oscillation itself is sure to pass through zero bias.
The difference between the two situations arises from the crucial fact that ABS-induced ZBCPs are almost-zero modes that always involve some level repulsion, whereas the MZM-induced ZBCP oscillations arise from the splitting of a true zero mode of the infinite-wire limit.
VI. QUANTUM DOTS AS SHORT-RANGE INHOMOGENEITY
So far, our theoretical analysis (except for Sec. III D and Fig. 9) has focused on quantum dots explicitly created at the end of a nanowire (see Fig. 1). In this case the quantum dot is normal (i.e., non-superconducting), while the rest of the wire is proximity-coupled to the parent superconductor. In general, however, the quantum dot could be unintentional, i.e., the experimentalist may be unaware of its presence near the wire end, and it could be partially or completely covered by the superconductor. For example, such a situation may arise if a potential well with a depth of a few meV forms near the end of the proximitized segment of the wire. Similar phenomenology emerges in the presence of a low (but wide enough) potential barrier. After all, there is no easy way to rule out shallow potential wells (and low potential barriers) inside the nanowire or near its ends. In this context, we emphasize that a better understanding of the profile of the effective potential along the wire represents a critical outstanding problem. It turns out that all our results obtained so far still apply qualitatively even if the quantum dot is partially or completely inside the nanowire. In these cases we obtain exactly the same type of low-energy ABSs with a tendency to stick together near zero energy, thus producing ZBCPs that mimic MZM-induced ZBCPs. We present these results in detail below. We provide them in order to go all the way from an isolated non-superconducting dot at the wire end (as in the previous sections of this paper) to a situation where the dot is inside the wire and completely superconducting. We explicitly establish that the main results of the previous sections hold everywhere within this range, i.e., from isolated dots to dots completely inside the nanowire. In fact, this behavior is rather generic in non-homogeneous semiconductor nanowires [36]. Finally, in this section we pay special attention to the profile of the ZBCPs associated with the almost-zero-energy ABSs. The key question that we want to address is whether or not a quantized ZBCP (i.e., a ZBCP with a peak height of 2e²/h) can be used as a hallmark of the Majorana zero modes expected to emerge beyond a certain critical field.
In Fig. 13 we represent schematically the hybrid structure [panel (a)] and the effective potential [panel (b)] corresponding to three different situations that we consider explicitly in this section using exactly the same model parameters: a dot entirely outside the proximitized segment of the nanowire, a dot completely inside the nanowire (i.e., the whole dot is superconducting), and a dot partially covered by the parent superconductor. The depth of the potential well in the quantum dot region is about 1 meV and its length is 250 nm. The coupling between the quantum dot and the rest of the wire is controlled by the height of the corresponding potential barrier [see panel (b) in Fig. 13]. In addition, the coupling depends on how much of the dot is covered by the superconductor. The parameters used in our calculations correspond to the intermediate and strong coupling regimes. We note that replacing the potential well of Fig. 13(b) with a potential barrier whose height is several times larger than the induced gap ∆_ind leads to low-energy features similar to those described below for the potential well. Finally, for comparison we also consider a nanowire with a smoothly varying nonhomogeneous potential [panel (c) in Fig. 13].
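A minimal way to encode these coverage scenarios (and, for later reference, the smooth long-range potential of panel (c)) is to specify the spatial profiles of the dot potential V(x) and the induced pairing Δ(x) along the wire. The square-well and sigmoid forms below are our own illustrative choices, consistent with the parameters quoted above (well depth ~1 meV, length 250 nm, Δ_ind = 0.25 meV) but not the exact profiles used in the calculations.

```python
import numpy as np

def profiles(case, N=400, a=10.0, l_dot=250.0, V_dot=1.0, Delta0=0.25):
    """Dot potential V(x) and induced pairing Delta(x) along the wire (x in nm, energies in meV).
       case: 'uncovered', 'half', 'covered', or 'smooth' (long-range inhomogeneity)."""
    x = np.arange(N) * a
    V = np.where(x < l_dot, -V_dot, 0.0)                  # potential well defining the dot
    if case == 'uncovered':      # SC covers only the wire beyond the dot
        Delta = np.where(x < l_dot, 0.0, Delta0)
    elif case == 'half':         # SC covers half of the dot
        Delta = np.where(x < l_dot / 2, 0.0, Delta0)
    elif case == 'covered':      # the whole dot sits under the parent SC
        Delta = np.full(N, Delta0)
    elif case == 'smooth':       # smooth effective potential, ~90% of the wire covered
        V = -V_dot / (1.0 + np.exp((x - l_dot) / 100.0))  # illustrative smooth well
        Delta = np.where(x < 0.1 * N * a, 0.0, Delta0)
    else:
        raise ValueError(case)
    return x, V, Delta

for case in ('uncovered', 'half', 'covered', 'smooth'):
    x, V, Delta = profiles(case)
    print(case, "- uncovered fraction of the wire:", np.mean(Delta == 0.0))
```

These profiles can be fed directly into a tight-binding BdG construction of the type sketched earlier by replacing its hard-coded V and Delta_x arrays.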
In Fig. 14 we show the calculated low-lying energy spectra for three cases: (a) a normal dot (i.e., uncovered by the SC), (b) a half-covered dot, and (c) a fully covered dot. The system is characterized by an induced gap ∆_ind = 0.25 meV and a chemical potential µ = −2.83∆_ind. The corresponding critical field associated with the topological quantum phase transition, V_Zc ≈ 3∆_ind = 0.75 meV, is signaled by a minimum of the quasiparticle gap, as expected in a finite-length system. First, we note that all three situations illustrated in Fig. 14 clearly show trivial almost-zero-energy ABSs in a certain range of Zeeman field (lower than the critical field). However, the Zeeman field V*_Z associated with the first zero-energy crossing is significantly lower in the case of an uncovered dot than for a covered one. The zero-temperature conductance along the constant-field cuts marked "1", "2", and "3" is shown in Fig. 15.
Consequently, the range of Zeeman field corresponding to almost-zero-energy ABSs is reduced with increasing coverage of the quantum dot by the SC. Another key feature is the dependence of the ABS energy at V_Z = 0 on the dot coverage. For the fully covered dot [panel (c)], this energy is practically ∆_ind. In fact, by the proximity effect, all the states that "reside" entirely under the parent superconductor have energies (at V_Z = 0) equal to or larger than the induced gap of the corresponding band. By contrast, the zero-field energy of the ABSs in the half-covered [panel (b)] and uncovered [panel (a)] dots is significantly lower than the induced gap. To obtain such a state, a significant fraction of the corresponding wave function must be localized outside the proximitized segment of the wire. We find that, quite generically, strongly coupled dots that are uncovered or partially covered (when the uncovered fraction is significant) can support ABSs that (i) have energies at V_Z = 0 much smaller than the induced gap and (ii) are characterized by "merging fields" V*_Z significantly lower than the critical value V_Zc. Consequently, in hybrid systems having strongly coupled dots at the end it is rather straightforward to obtain low-energy Andreev bound states that merge toward zero and generate MZM-like zero-bias conductance peaks in the topologically trivial regime, well before the topological quantum phase transition. In a real system it is possible that superconductivity is suppressed by the magnetic field before reaching the critical value V_Zc. In such a scenario, a robust ZBCP that sticks to zero energy over a significant field range is entirely caused by (topologically trivial) merging ABSs, rather than by (non-Abelian) MZMs. We speculate that the rigid zero-energy state shown in Fig. S6 of Ref. [20] is an example of such a trivial (nearly) zero-energy state.
Next, we address the following question: can one discriminate between an MZM-induced zero-bias conductance peak and a trivial, ABS-induced ZBCP based on the height of the peak at zero temperature? More specifically, does the observation of a quantized peak guarantee its MZM nature? In short, the answer is no. However, observing a quantized ZBCP that is robust against small variations of parameters such as the Zeeman field, the chemical potential, and external gate potentials provides a strong indication that the peak is probably not generated by merging ABSs partially localized outside the proximitized segment of the wire, i.e., scenarios (a) and (b) in Fig. 14. The results that support this conclusion are shown in Fig. 15. Each panel in Fig. 15 shows the (low-energy) differential conductance at T = 0 for three different values of the Zeeman field, marked "1", "2", and "3" in the corresponding panel of Fig. 14. Generally, the largest ZBCP value is obtained for Zeeman fields corresponding to the first zero-energy crossing, V*_Z, marked "1" in Fig. 14. In this case, the maximum height exceeds 2e²/h. However, for the fully covered dot (bottom panel) the excess conductance consists of a very narrow secondary peak that would be practically unobservable at finite temperature. In fact, we find that in the case of a fully covered dot, at low temperature, the conductance peak height is practically quantized in both the trivial regime (field cuts "1" and "2") and the topological regime (field cut "3"), regardless of whether the ZBCP is split or not. By contrast, for the uncovered and half-covered dots (top and middle panels, respectively) the peak height can take any value between 0 and 4e²/h in the trivial regime and becomes quantized in the topological regime. Of course, a quantized ZBCP can be obtained even in the trivial regime at certain specific values of the Zeeman field, but its quantization is not robust against small variations of the control parameters (e.g., Zeeman splitting, chemical potential, SC gap).
A key requirement for the realization of topological superconductivity and Majorana zero modes in semiconductor-superconductor hybrid structures is that the applied magnetic field be perpendicular to the effective Rashba spin-orbit (SO) field. More specifically, the MZMs are robust against rotations of the applied field in the plane perpendicular to the SO field, but become unstable as the angle between the applied and the SO fields (which corresponds to π/2 − θ in the inset of Fig. 16) is reduced. The natural question is whether the nearly-zero ABS modes induced by a quantum dot (or another type of inhomogeneity) show a similar behavior. We find that the coalescing ABSs (and, more generally, the low-energy spectrum) are insensitive to rotations of the applied field in the plane perpendicular to the effective SO field (i.e., the x-z plane in Fig. 16). This property is illustrated by the spectrum shown in the top panel of Fig. 16, corresponding to a field oriented along the z axis. Note that this spectrum is identical to Fig. 14(b), which corresponds to a field oriented along the x axis. By contrast, when the field is rotated in the x-y plane, the nearly-zero ABS mode becomes unstable (see the middle and bottom panels of Fig. 16). In addition, the spectrum becomes gapless above a certain (angle-dependent) value of the Zeeman splitting. We conclude that the coalescing ABSs behave qualitatively similarly to the MZMs with respect to rotations of the field orientation. To further support this conclusion, we calculate the low-energy spectra of the wire-dot system in the Majorana regime for two different orientations of the applied magnetic field. The results are shown in Fig. 17. We note that rotating the field in the x-z plane (i.e., the plane perpendicular to the SO field) does not affect the spectrum. By contrast, rotating the field in the x-y plane changes the low-energy features in a manner similar to that discussed in the context of the coalescing ABSs.
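In a lattice model of the type sketched earlier, changing the field orientation amounts to changing which spin Pauli matrices multiply V_Z in the Zeeman term (the spin-orbit field being along y in that convention). The following hedged snippet shows the corresponding Zeeman matrix for rotations in the x-z plane (which should leave the low-energy spectrum unchanged) and in the x-y plane (which destabilizes the near-zero modes):

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def zeeman_term(V_z, theta, plane='xz'):
    """Zeeman matrix (spin space, to be tensored with tau_0) for a field of magnitude V_z
       rotated by an angle theta away from x.  Convention: Rashba spin-orbit field along y,
       so rotations in the x-z plane stay perpendicular to it, while rotations in the
       x-y plane tilt the field toward it."""
    if plane == 'xz':
        return V_z * (np.cos(theta) * sx + np.sin(theta) * sz)
    if plane == 'xy':
        return V_z * (np.cos(theta) * sx + np.sin(theta) * sy)
    raise ValueError(plane)

# example: the theta = pi/3 rotation in the x-y plane considered in Figs. 16 and 17
print(zeeman_term(V_z=1.0, theta=np.pi / 3, plane='xy'))
```

Substituting this matrix for the fixed V_z sigma_x term in the earlier tight-binding sketch allows the angle dependence of both the ABS and the MZM regimes to be explored.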
Before concluding this section, we compare a hybrid system having a (strongly coupled) quantum dot near one end with an inhomogeneous system having a smooth effective potential, as shown in Fig. 13(c). In the language of Ref. [36], this would correspond to a long-range inhomogeneity, in contrast to the quantum dots, which can be viewed as short-range inhomogeneities. The low-energy spectrum of the non-homogeneous system is shown in Fig. 18. At zero field, the energy of the ABS is lower than the induced gap as a result of the nanowire being only partially covered (about 90%) by the parent superconductor, as discussed above. Note the striking absence of a minimum of the quasiparticle gap, which would signal the topological quantum phase transition in a homogeneous system. The merging ABSs form a very robust nearly-zero mode, which, according to the analysis in Ref. [36], consists of partially overlapping Majorana bound states. The low-energy differential conductance corresponding to the nearly-zero mode of Fig. 18 is shown in Fig. 19 (as a function of the Zeeman field for three different values of the bias voltage) and Fig. 20 (as a function of the bias voltage for three different Zeeman fields, marked "1", "2", and "3" in Fig. 18). The low-bias differential conductance traces shown in Fig. 19 have values between 0 and (almost) 4e²/h. In particular, the differential conductance exceeds 2e²/h in the vicinity of the first zero-energy crossing, V_Z ≈ 0.3 meV (see Fig. 18). However, in practice it would be extremely difficult to observe a ZBCP larger than 2e²/h at finite temperature. This is due to the fact that the contribution exceeding the quantized value forms a very narrow secondary peak (see Fig. 20, left panel), similar to the completely covered dot shown in Fig. 15. We interpret the double-peak structure of the ZBCP as resulting from the partially overlapping Majorana bound states (MBSs) that form the ABS. The broad peak is generated by the MBS localized closer to the wire end (which is strongly coupled to the metallic lead), while the narrow additional peak is due to the MBS localized further away from the end (which is weakly coupled to the lead). Finally, we note that the low conductance values in Fig. 19 are due to the splitting of the ZBCP. However, the maximum value of the ZBCP is practically quantized at very low (but finite) temperature, as evident from the results shown in Fig. 20.

FIG. 17. (color online). Dependence of the low-energy spectrum on the field orientation for a wire-dot system in the Majorana regime. The model parameters are the same as in Fig. 14(b), except the chemical potential, which is set to µ = −0.25∆_ind. The top panel corresponds to a field oriented along the wire (or any other direction in the x-z plane), while the bottom panel corresponds to an angle θ = π/3 in the x-y plane (see inset of Fig. 16). Note the similarity with the bottom panel of Fig. 16.

FIG. 18. (color online). Low-energy spectrum as a function of the applied Zeeman field for a system with a smooth nonhomogeneous effective potential [see Fig. 13, panel (c)]. The length of the parent SC is the same as in the case of the half-covered quantum dot (i.e., a segment of the wire of about 125 nm is not covered). Note the robust (nearly) zero mode and the absence of a well-defined minimum of the quasiparticle gap corresponding to the crossover between the trivial and the "topological" regimes.
In summary, the results presented in this section lead us to the following conclusions. First, semiconductor-superconductor hybrid systems having strongly coupled quantum dots at the end of the wire, which can be viewed as systems with short-range potential inhomogeneities, generate ABSs that, quite generically, tend to merge at zero energy with increasing Zeeman field, but still within the topologically trivial regime. Second, ABSs with energies at V_Z = 0 significantly lower than the induced gap and with low values of the merging field V*_Z are likely to generate extremely robust topologically trivial ZBCPs. Third, measuring a quantized (2e²/h) ZBCP does not provide definitive evidence for Majorana zero modes (although finding ZBCP quantization which is robust against variations in many parameters, e.g., magnetic field, chemical potential, tunnel barrier, carrier density, would be very strong evidence for the existence of MZMs, as emphasized already in this paper). However, trivial conductance peaks generated by merging ABSs having wave functions partially localized outside the superconducting region are generally expected to produce ZBCPs with heights between 0 and 4e²/h. In this regime, an accidental quantized peak will not be robust against small variations of the control parameters. By contrast, if the wave function is entirely inside the proximitized region, the ZBCP is (practically) quantized and cannot be distinguished from an MZM-induced conductance peak by a local tunneling measurement. In this case, a minimal requirement for the Majorana scenario is to be able to reproduce the (robust) ZBCP by performing a tunneling measurement at the opposite end of the wire, in the spirit of Ref. [50]. Finally, our fourth conclusion is that very similar phenomenologies can be generated using rather different effective potentials (i.e., the effective "quantum dot" leading to the ABS could arise from many different physical origins and could lie inside or outside the nanowire). A better understanding of the profile of the effective potential along the wire (which can be obtained, for example, by performing detailed Poisson-Schrödinger calculations) represents a critical task in this field.

FIG. 19. (color online). Dependence of the low-energy differential conductance on the Zeeman splitting for the nonhomogeneous wire with the spectrum shown in Fig. 18. The black, orange (light gray), and red (gray) lines correspond to bias voltages V_bias = 0.05, 0.15, and 0.75 µV, respectively.
VII. UNDERSTANDING NEAR-ZERO-ENERGY ANDREEV BOUND STATES FROM REFLECTION MATRIX THEORY
The absence of level repulsion in symmetry class D enhances the likelihood of a pair of levels sticking together at zero energy as some parameter, such as the Zeeman splitting or the chemical potential, is varied, as discussed throughout this manuscript. Despite this generic fact associated with symmetry class D, which describes systems containing Zeeman splitting, spin-orbit coupling, and superconductivity, the range of Zeeman splitting over which the spectrum sticks is not guaranteed to be large. In fact, the range of Zeeman field is typically not large for most disordered Hamiltonians [29]. In the experiment [20] and in our simulations (with quantum dots, but no disorder), however, the zero-sticking propensity of trivial ABSs extends over a large range of Zeeman splitting (V_Z).

FIG. 20. (color online). Zero-temperature differential conductance as a function of the bias voltage for three different values of the Zeeman field, marked "1", "2", and "3" in Fig. 18.
A more specific mechanism that provides relatively robust (compared to the usual disordered class D case) near-zero-energy states within symmetry class D involves so-called smooth confinement [34,36]. The essential idea is that a large Zeeman splitting (V_Z) compared to the SC pairing (∆) suppresses conventional s-wave pairing relative to p-wave pairing, leading to a tendency for the formation of Majorana states at the end of the system for each spin-polarized channel in the nanowire. However, the end potential typically scatters between the different channels and gaps the Majorana fermions out, i.e., an MZM splitting develops. If this inter-channel scattering is weak, the Majorana splitting is small and there is a near-zero-energy state in such a potential. This near-zero-energy mode is, however, nontopological, as it arises from split Majorana modes at the wire end. Thus, the ABS producing the ZBCP is a composite of two MZMs, only one of which contributes to tunneling, leading to a robust almost-zero mode in the trivial regime.
In subsection VII A, we first show the energy spectra for the quantum dot-proximitized nanowire hybrid structure for various parameters (e.g., chemical potential µ, nanowire length L, dot length l) in order to exhibit the trend of zero-energy sticking in the parameter regime. Then, in subsection VII B, we use reflection matrix theory to explain why such zero-sticking bound states exist in the relevant parameter regime.
A. Energy spectra for hybrid structures with various parameters
We show the energy spectra for various hybrid structures in Fig. 21. The relevant parameters that we focus on, and thus vary between panels, are the chemical potential µ, the length of the nanowire L, and the length of the quantum dot l, while all other parameters (e.g., the pairing potential ∆_0 = 0.9 meV) are kept the same as the default values introduced in the previous sections. Figure 21(a) shows the energy spectrum of a typical hybrid structure discussed in the previous sections, with parameters conforming to the known values of the realistic experimental setup. There is a finite range of Zeeman splitting over which the energy of the topologically trivial ABSs sticks around zero. From Fig. 21(b) to (d), we successively increase the chemical potential µ, the length of the semiconductor-superconductor nanowire L, and the length of the quantum dot l. Finally, with all three parameters µ, L, l large in Fig. 21(d), the energy of the trivial ABS is even closer to zero, and, even more strikingly, the range of Zeeman splitting over which such near-zero-energy ABSs persist becomes extremely large, extending from a few times the pairing potential up to the chemical potential. The trend of decreasing ABS energy and increasing range of zero-energy sticking shown in Fig. 21(a)-(d) indicates that Fig. 21(a) and Fig. 21(d) are essentially adiabatically connected. In the following subsection, we discuss why such near-zero-energy ABSs exist over such a large range of Zeeman field in the large µ, L, l limit using reflection matrix theory. Since the realistic situation is adiabatically connected to this large µ, L, l limit, our understanding also applies to most of the hybrid structures discussed in the previous sections. Note that this discussion also explains why the zero-sticking of ABSs mostly arises in the large chemical potential regime.
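The trend of Fig. 21 can be explored with the same illustrative tight-binding construction used earlier by enlarging the chemical potential, the number of wire sites (proportional to L), and the number of dot sites (proportional to l), and then measuring over what fraction of a Zeeman-field scan the lowest state stays within a small energy window of zero. The parameter sets below only loosely mimic the progression of panels (a)-(d), and the 10 µeV window is an arbitrary stand-in for the experimental resolution.

```python
import numpy as np
# reuses the illustrative bdg_hamiltonian(...) from the earlier sketch;
# N is the number of wire sites (~ L) and N_dot the number of dot sites (~ l)

cases = {
    "(a) baseline":      dict(mu=3.0, N=200, N_dot=30),
    "(b) larger mu":     dict(mu=8.0, N=200, N_dot=30),
    "(c) + longer wire": dict(mu=8.0, N=400, N_dot=30),
    "(d) + longer dot":  dict(mu=8.0, N=400, N_dot=80),
}
Vz_grid = np.linspace(0.5, 6.0, 23)
for label, kw in cases.items():
    lowest = [np.min(np.abs(np.linalg.eigvalsh(bdg_hamiltonian(V_z=Vz, **kw))))
              for Vz in Vz_grid]
    stuck_fraction = np.mean(np.array(lowest) < 0.01)   # within 10 ueV of zero
    print(label, "zero-sticking fraction of the scanned V_z range:", stuck_fraction)
```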
B. Understanding zero-energy sticking from reflection matrix theory
In the previous subsection, numerical simulations gave strong evidence that the energy of the ABSs approaches zero and that the range of such near-zero-energy sticking increases with increasing chemical potential, increasing nanowire length, and increasing quantum dot length. Here we try to understand this phenomenon using reflection matrix theory. The setup is shown in Fig. 22. Although the NS junction setup is exactly the same as that shown in Fig. 1, an imaginary piece of semiconductor is added between the quantum dot and the semiconductor-superconductor nanowire for the discussion of the reflection matrix theory. This imaginary semiconductor can also be regarded as part of the quantum dot, but with a nearly homogeneous potential. The total reflection matrix of the hybrid structure is given by Eq. (16), where r_b is the reflection matrix for the incoming lead modes reflected by the barrier, t is the transmission matrix for the lead modes transmitted into the semiconductor, r_SC is the reflection matrix for the semiconductor modes reflected by the proximitized nanowire, r_QD is the reflection matrix for the semiconductor modes reflected by the quantum dot, and t' is the transmission matrix for the semiconductor modes transmitted back into the lead. The near-zero-energy differential conductance is given by Eq. (17), where r_he is the Andreev reflection block of the total reflection matrix; the last step in that equation holds due to the unitarity of the total reflection matrix when the bias voltage is below the superconducting gap. The Andreev reflection is contained in the second term of Eq. (16), and the pole of (1 − r_SC r_QD)⁻¹ corresponds to a peak of the differential conductance. On the other hand, the pole of the reflection matrix is also the condition for the formation of a bound state, i.e., a bound state forms when Eq. (18) is satisfied. In the large Zeeman field limit, i.e., V_Z ≫ ∆, α_R, the spin-orbit-coupled nanowire can be thought of as two spin-polarized bands with a large difference in chemical potential and Fermi momenta. When considering the scattering process between the effectively spin-polarized semiconductor and the semi-infinite superconductor, the momentum must be conserved in the limit of the Andreev approximation ∆ ≪ µ. The constraint of momentum conservation prohibits normal reflection, both within the same spinful channel and into the other one, due to the large momentum transfer such processes would require (set by the Fermi momenta). Thus the scattering process between the semiconductor and the superconductor can be thought of as two effectively independent perfect Andreev reflection processes, one for each spin-polarized channel, and the corresponding reflection matrix for each channel can be written as in Eq. (19). For the scattering process between the semiconductor and the quantum dot, when the dot potential is smooth, normal reflection only connects states at the Fermi level within the same spinful channel, and thus again the two spin-polarized bands of the semiconductor can be treated as independent of each other; the reflection matrix for each band can then be written as in Eq. (20). The numerical evidence for the forms of r_SC and r_QD is shown in Fig. 23, consistent with our argument in the large Zeeman field and Andreev approximation limit. It is easy to see that such zero-bias reflection matrices satisfy the condition for the formation of a bound state, i.e., Eq. (18). This indicates that in the large Zeeman field and Andreev approximation limit, the semiconductor-superconductor nanowire can be seen as consisting of two nearly spin-polarized p-wave superconductors, each of which hosts an MZM at the wire end.

FIG. 22. (color online). A schematic of the NS junction setup. Although the setup is exactly the same as that shown in Fig. 1, an imaginary piece of semiconductor is added between the quantum dot and the semiconductor-superconductor nanowire for the discussion of reflection matrix theory.
Since the inter-channel coupling between the two p-wave superconductors is weak in the presence of a smooth dot potential at the wire end, the two MZMs from the two channels do not gap each other out; instead, they form a near-zero-energy ABS.
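The zero-energy bound-state condition, read as a pole of (1 − r_SC r_QD)⁻¹, i.e., det(1 − r_SC r_QD) = 0, can be checked directly with idealized reflection matrices of the kind just described: per spin-polarized channel, a purely Andreev reflection off the proximitized wire and a purely normal, unit-modulus reflection off a smooth dot, each constrained by particle-hole symmetry at ω = 0. The parametrization below is our own illustration (the phases are arbitrary); it verifies numerically that the determinant vanishes for any such phases, and it includes a small helper for the subgap conductance in terms of the Andreev block r_he.

```python
import numpy as np

rng = np.random.default_rng(1)

def r_sc_channel(alpha):
    """Zero-energy reflection matrix (basis: electron, hole) of one spin-polarized channel
       reflected by the proximitized wire: purely Andreev, with the zero-energy
       particle-hole constraint r_eh = r_he^* (illustrative parametrization)."""
    return np.array([[0.0, np.exp(-1j * alpha)],
                     [np.exp(1j * alpha), 0.0]])

def r_qd_channel(beta):
    """Zero-energy reflection matrix of one channel reflected by a smooth dot potential:
       purely normal, with the zero-energy constraint r_hh = r_ee^*."""
    return np.array([[np.exp(1j * beta), 0.0],
                     [0.0, np.exp(-1j * beta)]])

# two (nearly decoupled) spin-polarized channels, each with arbitrary reflection phases
alphas, betas = rng.uniform(0, 2 * np.pi, 2), rng.uniform(0, 2 * np.pi, 2)
for ch in range(2):
    M = r_sc_channel(alphas[ch]) @ r_qd_channel(betas[ch])
    # vanishes for any phase choice, signalling a zero-energy bound state in each channel
    print(f"channel {ch}: |det(1 - r_SC r_QD)| =", abs(np.linalg.det(np.eye(2) - M)))

def andreev_conductance(r_he):
    """Subgap conductance in units of e^2/h, G = 2 Tr(r_he^dag r_he), in the convention where
       spin-resolved channels are traced over (one perfect Andreev channel gives 2 e^2/h)."""
    return 2.0 * np.real(np.trace(r_he.conj().T @ r_he))
```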
Although the above discussion assumes a large chemical potential, a long semiconductor-superconductor nanowire, and a long quantum dot, the conclusion applies well to the realistic situation with intermediate chemical potential and finite nanowire and dot lengths, since the two situations are adiabatically connected to each other. This conclusion is explicitly verified by the extensive numerical results presented in this work.

FIG. 23. (color online). Numerical evidence for the forms of r_SC and r_QD. The upper panels show the Andreev reflection amplitude for each spinful channel (Eq. (19)) as a function of nanowire length; in the long-nanowire limit, the Andreev reflection becomes perfect. The lower panels show the normal reflection amplitude for each spinful channel (i.e., the |e^{iβ}| in Eq. (20)) as a function of dot length.
VIII. CONCLUSION
We have developed a non-interacting theory for the low-lying energy spectra and the associated tunneling transport properties of quantum dot-nanowire-superconductor hybrid structures, focusing on quantum dots strongly coupled to the proximitized wire. The theory is motivated by a striking recent experiment [20] reporting the intriguing coalescence of Andreev bound states into zero-energy states characterized by zero-bias conductance peaks that mimic the predicted Majorana zero-mode behavior. The specific question we address in our work is whether the midgap coalescence of Andreev bound states and their propensity to stick together at zero energy necessarily imply a metamorphosis of Andreev states into topological Majorana modes in the presence of spin-orbit coupling and Zeeman splitting. The topological Majorana bound states are operationally defined as pairs of well-separated Majorana zero modes localized at the opposite ends of the wire, while the Andreev bound states, which can be viewed as pairs of overlapping (or partially overlapping) Majorana zero modes, are localized near one end of the hybrid system. Our numerical simulations produce essentially exact qualitative agreement with the data of Ref. [20], reproducing the observed features of the Andreev states as functions of Zeeman splitting and chemical potential, although a quantitative comparison (and hence a definitive conclusion) is impossible because the experimental parameters to be used in the theory are mostly unknown.
Our conclusion is that in strongly coupled dot-nanowire hybrid structures (and in the presence of superconductivity, Zeeman splitting, and spin-orbit coupling), Andreev states generically coalesce around zero energy, producing zero-bias tunneling conductance values that mimic Majorana properties although the physics is non-topological. In fact, the transport properties of such "accidental" almost-zero-energy trivial Andreev states in class D systems are (locally) difficult to distinguish from the conductance behavior of topological Majorana zero modes. We show that this zero-energy-sticking behavior of trivial Andreev bound states (superficially mimicking topological Majorana behavior) persists all the way from an isolated (i.e., non-superconducting) quantum dot at the end of the nanowire to a quantum dot completely immersed inside the nanowire (i.e., superconducting), as long as finite Zeeman splitting and spin-orbit coupling are present. Our theory thus connects the recent observations of Deng et al. [20] to the earlier observations of Lee et al. [32], who studied Andreev bound states in a superconducting dot (not attached to a long nanowire), establishing that the physics in these two situations interpolates smoothly. In both these cases zero-bias conductance peaks may arise from trivial Andreev bound states in the presence of superconductivity, spin-orbit coupling, and Zeeman splitting. Of course, in a small quantum dot the concept of MZMs does not apply because of the strong overlap between the two ends, whereas in the Deng et al. experiment (i.e., in a dot-nanowire hybrid system) the ZBCP may arise from either a trivial ABS or a topological MZM. We establish, however, that in both cases the ABS can be thought of as overlapping MZMs, and hence the generic zero-sticking property of the ABS arises from the combination of spin-orbit coupling, spin splitting, and superconductivity. An immediate (and distressing) conclusion of our work is that the observation of a zero-bias conductance peak (even if the conductance value is close to the expected 2e²/h quantization) cannot by itself be construed as evidence supporting the existence of topological Majorana zero modes. In particular, both trivial Andreev bound states and topological Majorana bound states may give rise to zero-bias peaks, and there is no simple way of distinguishing them just by looking at the tunneling spectra. Since the possibility that a given experimental nanowire contains some kind of accidental quantum dot can never be ruled out, a tunneling conductance exhibiting zero-bias peaks in any nanowire may simply be the result of almost-zero-energy Andreev bound states in the system. Our work shows this generic trivial situation to be a compelling scenario, bringing into question whether any of the observed zero-bias conductance peaks in various experiments can by themselves be taken as strong evidence for the existence of Majorana zero modes, since the possibility that these ZBCPs arise from accidental trivial ABSs cannot a priori be ruled out. Consequently, a zero-bias conductance peak obtained by tunneling from one end of the wire cannot be accepted as a compelling topological Majorana signature (even when the height of the peak is quantized at 2e²/h), since a likely alternative scenario is that the zero-bias peak is in fact a signature of a trivial Andreev bound state associated with a strongly coupled quantum dot or another type of inhomogeneity (unintentionally) present in the system.
One must carry out careful additional consistency checks on the observed ZBCPs in order to distinguish between ABSs and MZMs.
Therefore, to be more decisive, transport experiments must demonstrate the robustness of the quantization against all possible variations of the barrier. One possibility is to study avoided crossings between levels in the quantum dot and a potential Majorana state [40,41], which essentially eliminates the quantum dot. This can be done, for example, by extending the normal region of the semiconductor wire between the metallic and superconducting leads shown in Fig. 1. By such an extension, one can enhance the gate control so as to create a single-channel quantum point contact. The quantization of the conductance (at low enough temperature compared to the transmission of the point contact) is still a topological invariant [55]. In addition, one should always check (by using suitable externally controlled gate potentials) the stability of any observed ZBCP against variations in the tunnel barrier as well as in the electrostatic environment near the wire ends (as in Sec. V). This test is absolutely essential in our opinion, since the ABS-induced trivial ZBCP should manifest splitting as the dot potential is tuned strongly. Even with these checks, it is still likely that transport measurements will need to include additional consistency tests to confirm the nonlocal nature of the Majorana modes (e.g., observing the ZBCPs from both ends of the wire, measuring nonlocal correlations) and their robustness (e.g., robustness of the ZBCP quantization against variations of the barrier height, Zeeman splitting, chemical potential, and other variables). Any type of hybrid structure that is not capable of passing these relatively straightforward tests of ZBCP robustness would not be suitable for more complex experiments involving interferometry, fusion, or braiding. In short, a ZBCP is only a necessary condition for an MZM, and could easily arise also from non-topological zero-energy ABSs in class D systems.
The obvious consistency test is of course the robustness of the ZBCP against variations in all controllable experimental parameters. The topological MZM-induced ZBCP should show stable robustness, whereas the ABS-induced ZBCP will not. In particular, we discuss in Sec. V that varying the dot potential will lead to splitting, or possibly even disappearance, of the trivial ABS-induced ZBCP, whereas the MZM-induced ZBCP should be relatively stable. This, in principle, enables a unique distinction between the two cases, but in reality it may not be so simple. Since the "quantum dot" is often not obvious, it is not clear how to vary its potential. Perhaps the most obvious check is to use additional gates with varying gate voltages to ensure the complete stability of the observed ZBCP. Another possible test is rotating the magnetic field, but here both trivial ABS- and topological MZM-induced ZBCPs go away as the field is rotated toward the spin-orbit direction in the wire (and both are unaffected by rotations in the plane perpendicular to the spin-orbit direction). Although there are quantitative differences between the two cases, it may not be easy to be definitive. Seeing correlations in the ZBCP while tunneling from the two ends of the wire separately may be quite definitive, since it is unlikely that the same ABS can be operational at both ends of the wire (as this requires identical quantum dot confinements at the two ends), but this kind of correlated tunneling measurement from both wire ends has not yet been successfully performed in the laboratory.
We find that generically the ABS-induced ZBCPs require high values of the chemical potential, µ > ∆, and for µ ≫ ∆ the trivial zero-sticking region can extend over a very large Zeeman field range, from V_Z = ∆ up to µ, with the eventual topological phase emerging at the still higher field √(∆² + µ²). However, some non-universal beating or apparent oscillation of the ZBCP around zero energy is likely, since the ABSs do not stick precisely to zero energy: there is no exponential protection here, unlike the corresponding MZM case in the long-wire limit. On the other hand, the MZM-induced ZBCPs also manifest an apparent beating around zero energy due to MZM splitting oscillations arising from the Majorana overlap invariably present in any finite wire. (We note that an exponentially small MZM splitting can only occur in very long wires, since at high magnetic field the induced gap is small, making the SC coherence length very large.) The question therefore arises whether the oscillatory behaviors of the two situations (the ABS beating around zero energy in the trivial phase because of the zero-sticking in class D SCs versus the MZM oscillating around zero energy in the topological phase due to the Majorana overlap from the two ends) can somehow be used to distinguish trivial and topological zeros. This question was addressed in a related, but somewhat different, context by Chiu et al. [37] in trying to understand the experiment of Albrecht et al. [17]. In fact, Chiu et al. showed [37] that the data of Albrecht et al. claiming exponential Majorana protection [17] can be understood entirely by invoking ABS physics, consistent with our findings in the current work. We show in Appendix F our calculated low-lying energy spectra for both trivial ABS and topological MZM approximate zero modes, in simple nanowire and hybrid (i.e., nanowire + dot) structures respectively, keeping all other parameters very similar. It is clear that the oscillatory or beating structures in the two cases are superficially similar, except that the ABS oscillations are non-universal whereas the MZM oscillations always manifest increasing amplitude with increasing V_Z, by virtue of the induced gap decreasing with increasing V_Z.
We mention that although we have used the terminology 'class D' to describe the system and the physics studied in the current work, the standard terminology for class D systems [29-31, 52, 53] specifically invokes disorder and discusses random or chaotic systems, whereas we are discussing clean systems with no disorder. By 'class D' we mean only the simultaneous presence of spin-orbit coupling, Zeeman splitting, and superconductivity, and as such our ABS-induced ZBCP is fundamentally distinct from those discussed in Refs. [29-31,53].
Before concluding, we point out that, although Ref. [20] contains some of the most compelling experimental evidence for the existence of stable almost-zero-energy Andreev bound states in quantum dot-nanowire hybrid structures, there have been several earlier experiments hinting at the underlying Andreev physics discussed in our work. The foremost in this group is, of course, the experiment by Lee et al. [32], who studied zero-bias peaks induced by Andreev bound states in quantum dots in the presence of spin-orbit coupling, Zeeman splitting, and superconductivity. But a re-evaluation of the experimental data in the InAs-Al system of Das et al. [13], where the nanowires were typically very short (i.e., almost dot-like), indicates that the zero-bias peak in that experiment is most likely a precursor of the Deng et al. experiment, with Andreev bound states coming together and coalescing around midgap with increasing Zeeman splitting. Of course, in a very short nanowire the midgap state is an operational Andreev bound state by construction, since the condition of "well separated" Majorana bound states cannot be satisfied because of the short wire length. By contrast, in long wires with quantum dots (engineered or unintentional) and other types of inhomogeneities, the emergence of topological Majorana modes is possible (and may very well have happened for some of the ZBCPs observed in Ref. [20]), but the observation of a robust zero-bias peak does not guarantee their presence (since trivial coalescing Andreev bound states are a likely alternative). Recent theoretical work by Chiu et al. [37] supports the idea that the experimental observation of Coulomb-blockaded zero-bias peaks by Albrecht et al. [17] in a quantum dot-nanowire hybrid structure most likely arises from the presence of Andreev bound states in the system (in combination with MZMs). Finally, very recent unpublished work from Delft and Copenhagen [56,57] hints at the possibility that zero-bias conductance peaks manifesting conductance values close to 2e²/h may have now been observed in nanowire systems. These peaks could be Majorana-induced, but (trivial) Andreev bound states generated by unintentional quantum dots present in these structures represent a likely alternative scenario that a priori cannot be ruled out without a systematic study of the barrier dependence, as discussed in the last paragraph. To understand these brand new experiments in high-quality epitaxial semiconductor-superconductor hybrid structures, more work is necessary involving both experiment (i.e., performing the consistency tests) and theory (e.g., modeling the effective potential profiles). In particular, robustness of the ZBCP against variations in parameters (e.g., magnetic field, chemical potential, tunnel barrier, dot confinement) is essential before MZM claims can be taken seriously, even when the ZBCP is quantized at 2e²/h.
The key message of our work is that Andreev bound states can coalesce in the trivial superconducting regime of nanowires, producing surprisingly stable almost-zero-energy modes mimicking Majorana zero-mode behavior even in completely clean, disorder-free systems, thus making it difficult to differentiate between Andreev bound states and Majorana zero modes in some situations. The existence of a zero-bias conductance peak is therefore at best a necessary condition for the existence of Majorana zero modes.
The differential conductance for one-band and two-band hybrid structures with a constant s-wave pairing is shown in Fig. 25. In Fig. 25(a), the low-energy (small-bias) behavior of the conductance is quite similar to the case with the self-energy in the main text, shown in Fig. 4(a), while the high-energy (large-bias) behavior is quite different because there is no quasiparticle continuum in this case, leading to clear patterns in the conductance. For the two-band model, with a second band at larger chemical potential µ = 10 meV, the total conductance is approximately the sum of the conductances of the two bands taken separately. The differential conductance is shown in Fig. 25(b). In addition to behavior almost identical to that of the one-band model, a significant new feature is that the conductance from the lowest few eigenstates of the second band is much larger and broader than for the first band. This is because a higher chemical potential effectively lowers the tunneling barrier, thus enhancing the conductance.

Appendix C: Energy spectra with and without spin splitting and spin-orbit coupling

Here we show the calculated energy spectra of hybrid structures with and without Zeeman spin splitting and spin-orbit coupling in Figs. 26 and 27. As shown in the lower panels of Fig. 26, the spectra have no zero-energy states when the Zeeman splitting is turned off. On the other hand, as shown in the lower panels of Fig. 27, the low-energy spectra without spin-orbit coupling are composed of straight lines. In these cases the energy spectra have the simple analytic form E = V_Z ± √(ε² + ∆²), where ε is the eigenenergy of the nanowire without Zeeman splitting and spin-orbit coupling, and the energy scales linearly with the Zeeman field. It is clear that superconductivity, along with both Zeeman splitting and spin-orbit coupling, is necessary for obtaining low-energy Andreev bound states sticking to the midgap.
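This analytic form is easy to verify numerically for a uniform pairing potential: without spin-orbit coupling the spin along the Zeeman axis is conserved, so the BdG spectrum is just the set of ±√(ε² + ∆²) values rigidly shifted by ±V_Z. A self-contained check with illustrative parameters is:

```python
import numpy as np

# numerical check of E = V_Z +/- sqrt(eps^2 + Delta^2) when spin-orbit coupling is absent
# (uniform pairing; all parameter values are illustrative, in meV)
N, t, mu, Delta, V_z = 80, 25.0, 3.0, 0.9, 1.3
h = (np.diag(np.full(N, 2 * t - mu))
     + np.diag(np.full(N - 1, -t), 1) + np.diag(np.full(N - 1, -t), -1))
eps = np.linalg.eigvalsh(h)            # eigenenergies without Zeeman and spin-orbit terms

tau_z = np.diag([1.0, -1.0])
tau_x = np.array([[0.0, 1.0], [1.0, 0.0]])
sig_x = np.array([[0.0, 1.0], [1.0, 0.0]])
H_bdg = (np.kron(np.kron(h, tau_z), np.eye(2))                  # kinetic term, tau_z
         + Delta * np.kron(np.kron(np.eye(N), tau_x), np.eye(2))  # uniform s-wave pairing
         + V_z * np.kron(np.kron(np.eye(N), np.eye(2)), sig_x))   # Zeeman splitting

E_numeric = np.sort(np.linalg.eigvalsh(H_bdg))
E_formula = np.sort(np.concatenate([s * V_z + p * np.sqrt(eps**2 + Delta**2)
                                    for s in (1, -1) for p in (1, -1)]))
print("max |E_numeric - E_formula| =", np.max(np.abs(E_numeric - E_formula)))
```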
Appendix D: Expansion of the projected self-energy term in the quantum dot subspace

We can constrain the form of the projected self-energy F(ω) by making use of the particle-hole symmetry of the nanowire, Eq. (D1), where P = σ_y ⊗ τ_y K. Thus, for any eigenstate |ψ_a⟩ with eigenenergy E there must be another state |ψ_ā⟩ = P|ψ_a⟩ with eigenenergy −E. Applying particle-hole symmetry to the projected self-energy F(ω) in the quantum dot subspace yields Eq. (D2). If we expand the 2 × 2 matrix F(ω) in Pauli matrices, F(ω) = f_0(ω)γ_0 + f_x(ω)γ_x + f_z(ω)γ_z [Eq. (D3)], it is easy to see from Eq. (D2) that f_0 and f_x are odd functions of ω, while f_z is an even function of ω. The absence of a γ_y term is due to the fact that the Hamiltonian H_NW is accidentally real.
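The parity statement of this appendix can be checked numerically once F(ω) is available (for instance from the embedding construction sketched in Sec. IV B): decompose F into Pauli components via f_i = Tr(γ_i F)/2 and compare F(ω) with F(−ω). The snippet below only shows the decomposition and the intended check; the function F itself must be supplied from a particle-hole-symmetric model.

```python
import numpy as np

g0 = np.eye(2, dtype=complex)
gx = np.array([[0, 1], [1, 0]], dtype=complex)
gy = np.array([[0, -1j], [1j, 0]], dtype=complex)
gz = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_components(F_omega):
    """Decompose a 2x2 matrix as f_0 g0 + f_x gx + f_y gy + f_z gz using f_i = Tr(g_i F)/2."""
    return {name: np.trace(g @ F_omega) / 2
            for name, g in (("f0", g0), ("fx", gx), ("fy", gy), ("fz", gz))}

# intended parity check, with F(omega) supplied by a particle-hole-symmetric model:
# f0 and fx should be odd in omega, fz even, and fy should vanish when H_NW is real.
# for w in (0.05, 0.10, 0.20):
#     print(w, pauli_components(F(w)), pauli_components(F(-w)))
```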
Multi-species oral biofilm promotes reconstructed human gingiva epithelial barrier function
Since the oral mucosa is continuously exposed to abundant microbes, one of its most important defense features is a highly proliferative, thick, stratified epithelium. The cellular mechanisms responsible for this are still unknown. The aim of this study was to determine whether a multi-species oral biofilm contributes to the extensive stratification and primed antimicrobial defense of the epithelium. Two in vitro models were used: 3D reconstructed human gingiva (RHG) and oral bacteria representative of a multi-species commensal biofilm. The organotypic RHG consists of a reconstructed stratified gingiva epithelium on a gingiva fibroblast populated hydrogel (lamina propria). The biofilm was cultured from healthy human saliva and consisted of the typical commensal genus Granulicatella and the major oral microbiota genera Veillonella and Streptococcus. Biofilm was applied topically to RHG and host-microbiome interactions were studied over 7 days. Compared to unexposed RHG, biofilm-exposed RHG showed increased epithelial thickness, more organized stratification and increased keratinocyte proliferation. Furthermore, biofilm exposure increased production of the RHG antimicrobial proteins Elafin, HBD-2 and HBD-3, but not HBD-1, adrenomedullin or cathelicidin LL-37. Inflammatory and antimicrobial cytokine secretion (IL-6, CXCL8, CXCL1, CCL20) showed an immediate and sustained increase. In conclusion, exposure of RHG to commensal oral biofilm actively contributes to RHG epithelial barrier function.
human host-microbe interactions due to major differences in their physiology; (ii) 2D monolayer cell models do not adequately mimic the complexity of the native tissue and therefore fail to provide reliable information on morphological changes; (iii) most host-microbe interaction studies generally use mono-species, or a non-biofilm design (e.g. planktonic bacteria culture) whereas microbes in vivo form multi-species biofilm. This is important since a multi-species biofilm can form a microbial community with metabolic benefits which can better withstand environmental stress 9,10 ; (iv) most co-culture models (conventional submerged keratinocytes and fibroblasts) are limited to 48 hours (or less) of bacteria exposure 1,9,[11][12][13] or at the most 72 hours 14 , even though the interactions in vivo are in a lasting dynamic state. In this study we combined two state of the art in vitro models: a 3D reconstructed human gingiva model (RHG) and multi-species oral bacteria representative of commensal biofilm, in order to study host-microbiome interactions over an extensive period of time (7 days). The organotypic RHG consists of a reconstructed stratified oral gingiva epithelium on a gingiva fibroblast populated collagen 1 hydrogel which serves as the lamina propria. Previously this RHG model has been extensively characterized and used to investigate cytokine secretion after wounding and after chemical sensitizer exposure, and also to study pathogenic biofilm immune evasion after short term (24 hr) exposure 15,16 . The RHG shows many characteristics of native gingiva epithelium (e.g. high keratin 13 and keratin 17 expression, intermittent keratin 10 expression and extensive suprabasal involucrin expression), which distinguish it from, for example, skin epidermis. However, RHG currently fails to show the highly proliferative capacity which results in the characteristic thickened epithelium compared to skin epidermis 17,18 . This suggests that extrinsic, as well as intrinsic, properties of keratinocytes and fibroblasts are involved in regulating the important oral epithelium barrier properties. The aim of this study was to use the RHG model to determine whether oral microbiota contributes to oral epithelial barrier properties. The multi-species biofilm was cultured from healthy human saliva in vitro, and consisted of relevant numbers of bacterial species, the typical commensal genera Granulicatella, and predominant amounts of the major oral microbiota genera: Veillonella and Streptococcus 16 . This biofilm was exposed to the upper, stratified, air-exposed surface of RHG for 7 days and its effect on epithelial barrier properties was investigated. Epithelial stratification was assessed by quantifying the number of epithelial cell layers formed and by determining the expression of two typical proliferation markers: Ki67 and proliferating cell nuclear antigen (PCNA). Ki67 is necessary for cellular proliferation and strictly associated with active cell progression. PCNA aids DNA synthesis during DNA replication and is involved in repair after DNA damage. The expression of antimicrobial peptides (AMPs: Elafin, HBD-1, HBD-2, HBD-3, ADM and LL-37) 19,20 and the secretion of inflammatory, antimicrobial cytokines (IL-6, CXCL8, CXCL1 and CCL20), which are known to prime the host for counteracting potential pathogens, was also determined 21,22 .
Results
Histological features of healthy native gingiva. To maintain a resistant and healthy epithelial barrier to the environment, gingiva has developed specialized morphological and functional features. The thick gingiva epithelium consists of multiple keratinocyte layers connected to the basement membrane via deep rete ridges, and forms a protective barrier above the underlying lamina propria (Fig. 1a). Rapid proliferation and self-renewal of the epithelium accelerates the clearance of toxic exogenous substances, and is indicated by two proliferation markers: PCNA and Ki67 (Fig. 1a). PCNA, a protein which is essential for DNA replication during cell division, was found to be expressed abundantly throughout the epithelium. Ki67, a protein expressed in actively dividing cells but absent in quiescent cells, was expressed in the basal and lower suprabasal epithelial layers. The AMP elafin, a protein which inhibits serine proteases and provides protection to the host 23 , was strongly expressed within the granular layer of the gingiva epithelium where host-microbe interactions begin (Fig. 1a). Another AMP, HBD-2, a protein which is expressed in normal uninflamed gingiva 24 , was also found to be abundantly expressed throughout the gingiva epithelium (Fig. 1a).
Multi-species biofilm increases RHG proliferation and stratification.
Since native gingiva is continuously exposed to a richly diverse microbiota, we exposed RHG to cultured oral bacteria representative of multi-species commensal biofilm and determined its effect on epithelial phenotype over a 7 day period. Notably, epithelial thickness was increased 31% in RHG exposed to biofilm compared to unexposed RHG, resulting in a similar order of thickness to that observed in native gingiva (Figs 1 and 2). This increased epithelial thickness was already apparent after a culture period of only 7 days. In the presence of the biofilm, the epithelial layers became much more organized to form a compact barrier, with a dense inner basal cell layer and with suprabasal layers becoming more differentiated (flattened anuclear keratinocytes) towards the upper surface. Due to the structure of the collagen hydrogel, rete ridges were absent both in exposed and unexposed RHG. PCNA, an early proliferative biomarker, was activated directly upon biofilm exposure (within 24 hours) resulting in a 41% increase in positively staining cell nuclei throughout the epithelium, in line with native gingiva (Figs 1 and 2). After 7 days of culture, control unexposed RHGs were senescing whereas biofilm exposed RHGs were still actively proliferating with the result that more than twice as many proliferating Ki67 positive staining nuclei were found in biofilm exposed RHGs (Figs 1b and 2). Taken together, we can conclude from these results that biofilm actively contributed to the characteristic highly proliferative stratified oral mucosa.
Biofilm increases antimicrobial peptides and cytokine secretion in RHGs.
Next the influence of biofilm on the expression of epithelial antimicrobial peptides was determined. Increased amounts of Elafin were detected in the upper epithelial layers of biofilm exposed RHG, in line with Elafin location in native gingiva (Fig. 1). This was accompanied by a more than 2-fold increase in Elafin secretion into culture supernatants of biofilm exposed RHG compared to unexposed RHG, and this high Elafin secretion was maintained for the entire 7 day exposure period (Fig. 2d). The influence of biofilm on AMPs was further investigated on the gene expression level (Fig. 3). A clearly differential expression was observed. The expression of HBD-2 increased 145-fold already at day 1 and then gradually decreased to that of unexposed RHG at day 7, and HBD-3 expression peaked at day 2 (6.3-fold increase) and then decreased sharply to levels observed in unexposed RHG at day 4. The high increase in HBD-2 mRNA corresponded to an increase in protein expression (Fig. 1b), in line with HBD-2 expression in native gingiva (Fig. 1a). In contrast, the relative mRNA expression of HBD-1, adrenomedullin (ADM) and cathelicidin (LL-37) was not significantly increased. However, the basal mRNA expression level of these AMPs in unexposed RHG was already much higher than that of the housekeeping gene HPRT1. For example on day 1,
Figure 1. Histology (hematoxylin and eosin staining, H&E) and immunohistochemical staining with antibodies against PCNA, Ki67, Elafin and HBD-2 is shown. (b) RHG were exposed topically (surface application) to either multi-species biofilm or control medium without biofilm and harvested 1 or 7 days hereafter. Histology (H&E) shows thicker epithelium at day 7 in RHG exposed to biofilm compared to control RHG. Immunohistochemistry shows increased proliferation (PCNA, Ki67), Elafin and HBD-2 in biofilm exposed RHG compared to control RHG at day 7.
Figure 2. Comparison of biofilm exposed RHG with unexposed RHG over a 7 day exposure period. (a) Epithelial thickness, as determined from H&E stained sections, is increased in biofilm exposed RHG at day 7. (b,c) Number of PCNA and Ki67 positive staining cells per mm² epithelium, as determined from immunohistochemical staining of paraffin embedded tissue sections, is shown: the number of PCNA-positive cells is higher in biofilm exposed RHG at day 1; biofilm exposed RHG maintain a constant number of Ki67 positive staining cells whereas control RHG senesce. (d) Elafin ELISA shows increased Elafin secretion into biofilm exposed RHG culture supernatant compared to control RHG. Open bar = control medium without biofilm exposed RHG; black bar = multi-species biofilm exposed RHG. Tissue samples were analyzed after 1, 2, 4 and 7 days biofilm exposure. Data represent the average of three independent experiments, each with an intra-experiment duplicate ± SEM; *p < 0.05; **p < 0.01; ***p < 0.001; unpaired t-test for comparison between exposed group and unexposed group and 2-way ANOVA followed by Bonferroni's multiple comparison for comparison between time and treatment.
Figure 3. Differential response of RHG antimicrobial peptides to multi-species biofilm during a 7 day exposure period. Real time PCR analysis of mRNA transcripts for HBD1-3, ADM and LL-37. Results are shown normalized to housekeeping gene HPRT1. Open bar = control medium without biofilm exposed RHG; black bar = biofilm exposed RHG. Tissue samples were analyzed after 1, 2, 4 and 7 days biofilm exposure. Data represent the average of three independent experiments, each with an intra-experiment duplicate ± SEM; ***p < 0.001; ****p < 0.0001; unpaired t-test for comparison between exposed group and unexposed group and 2-way ANOVA followed by Bonferroni's multiple comparison for comparison between days.
Next the influence of biofilm on RHG inflammatory and antimicrobial cytokine secretion was determined (Fig. 4). Our results show that biofilm stimulated an immediate (within 24 hours) and prolonged (up to 7 days) secretion of IL-6, CXCL8, CXCL1 and CCL20 by RHG, thus increasing RHG resistance to potential pathogens. Biofilm (10⁷ CFU/RHG sample) was applied topically to the RHG. Already after 1 day of exposure, the number of CFUs which could be retrieved from the RHG was substantially lower, indicating a rapid decrease in viability with time (Table 1). Notably, FISH staining of bacterial rRNA was observed within the viable epithelial layers, indicating that a few bacteria had penetrated to a certain extent into the tissue (Fig. 5). However, FISH staining does not distinguish live from dead bacteria at the time of RHG harvesting. Most importantly, even though viable bacteria were not detected after 2 days, the effects on the RHG were observed after 1 day and were even more pronounced after 7 days. This indicates that a single biofilm exposure, independent of whether or not the bacteria remain viable, is sufficient to stimulate long lasting effects on gingiva epithelial barrier properties in RHG.
Discussion
In this manuscript we show for the first time that multi-species oral biofilm has a beneficial effect on the host tissue by contributing to the unique physiological barrier properties of the epithelium found within the oral cavity. It has long been known that oral mucosa has a higher turnover and an increased thickness compared to, for example, skin 18 . However, the underlying mechanisms contributing to this were until now unknown. By using the organotypic RHG we were able to determine that oral bacteria representative of multi-species commensal biofilm clearly contribute to mucosa tissue integrity by increasing proliferation and stratification. Furthermore, we could show that the biofilm can prime the tissue to protect against potential assault from pathogens by increasing multiple antimicrobial peptides and cytokine secretion over a prolonged period of time (7 day study period). To our knowledge, there are no studies which investigate host-microbe interactions in vitro for more than 72 hours, and hence no studies which were able to describe long term effects on host tissue integrity.
Our results show that exposure of RHG to the biofilm, which closely represented microbiota found in healthy saliva 25 , actively contributed to the increased epithelial thickness characteristic of oral mucosa. This finding can be explained by the observation that biofilm stimulated keratinocyte proliferation (number of Ki67 and PCNA positive cells) thus preventing the cell senescence which was observed in unexposed RHG at day 7. This would explain previous observations by us and others where sterile RHG maintained a relatively thinner epithelium with few proliferating basal keratinocytes, which was more comparable to skin epidermis than the relatively thicker gingiva epithelium 15,18,26,27 . It is possible that the moderate degree of inflammation caused by microbes is enough to stimulate an innate immune response which will result in secretion of inflammatory cytokines which also have mitogenic properties. Indeed, our results did clearly show that exposure to biofilm resulted in an immediate (within 24 hours) and prolonged (up to 7 days) inflammatory response by increasing secretion of IL-6, CXCL8, CXCL1 and CCL20. IL-6 and CXCL1 have been shown to be involved in epithelial cell proliferation and migration 21,28 .
In addition to stimulating epithelial proliferation and stratification, we found that biofilm induced an antimicrobial protective response in the gingiva epithelium, showing a selective increase in the protein expression of Elafin, and in the mRNA expression of HBD-2 and HBD-3 but not HBD-1, ADM or LL-37. In healthy gingiva the modest level of antimicrobial peptides, cytokines and chemokines induced by biofilm may be considered to be a strategy of the host to maintain homeostasis. The host tissue, immune system and complement system will therefore be primed and kept in an activated state against potential pathogens 29,30 . Elafin, an endogenous protease inhibitor which was upregulated in our biofilm exposed RHG, plays a critical role in homeostasis by preventing tissue damage from excessive proteolytic enzyme activity during inflammation 31,32 . Healthy subjects have been reported to exhibit higher Elafin levels compared to periodontitis patients 33 . Two AMPs, HBD-2 and HBD-3, were found to be expressed constitutively in normal oral tissue in healthy people without localized inflammation 24,34 and could be induced by resident bacteria in gingival cells in vitro 20,35,36 . HBD-2 was also suggested to be correlated with cellular differentiation in human gingiva epithelial cells 37 . The reason that HBD-1, ADM and LL-37 were not up-regulated in biofilm exposed RHG could be due to the absence of cell types responsible for their induction in the current RHG model. For example, LL-37 has been described to be produced by neutrophils as well as epithelial cells in the oral cavity 20,24 , the former of which are not yet incorporated into our model. Alternatively, these AMPs may only be induced above baseline expression in response to pathogenic biofilm rather than commensal biofilm. Our finding that biofilm resulted in increased secretion of cytokines with inflammatory as well as antimicrobial properties (IL-6, CXCL1, CXCL8 and CCL20) is in line with our previous study in which we showed that commensal biofilm stimulated a stronger innate immune response than gingivitis and cariogenic biofilm when exposed to RHG for 24 hours 16,38 . Here we show that this is a prolonged cytokine response for at least 7 days. Our in vitro results are in line with others who showed that indicator bacteria in healthy oral microbiota were associated with high basal levels of CXCL8 release from gingival epithelial cells obtained from healthy individuals 39 . Also, secretion of IL-6 and CXCL8 by cultured keratinocytes was shown to be promoted by less-pathogenic single-species bacteria 40 and healthy oral microbiota 39 , but inhibited by toxic challenges, e.g. live Porphyromonas gingivalis 40,41 .
Notably, the pronounced effect on epithelial barrier properties was observed for up to 7 days after a single biofilm exposure even though the number of viable bacteria greatly decreased within the first day of exposure. This would indicate that a single bacterial trigger is enough to result in a long lasting effect, or that dead bacteria on the upper surface of the epithelium are still able to trigger a response. There is accumulating evidence which suggests that host-microbiome responses are associated with cellular signaling pathways such as the mitogen-activated protein kinase (MAPK) pathways, which would suggest that viable bacteria are not required 42,43 . However, whether lipopolysaccharide alone would result in the observed epithelial phenotypic changes needs further investigation.
Figure 4. Biofilm results in a prolonged increase in cytokine and chemokine secretion from RHGs. The secretion of IL-6, CXCL8, CXCL1 and CCL20 from the lower side of RHG into culture supernatants was analyzed by ELISA. Open bar = control medium without biofilm exposed RHG; black bar = biofilm exposed RHG. Culture supernatants were analyzed after 1, 2, 4 and 7 days biofilm exposure. Data represent the average of three independent experiments, each with an intra-experiment duplicate ± SEM; *p < 0.05; **p < 0.01; ***p < 0.001; unpaired t-test for comparison between exposed group and unexposed group and 2-way ANOVA followed by Bonferroni's multiple comparison for comparison between days.
The limitations of our study should also be noted. The biofilm was cultured from pooled healthy saliva in such a way that it maintained similar phenotypic features to the in vivo oral microbiota 16 . However, due to the methodology used to create enough biofilm to expose RHG in a reproducible manner, the intact structure of the preformed biofilm was inevitably disrupted. The most probable explanation for the loss of viability of the biofilm is that RHG were cultured under aerobic conditions whereas the biofilm used in these experiments prefers anaerobic culture conditions. Thus applying biofilm to the surface of RHG followed by culturing under aerobic conditions would be expected to result in the observed decrease in CFUs. Although challenging, in the future anaerobic biofilm conditions should be optimized for RHG aerobic exposure. Alternatively, the antimicrobial responses elicited in the RHGs, which showed significantly higher levels than in unexposed RHG, may have contributed to the decrease in viability of the biofilm. Our results could also possibly be explained in part by a selective sub-set of survivors derived from the original biofilm. However, it is beyond the scope of this manuscript to isolate and characterize the FISH positive invading bacteria. Furthermore, FISH (bacterial rRNA staining) does not guarantee viable bacteria in the epithelium since DNA can be isolated from dead bacteria. Another limitation of our study is the lack of immune cells. However, we consider it important to introduce complexity where complexity is required. In the present study we aimed to determine whether biofilm had a beneficial influence on oral mucosa tissue integrity, and in particular directly on the epithelium. Therefore the experimental design was kept relatively simple. Indeed, in future studies we will introduce Langerhans Cells in a similar manner to our skin models, as Langerhans Cells are key antigen presenting cells in sampling pathogens 44 . Furthermore, additional cell types such as monocytes and neutrophils will be added in order to further compare commensal and pathogen host responses.
Table 1. Viable bacterial cell counts of colony forming units (CFUs). a Data are represented as mean ± standard deviation, n = 6. b Day 0: CFUs determined on Day 0, before being applied onto RHGs. c ND = not detectable (counts below the detection limit). d Day 1-7: CFUs determined after tissue dissociation.
In conclusion, we show that in the presence of the biofilm, RHG developed both morphological and functional features similar to those of native gingiva, indicating that the healthy multi-species biofilm, to a certain extent, promotes a positive symbiosis in the host. Our results highlight the contribution of multi-species biofilm in promoting gingiva epithelium barrier integrity in vitro, therefore providing new insights and possibilities for studying host-microbe interactions.
Methods
Healthy native gingiva. Healthy human gingiva tissue was obtained after informed consent from patients undergoing wisdom tooth extraction as previously described 45 .
Oral bacteria representative of multi-species commensal biofilm: pooled human saliva from 10 healthy donors was used as inoculum for multi-species biofilm as previously described 25 . The 10 donors were considered healthy since they had no complaints which required treatment by a dental specialist. The saliva was collected following the ethical principles of the 64th World Medical Association Declaration of Helsinki and the procedures approved by the institutional review board of the VU University Medical Centre (Amsterdam, The Netherlands). Informed consent was obtained from all participants. The biofilms were formed in the Amsterdam active attachment model (AAA-model 48 ). The anaerobic colony forming units (CFU) of the biofilm were assessed as a measure of viable bacterial cell counts before use. Aliquots were frozen at −80 °C until use.
Biofilm application to RHG: The stored biofilm was thawed on ice, centrifuged and dispersed in Hanks' balanced salt solution (Sigma-Aldrich) 49 . A sample of biofilm was processed to determine CFU at Day 0. The remaining biofilm was used to apply to the upper surface of RHG as follows: RHG were topically exposed to approximately 1 × 10 7 CFU biofilm cells concentrated in a drop of 10 µl, further cultured at the air-liquid interface at 37 °C, 7.5% CO 2 and 95% humidity and harvested 1, 2, 4, or 7 days hereafter. The RHGs were divided into two halves for conventional paraffin embedment or determination of CFUs.
FISH rRNA in situ hybridization was performed on paraffin sections according to the FISH kit instructions (10MEH000; Ribo Technologies, Groningen, The Netherlands) with the probe EUB338 (5' -GCTGCCTCCCGTAGGAGT). After the staining procedure, the sections were counterstained with DAPI and sealed using a mounting reagent (Fluoroshield, Abcam, Cambridge, UK).
ELISA. Culture supernatants from RHG were collected at the time of harvesting and used to detect levels of IL-6, CXCX1, CXCL8, CCL20 and Elafin using enzyme-linked immunosorbent assays (ELISAs). With the exception of CXCL8 (Sanquin, Amsterdam, The Netherlands) and Elafin (PI3 Human ELISA Kit, Thermo Fisher Scientific, Maryland, USA) where ELISA kits were used, antibodies and recombinant proteins were purchased from R&D Systems, Inc. (Mineapolis, USA) and ELISAs performed according to recommendations of the supplier.
Real-time PCR was performed using RT² SYBR® Green qPCR Mastermixes (Qiagen) with paired primers for human beta defensin 1-3 (HBD 1-3; HP208395, HP208178, HP213186), adrenomedullin (ADM; HP205068), cathelicidin antimicrobial peptide (CAMP; HP207673) or housekeeping gene HPRT1 (HP200179), all purchased from OriGene Technologies, Rockville, USA. Briefly, 2 µl cDNA was added to 1 µl paired primer, 9.5 µl nuclease-free water and 12.5 µl of SYBR green mastermix. The cycle threshold value was defined as the number of PCR cycles where the fluorescence signal exceeds the detection threshold value. Normalized by the expression of housekeeping gene HPRT1, the targeted mRNA induction was calculated by the ∆∆C_T analysis method following the formula: fold induction = 2^−∆∆CT.
Viable bacterial cell counts. To determine the biofilm viability at different co-culture time points, total CFUs were counted. For Day 0, serial dilutions of the dispersed biofilm were made and plated on tryptic soy blood agar plates. For Days 1-7, RHG were dissociated using a tissue dissociator (gentleMACS, Miltenyi Biotec B.V., The Netherlands), followed by sonication and plating on tryptic soy blood agar plates. The plates were subsequently incubated anaerobically for 7 days at 37 °C and the CFUs were counted.
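For readers unfamiliar with the ∆∆C_T arithmetic, a minimal sketch is given below; the C_T numbers are invented for illustration and are not data from this study.

```python
# Hypothetical cycle-threshold (Ct) values, for illustration only
ct_exposed = {"HBD2": 24.0, "HPRT1": 27.0}   # biofilm-exposed RHG
ct_control = {"HBD2": 31.0, "HPRT1": 27.5}   # unexposed RHG

def fold_induction(target, housekeeping, exposed, control):
    d_ct_exposed = exposed[target] - exposed[housekeeping]   # normalize target to HPRT1
    d_ct_control = control[target] - control[housekeeping]
    dd_ct = d_ct_exposed - d_ct_control                      # delta-delta Ct
    return 2.0 ** (-dd_ct)                                   # fold induction = 2^(-ddCt)

print(fold_induction("HBD2", "HPRT1", ct_exposed, ct_control))   # ~90-fold in this made-up example
```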
Statistics. Statistical analysis was performed with SPSS Statistics (version 23). RHG data were collected from at least three individual experiments, each with an intra-experiment duplicate. Thickness, PCNA, Ki67, Elafin and cytokine secretion were analyzed using an unpaired t-test (between exposed RHG and unexposed RHG on Days 1, 2, 4 and 7). Comparisons across time and treatment for thickness, PCNA, Ki67, Elafin, cytokine secretion and mRNA expression were analyzed using 2-way ANOVA followed by Bonferroni's multiple comparison. Differences were considered significant when p < 0.05. Data are represented as mean ± standard error of the mean; *p < 0.05; **p < 0.01; ***p < 0.001; ****p < 0.0001.
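The study used SPSS; purely as an illustration of the same kind of test, the sketch below runs an unpaired t-test on simulated values with scipy and applies a simple Bonferroni adjustment across time points (all numbers and group sizes are invented, not study data).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(loc=60, scale=5, size=6)    # hypothetical day-7 measurements, unexposed RHG
exposed = rng.normal(loc=80, scale=5, size=6)    # hypothetical day-7 measurements, biofilm-exposed RHG

t, p = stats.ttest_ind(exposed, control)         # unpaired t-test between the two groups
n_timepoints = 4                                 # days 1, 2, 4 and 7
p_bonferroni = min(p * n_timepoints, 1.0)        # crude Bonferroni adjustment across time points
print(f"t = {t:.2f}, p = {p:.4f}, Bonferroni-adjusted p = {p_bonferroni:.4f}")
```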
Data Availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request. |
18651600 | s2orc/train | v2 | 2016-05-12T22:15:10.714Z | 2015-04-10T00:00:00.000Z | Spatial patterns of carbon, biodiversity, deforestation threat, and REDD+ projects in Indonesia
There are concerns that Reduced Emissions from Deforestation and forest Degradation (REDD+) may fail to deliver potential biodiversity cobenefits if it is focused on high carbon areas. We explored the spatial overlaps between carbon stocks, biodiversity, projected deforestation threats, and the location of REDD+ projects in Indonesia, a tropical country at the forefront of REDD+ development. For biodiversity, we assembled data on the distribution of terrestrial vertebrates (ranges of amphibians, mammals, birds, reptiles) and plants (species distribution models for 8 families). We then investigated congruence between different measures of biodiversity richness and carbon stocks at the national and subnational scales. Finally, we mapped active REDD+ projects and investigated the carbon density and potential biodiversity richness and modeled deforestation pressures within these forests relative to protected areas and unprotected forests. There was little internal overlap among the different hotspots (richest 10% of cells) of species richness. There was also no consistent spatial congruence between carbon stocks and the biodiversity measures: a weak negative correlation at the national scale masked highly variable and nonlinear relationships island by island. Current REDD+ projects were preferentially located in areas with higher total species richness and threatened species richness but lower carbon densities than protected areas and unprotected forests. Although a quarter of the total area of these REDD+ projects is under relatively high deforestation pressure, the majority of the REDD+ area is not. In Indonesia at least, first-generation REDD+ projects are located where they are likely to deliver biodiversity benefits. However, if REDD+ is to deliver additional gains for climate and biodiversity, projects will need to focus on forests with the highest threat to deforestation, which will have cost implications for future REDD+ implementation.
Introduction
There has been a lot of interest in the potential of forest carbon sequestration projects such as those being discussed under the climate mechanism to Reduce Emissions from Deforestation and forest Degradation (REDD+) to deliver benefits for biodiversity. Under the proposed mechanism, REDD+ payments are intended to protect threatened tropical forests by providing economic incentives for continued forest integrity (Venter & Koh 2011). The plus in REDD+ expands the scope to include the conservation, sustainable management, and enhancement of forest carbon stocks as means to reduce emissions from deforestation and forest degradation (UNFCCC 2008). Some argue that REDD+ offers "unprecedented" opportunities for biodiversity (Gardner et al. 2012) and provides new funding for conservation (Venter et al. 2009), rehabilitation of critical habitat (Alexander et al. 2011), and the establishment of new protected areas (PAs) (Macdonald et al. 2011). However, many have also drawn attention to potential risks for biodiversity that are associated with preferential targeting of REDD+ projects in high carbon areas, such as displacement of land use pressure (leakage) into high biodiversity but low carbon areas (Harrison & Paoli 2012) and the diversion of funds for forest conservation away from high biodiversity low carbon areas (Phelps et al. 2012).
The degree to which carbon and biodiversity services are colocated in the landscape will influence the potential for delivery of biodiversity benefits; more opportunities are expected where there is congruence between high carbon and biodiversity stocks (Strassburg et al. 2010). There are strong synergies between carbon and biodiversity at the global level (Strassburg et al. 2010). National scale analyses, particularly important for planning REDD+ as an intergovernmental mechanism (Gardner et al. 2012), have been variable in quality and provide ambiguous results. National-level analyses (Madagascar and Bolivia) with finer scale biodiversity data show little congruence between carbon and biodiversity (Wendland et al. 2010; Sangermano et al. 2012). However, the additional gains from REDD+ for carbon, biodiversity, and other ecosystem services depend on spatially specific threats of deforestation and forest degradation (Busch & Grantham 2013), yet few, if any, analyses have included both spatial congruence and deforestation threat.
Indonesia is the third largest tropical forest country, a major contributor to global greenhouse gas emissions from deforestation, forest and peat degradation (Margono et al. 2014), and a mega-biodiversity country (Sodhi et al. 2004). Indonesia has made commitments to reduce emissions (GOI 2012) and received significant donor funding for REDD+ implementation . We assessed the distribution of biodiversity in Indonesia, using species ranges of terrestrial vertebrates (mammals, birds, reptiles, and amphibians) and species distribution models (SDMs) covering 8 plant families which are available for Sundaland only. We explored the congruence between carbon and biodiversity based on 3 measures of richness. We then assessed the location of REDD+ projects relative to deforestation threats and spatially determined potential for these to deliver positive outcomes for carbon and biodiversity.
Data
Our biodiversity analyses were based on recently updated global species range data for the distribution of mammals, reptiles, and amphibians (IUCN 2012), birds (BirdLife International and NatureServe 2012), and SDMs for 8 major plant families (Dipterocarpaceae, Ericaceae, Fagaceae, Lauraceae, Moraceae, Myristicaceae, Sapindaceae, and Leguminosae) in Sundaland. Details on the biodiversity data sets we used are in Supporting Information.
We used newly available high-resolution carbon data sets for above ground biomass (AGB) (Baccini et al. 2012) and soil organic carbon (SOC) up to 100 cm depth (Hiederer & Köchy 2012).
A database of active REDD+ projects in Indonesia was developed for the purpose of this research. We contacted all known REDD+ project developers in Indonesia via email to identify active projects, their central coordinates, and the project size. We achieved a 72% response rate and filled in gaps with best guesses based on available gray literature and Web-based reports. We mapped the location of individual projects based on known project boundaries (n = 22), district boundaries for district level projects (n = 3), and circular boundaries for projects for which we did not have exact boundary information (n = 11). For the circular boundaries, we drew a circle around the project centroid, the diameter of which was based on information about project area provided by project developers. See Supporting Information for details on the REDD+ database.
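A minimal sketch of how such a circular boundary could be constructed from a centroid and a reported project area (assuming shapely is available and coordinates are in a metric projection; the coordinates and area below are hypothetical, not taken from the project database):

```python
import math
from shapely.geometry import Point

centroid_x, centroid_y = 500_000.0, 9_500_000.0   # hypothetical project centroid (m, projected CRS)
project_area_ha = 120_000.0                       # hypothetical reported project area (ha)

radius_m = math.sqrt(project_area_ha * 10_000.0 / math.pi)   # circle with the reported area
boundary = Point(centroid_x, centroid_y).buffer(radius_m)    # circular project polygon
print(f"radius = {radius_m / 1000:.1f} km, polygon area = {boundary.area / 10_000:,.0f} ha")
```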
The PA data set for Indonesia was obtained from the newly updated World Database on PAs (IUCN & UNEP-WCMC 2013). We included PAs in categories I-VI and nationally recognized PAs without an IUCN category (280 in total).
We used the econometric model OSIRIS-Indonesia developed by Busch et al. (2010) to predict deforestation in the absence of REDD+ carbon incentives. The model predicts deforestation based on estimated potential gross agricultural revenues and the cost of converting land from forest to agriculture.
Analyses
Data sets were analyzed at 5 km × 5 km resolution in the WGS 1984 World Mercator projection. We clipped global data sets to the Indonesian Archipelago (total terrestrial land areas), which covers 79,555 terrestrial cells. (Supporting Information for additional information on the spatial analysis methods.) Species distribution analyses were based on the polygon vector ranges of 367 amphibian, 281 reptile, 665 mammal, 1559 bird species, and SDMs of 1720 plant species. Following Wang et al. (2013), we calculated species richness as the number of species range polygons that intersect each grid cell. We used 3 measures of species richness: total species, threatened species, and restricted range species.
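As an illustration of this richness calculation (a sketch assuming a recent geopandas; the file names and column names are hypothetical), richness per cell can be obtained by counting how many species range polygons intersect each grid cell:

```python
import geopandas as gpd

grid = gpd.read_file("grid_5km.gpkg")           # 5 km x 5 km cells, assumed to have a 'cell_id' column
ranges = gpd.read_file("species_ranges.gpkg")   # one polygon per species, assumed 'species' column

# count the species whose range polygon intersects each grid cell
joined = gpd.sjoin(grid, ranges, how="inner", predicate="intersects")
richness = joined.groupby("cell_id")["species"].nunique().rename("richness")

grid = grid.merge(richness, on="cell_id", how="left").fillna({"richness": 0})
print(grid[["cell_id", "richness"]].head())
```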
Threatened species were those classified by the IUCN (2012) as critically endangered, endangered, and vulnerable. Restricted range species were species with a global range in the lowest quartile of their range class (Orme et al. 2005;Grenyer et al. 2006). Species richness of threatened and restricted range species was analyzed only for vertebrates. We identified the richest grid cells (hereafter hotspots [Orme et al. 2005]) for each richness measure for vertebrates and plants (total richness for Sundaland only). We explored the degree to which hotspots overlapped when defined as the richest 10% of cells and the effects of using different hotspot definitions (richest 5%, 10%, 15%, and 25%). We found that regardless of the definition used, there was no overlap between hotspots identified based on different measures of species richness. (Details in Supporting Information.) Indonesian islands differ in size, isolation, topography, climate, and geology, which results in very different island mean biodiversity and carbon values. We therefore investigated congruence at 3 levels-national and within the 5 major islands (Sumatra, Borneo, Papua, Sulawesi, and Java)-to investigate if national scale patterns are consistent within islands. We selected AGB and SOC up to 100 cm depth based on findings that when congruence was evaluated at 3 soil depths (0, 30, and 100 cm), SOC depth had a clear effect on the congruence patterns, particularly in areas identified as carbon-rich peat swamp forests (see Supporting Information).
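A minimal sketch of the hotspot definition and of the overlap check between two richness measures, using synthetic numbers only:

```python
import numpy as np

rng = np.random.default_rng(2)
total_richness = rng.poisson(200, size=10_000)        # synthetic per-cell total richness
threatened_richness = rng.poisson(20, size=10_000)    # synthetic per-cell threatened richness

def hotspot_cells(values, top_fraction=0.10):
    """Indices of the richest cells, e.g. the richest 10%."""
    threshold = np.quantile(values, 1.0 - top_fraction)
    return set(np.flatnonzero(values >= threshold))

h_total = hotspot_cells(total_richness)
h_threat = hotspot_cells(threatened_richness)
shared = len(h_total & h_threat) / min(len(h_total), len(h_threat))
print(f"fraction of hotspot cells shared between the two measures: {shared:.2f}")
```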
Congruence between carbon and the 3 measures of biodiversity richness was assessed using Spearman's rank correlation coefficient; the effective degrees of freedom were corrected for the level of spatial autocorrelation in the data following Dutilleul (1993). We used hexagonal binning (an aesthetic mapping technique that shows differences between data-rich and data-sparse parts of the distribution) to visualize the relationship between carbon and biodiversity and fitted a generalized additive model with 95% CIs. All statistical analyses were carried out in R statistical software (R Core Team 2014). Congruence maps were developed in ArcGIS 10.1 with the RGB composite band tool.
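The study ran these analyses in R; the following Python sketch illustrates the same two ingredients, a Spearman rank correlation and a hexagonal-binning plot, on synthetic values (the Dutilleul correction for spatial autocorrelation and the GAM fit are not reproduced here, and all numbers are simulated).

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
carbon = rng.gamma(shape=2.0, scale=150.0, size=5_000)       # t CO2/ha per cell, synthetic
richness = rng.poisson(lam=200, size=5_000).astype(float)    # species per cell, synthetic

rho, p = spearmanr(carbon, richness)
print(f"Spearman rho = {rho:.3f} (p-value here ignores spatial autocorrelation)")

fig, ax = plt.subplots()
hb = ax.hexbin(carbon, richness, gridsize=40, mincnt=1)      # hexagonal binning of the scatter
ax.set_xlabel("carbon density (t CO2/ha)")
ax.set_ylabel("species richness per cell")
fig.colorbar(hb, label="cells per hexagon")
fig.savefig("carbon_vs_richness_hexbin.png", dpi=150)
```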
We assessed the distance and overlap between REDD+ projects (centroid) and PAs (polygon) with the near function in ArcGIS 10.1. We explored the distribution of carbon and biodiversity in Indonesia for 3 categories of forested areas: REDD+ project areas, PAs, and other unprotected forest (outside REDD+ projects and PAs). We defined forest as those pixels comprising mangrove, peat swamp forest, lowland forest, lower montane forest, upper montane forest, and plantation or regrowth as in Miettinen et al. (2012). We sampled 1000 random points from all 3 forest categories and compared the means of the 3 groups with analysis of variance followed by a post hoc Tukey's honestly significant difference test to determine categories that were significantly different.
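To illustrate the comparison of the three forest categories (a sketch on simulated values, not the study data), a one-way ANOVA followed by Tukey's HSD can be run as follows; the means, spreads and sample size of 1000 points per category are assumptions made only for the example.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(4)
redd = rng.normal(430, 120, 1000)          # hypothetical carbon densities (t CO2/ha), REDD+ forests
pa = rng.normal(490, 120, 1000)            # hypothetical carbon densities, protected areas
unprotected = rng.normal(450, 120, 1000)   # hypothetical carbon densities, unprotected forests

print(f_oneway(redd, pa, unprotected))     # overall analysis of variance

values = np.concatenate([redd, pa, unprotected])
groups = ["REDD+"] * 1000 + ["PA"] * 1000 + ["unprotected"] * 1000
print(pairwise_tukeyhsd(values, groups))   # post hoc test: which category pairs differ
```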
The modeled deforestation data from OSIRIS-Indonesia version 1.5 showing predicted deforestation in the absence of a REDD+ mechanism (Busch et al. 2010) was exported into ArcGIS 10.1 and resampled to 25 km² grid cells (from 9 km²). We calculated predicted deforestation per hectare for all grid cells classed as forest in 2010. We extracted predicted deforestation values (percent) for each forested cell and reclassified these into 5 deforestation threat classes (very low to very high) based on natural breaks. Using the zonal statistic function in ArcGIS, we calculated the proportion of REDD+ project area, PAs, and unprotected forests that fell into each deforestation class.
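A sketch of this reclassification and zonal summary on synthetic values is shown below. The study used natural (Jenks) breaks in ArcGIS; here simple quantile breaks are used as a stand-in, and the deforestation values and category labels are simulated.

```python
import numpy as np

rng = np.random.default_rng(5)
deforestation = rng.beta(0.5, 8.0, size=20_000) * 36.0          # synthetic predicted deforestation (%)
category = rng.choice(["REDD+", "PA", "unprotected"], size=20_000, p=[0.1, 0.2, 0.7])

# stand-in for the natural-breaks classification: five classes from quantile breaks
breaks = np.quantile(deforestation, [0.2, 0.4, 0.6, 0.8])
threat_class = np.digitize(deforestation, breaks)               # 0 = very low ... 4 = very high

# proportion of each forest category falling into each threat class (zonal summary)
for cat in ["REDD+", "PA", "unprotected"]:
    counts = np.bincount(threat_class[category == cat], minlength=5)
    print(cat, np.round(counts / counts.sum(), 3))
```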
Limitations
Our analyses relied on available data sets, such as vertebrate vector range maps, which tend to overestimate the likelihood of species occurrence. Some species will be absent in fragments, logged forests, and recently deforested areas. We dealt with this by refining the species range maps and confining our analyses to the remaining forest area, based on a 2012 forest cover map, as suggested by Jenkins et al. (2013). We also assumed that most species persist in logged or secondary forests based on the large body of literature which supports this (e.g. Sitompul et al. 2013; Struebig et al. 2013; Edwards et al. 2014). Our projected deforestation threat was based on econometric modeling. The results are therefore a scenario-specific prediction of where threats are most likely to occur given the defined model assumptions. The model predicts deforestation based on the conversion of forest land to agriculture.
Patterns of Biodiversity Distribution
Patterns of potential species richness were highly variable from taxon to taxon and depended strongly on the richness measure used. For total species richness, the highest potential vertebrate species richness was in Sumatra; lower potential species richness was to the East of Wallace's Line in Sulawesi and Papua (Fig. 1a). When both plant and vertebrate data were combined (possible for Sundaland only), the highest total richness shifted from lowland Sumatra to lowland Kalimantan (Fig. 1d), and the northern tip of Kalimantan had the highest total potential species richness (>1270 species in a single cell). Threatened vertebrate species richness was distributed differently. The highest potential richness was concentrated in coastal lowlands of Sumatra and submontane regions of Kalimantan (Fig. 1b), whereas Papua had the lowest potential threatened species richness. Potential restricted range species richness was mostly concentrated in the uplands (Java, Sulawesi, and Papua) and the smaller islands of Buru, Seram, and Halmahera in the Wallacea ecoregion (Fig. 1c). Richness patterns for individual taxa and measures of biodiversity are in Supporting Information.
Hotspots of biodiversity richness identified based on different measures did not generally overlap, further emphasizing that the identification of areas important for biodiversity depended on the measure used (Fig. 1). For example, when hotspots were defined as the richest 10% of cells, no cells were identified as hotspots for all 3 measures of total species richness (vertebrates and plants). Supporting Information contains additional information on the effects of using different hotspot definitions (5%, 10%, 15%, and 25%).
Congruence between Carbon and Biodiversity
At the national scale, there was some evidence of a negative relationship between organic carbon stock and all 3 measures of terrestrial vertebrate richness (Table 1, Fig. 2). This negative relationship was significant at the 5% level for threatened species richness and restricted range species richness but was not significant for total species richness. However, this relationship did not hold when analyzed for islands independently (Table 1, Fig. 2).
The relationship between carbon density and total species richness was either not significant or only weakly correlated for each of the major islands. With the inclusion of plants, results showed a strong negative relationship between carbon and overall species richness in Kalimantan (r s = −0.306, p < 0.001), and Sumatra (r s = −0.516, p < 0.001) (Table 1, Fig. 2d). This result reflected the fact that peat swamp forests store very large amounts of carbon but do not have particularly high overall plant species richness.
The relationship between carbon density and threatened species richness was neither strong nor monotonic in any of the 4 major islands (Table 1, Fig. 2b). The relationship was strongest in Java, where the correlation was broadly positive (r s = 0.29, p < 0.001). Montane regions of Kalimantan and Papua coincided with the highest concentrations of restricted range vertebrate species (Fig 2c); however, these regions have relatively low carbon densities. Thus, a generally negative relationship between carbon and restricted range species richness was evident in Kalimantan (r s = −0.075, p = 0.016) and Papua (r s = −0.222, p < 0.001) (Table 2, Fig. 2c). The opposite trend was evident on Java (r s = 0.61, p < 0.001), where there was a nearly monotonic positive relationship between carbon and restricted range vertebrate species (Table 2, Fig. 2c), both of which are confined to remaining upland forests. The relationship between each measure of species richness and carbon was also greatly influenced by which taxa were included in the analyses; for example, restricted range birds (r s = 0.636, p < 0.001) and mammals (r s = 0.49, p < 0.001) in Java had strong positive correlation with carbon, whereas plants had a strong negative correlation with carbon in Sumatra (Supporting Information).
Carbon, Biodiversity, and Deforestation Threat
We identified 36 active REDD+ projects in 15 provinces of Indonesia (25 projects reported as no longer active). Projects varied in size from site-level activities to those operating at the district or subprovince level. Over half (53%) of the project developers were conservation nongovernmental organizations (NGOs), 33% were private for-profit organizations, and 17% were projects established in collaboration with the Indonesian government or bilateral agencies. At least 25% of REDD+ project centroids overlapped with the boundaries of PAs (Supporting Information).
The REDD+ forests tended to have, on average, lower carbon densities (mean = 433.5 t CO₂/ha) than PAs (mean = 493.2 t CO₂/ha) and unprotected forests in Indonesia (mean = 447.6 t CO₂/ha) (Fig. 3a). Mean carbon density did not differ significantly between REDD+ projects and unprotected forests (F = 17.39 on 2877 df, p = 3.1 × 10⁻⁸) (Supporting Information). The REDD+ projects had significantly higher potential total vertebrate species richness (F = 130.2 on 2966 df, p = 2 × 10⁻¹⁶) and threatened species richness (F = 152. (Supporting Information). Restricted range species showed a very different pattern; REDD+ projects and unprotected forests had on average lower potential species richness per cell than PAs (F = 17.2 on 1631 df, p = 4.07 × 10⁻⁸) (Fig. 3d) (Supporting Information). At least 23% (or 2.9 million ha) of the area of REDD+ projects was located in forests that had medium to high predicted deforestation threat, whereas 11% (or 2 million ha) of PA and 21% (or 20 million ha) of unprotected forest were under this level of threat. Forests currently not protected by REDD+ or PAs had a much larger area exposed to high deforestation threats; 1 million ha were predicted to be under very high deforestation threat (10-36% deforestation/ha) (Table 2).
Potential Biodiversity Cobenefits from REDD+
We found that patterns of biodiversity identified depended on the measure of biodiversity used; therefore, the protection of forests with the highest species richness (in Sumatra) may not protect forests with the highest number of threatened species (Kalimantan and coastal Sumatra) or restricted range species (highlands and small islands). Patterns of species richness were also highly variable between taxa, as has been demonstrated globally (Grenyer et al. 2006;Jenkins et al. 2013). Therefore, it is not possible for REDD+ projects to be located in such a way as to be good for all measures of biodiversity simultaneously.
We found no clear and consistent relationship between carbon and any of our proxy measures of biodiversity in Indonesia, there was a weak negative relationship at the national scale, but relationships within islands were sometimes weakly positive, sometimes nonexistent, and sometimes strongly negative. The lack of a clear relationship between carbon and species richness has also been found in South Africa (Egoh et al. 2009) and Madagascar (Wendland et al. 2010). This is perhaps not surprising because of the fundamental ecological differences (definition and substitutability) between carbon and biodiversity (Potts et al. 2013). There are concerns that a lack of congruence between carbon and biodiversity could result in REDD+ investments focusing on high carbon areas which will put biodiversity at risk (Venter et al. 2009;Harrison & Paoli 2012). Although we did not find congruence between carbon stock densities and biodiversity richness in Indonesia, we also did not find REDD+ projects targeting areas with the highest carbon stocks. Instead, they seemed well positioned to deliver biodiversity gains because they tended to be located in areas with higher potential species richness (of total and threatened species).
One factor which may explain why REDD+ projects in Indonesia tended to be located in areas important for biodiversity is that REDD+ development in Indonesia has been spearheaded by conservation NGOs. Such project developers may be seeing REDD+ as a novel funding stream for conservation rather than simply seeking to maximize potential carbon revenues. Our results for Indonesia are consistent with findings from studies in Tanzania (Lin et al. 2014) andBrazil (De Barros et al. 2014), which show evidence of REDD+ initiatives spatially targeting high biodiversity areas. The REDD+ project areas may tend to have lower than average carbon stock because remaining forests outside PAs have mostly been logged (Margono et al. 2014). We also found that many REDD+ projects in our sample are pursuing reforestation and forest restoration as their key project activities, we expect such projects with aims to enhance forest carbon stock to be located in degraded or secondary forests, with perhaps lower than average carbon content.
Contribution of REDD+ to Conservation in PAs
Implementing REDD+ in PAs has been criticized as not being "additional" (Macdonald et al. 2011) because supposedly PAs are already conserved. However, given the underfunding of many PAs worldwide, it could be argued that improved funding could result in additional gains (Macdonald et al. 2011). Despite their protected status, many PAs in Indonesia are under continuing threat; over 12% of primary forest loss in Indonesia (2000-2012) is located in PAs (Margono et al. 2014), and enforcement is lax (Gaveau et al. 2012). Similarly, we found that PAs were not completely spared from the threat of deforestation; at least 11% (or >2 million ha) of PA area was in areas predicted to have medium to high deforestation threat. We found evidence that REDD+ is indeed being used to support conservation in Indonesia's PAs; at least 25% of REDD+ project boundaries overlapped with PAs (Supporting Information). If REDD+ funding could be used to increase the effectiveness of PAs, the benefits for biodiversity could be large. The REDD+ projects located adjacent to current PAs could also play an important role in softening the matrix, which would reduce the effective isolation of species in the PAs and improve population viability (Jantz et al. 2014).
Priorities for Achieving Biodiversity Cobenefits with REDD+
Peat swamp forests in Indonesia have global importance in climate mitigation and they are highly threatened because they represent the last frontiers for production of food, pulp, and biofuels (Posa et al. 2011). Recent findings show that 43% (2.6 million ha) of primary forest loss in Indonesia (2000-2012) took place in peatlands, which have an overall increasing rate of loss greater than lowland primary forests (Margono et al. 2014). A large number of REDD+ projects are located in carbon-rich peat swamp forests (Harrison & Paoli 2012). We also found this to be true; however, the total area covered by these projects was much smaller than the area covered by projects on mineral soils (Supporting Information). Highly threatened lowland forests, such as those in the lowlands of Borneo and Sumatra, should remain a priority for future REDD+ planning despite having below-average carbon content. Large expanses of selectively logged forests in Indonesia are now degraded and under high threat of conversion because these are prime agriculture lands where the Indonesian government intends to locate future palm-oil plantations in an attempt to divert palm-oil development away from carbon-rich peat swamp forests and pristine mineral soil forests (Gingold 2010). Margono et al. (2014) found that from 2000 to 2012, 98% (15.8 million ha) of forest loss took place in degraded forests. However, even heavily logged forests can be of high conservation value (Struebig et al. 2013). Meijaard and Sheil (2007) estimate that about 75% of Bornean orangutans (Pongo pygmaeus) live in logging concessions, and Sitompul et al. (2013) found that at least 1.6 million ha of Sumatran elephant (Elephas maximus sumatranus) habitat is in active logging concessions or in previously logged areas. These forests contain important biodiversity that would be reduced if they were logged again or cleared for oil palm or pulpwood plantations (Edwards et al. 2012). Opportunities for biodiversity in the REDD+ mechanism do not rely on the spatial congruence between carbon and biodiversity alone.
The REDD+ policies are important if biodiversity conservation is to be integrated into the national REDD+ architecture (Phelps et al. 2012). Biodiversity-specific management will need to be incorporated in the planning, design, and implementation of REDD+ on the ground (Martin et al. 2013) because protecting existing forest carbon stocks alone will not automatically protect other forest values (Huettner 2012).
Cost of Delivering Biodiversity Cobenefits in REDD+
Our results show that first-generation REDD+ projects in Indonesia are not necessarily located in the highest threat areas. This is consistent with the findings of Cerbu et al. (2011), who showed that predicted future deforestation appeared to be less of a criteria among first-generation developers for the location of REDD+ projects than the interests of NGOs or government agencies. Early REDD+ projects have built on prior forest management approaches, such as integrated conservation and development projects, as a springboard for REDD+ (Minang & van Noordwijk 2013) and a testing ground for proof of concept (Murdiyarso et al. 2012). The REDD+ projects in our study are in the early stages of development and are operating largely from bilateral REDD+ funding. As the REDD+ mechanism develops, the conditions under which project location is selected will differ; the non-colocation of carbon and biodiversity priority areas in Indonesia highlights an important structural feature which will affect the cost of delivering biodiversity cobenefits in future REDD+ projects. It can be assumed, based on our findings, that REDD+ projects located in forests most important for biodiversity will cost more per unit of carbon delivered than those located in high carbon forests because forests with the highest biodiversity tend to have low carbon densities but high threat to future deforestation due to high agriculture rent (Busch et al. 2010). Our results show that expanding REDD+ in forest with the lowest deforestation threat (generally on cheaper land) will have low incremental benefits for both biodiversity and carbon. We recommend that future research explicitly assess the costs associated with locating REDD+ projects in forests most important for biodiversity conservation, in light of the limited colocation between carbon and biodiversity we found. A future regulatory mechanism is likely to focus on cost-effective delivery of carbon benefits and not the large-scale delivery of noncarbon benefits (Busch 2013). Biodiversity conservation in the context of REDD+ is therefore likely to require additional investment (Phelps et al. 2012). Options include the introduction of premiums for the delivery of biodiversity benefits (Dinerstein et al. 2013), to allow REDD+ credits to protect forests that are carbon priorities, and use of supplementary funds to protect biodiversity priority areas even when they exhibit low carbon content (Venter et al. 2013). It is an empirical question which of these strategies would be more cost-effective under different contextual preconditions.
We found that patterns of biodiversity varied strongly among taxa and depended on the measure of biodiversity. It would therefore not be possible to place REDD+ projects in areas which are universally good for all measures of biodiversity. In Indonesia carbon stocks correlate poorly with all measures of biodiversity both at the national level and within major islands. However, REDD+ projects under development in Indonesia were located in areas with below-average carbon stock but relatively high biodiversity (according to most measures we used), possibly reflecting the prominent role of conservation NGOs in the development of these first-generation REDD+ projects. Although nearly one-quarter of REDD+ project area was located where deforestation threat was predicted to be relatively high, the majority of REDD+ project area was not in highly threatened forests. This limits the opportunity to achieve the greatest benefits for both emissions reductions and biodiversity conservation. The patterns of biodiversity, threat, and locations of REDD+ projects in Indonesia suggest that biodiversity cobenefits could be achieved through REDD+ in Indonesia, especially if future expansion focused on areas under high deforestation threat. As the world looks toward a global mechanism to address climate change to be agreed Conservation Biology Volume 29, No. 5, 2015 upon at the 21st Conference of Parties in Paris at the end of 2015, our findings make an important contribution to debates surrounding the design of REDD+ to maximize the potential for cobenefits. The realized benefits of any REDD+ network will, of course, depend not only on the design and spatial planning but also on the effectiveness of interventions on the ground. |
14721700 | s2orc/train | v2 | 2014-10-01T00:00:00.000Z | 2006-11-10T00:00:00.000Z | Finite quantum environments as thermostats: an analysis based on the Hilbert space average method
We consider discrete quantum systems coupled to finite environments which may consist of as little as one particle, in contrast to the standard baths which usually consist of continua of oscillators, spins, etc. We find that such finite environments may nevertheless act as thermostats, i.e., equilibrate the system, though not necessarily in the way predicted by standard open-system techniques. To show this, we apply a novel technique called the Hilbert Space Average Method (HAM) and verify its results numerically.
Introduction
Due to the linearity of the Schrödinger equation, concepts like ergodicity or mixing are, strictly speaking, absent in quantum mechanics. Hence the tendency towards equilibrium is not easy to explain. However, apart from a few ideas [1,2], the approaches to thermalization in the quantum domain center around the idea of a thermostat, i.e., some environmental quantum system (bath, reservoir) enforcing equilibrium upon the considered system. Usually it is assumed that the classical analog of this bath contains an infinite number of decoupled degrees of freedom.

Theories addressing such scenarios are the projection operator techniques (such as the Nakajima-Zwanzig or the time-convolutionless method, see [3]) and the path integral technique (Feynman-Vernon [4]). The projection operator techniques are exact if all orders of the system-bath interaction strength are taken into account, which is practically unfeasible. However, assuming weak interactions and accordingly truncating at leading order in the interaction strength, which goes by the name of "Born approximation" (BA), produces an exponential relaxation behavior (cf. [5,6]) whenever the bath consists of a continuum of oscillators, spins, etc. The origin of statistical dynamics is routinely based on this scheme; if it breaks down, no exponential thermalization can a priori be expected.

In contrast to the infinite baths which are extensively discussed in the mentioned literature, we will concentrate in this article on finite environments. The models we analyze may all be characterized by a few-level system (S, the considered system) coupled to a many-level system (E, the environment) consisting of several relevant energy bands, each featuring a number of energy eigenstates (e.g., see Fig. 1). Thus, this may be viewed as, e.g., a spin coupled to a single molecule, a one-particle quantum dot, an atom or simply a single harmonic oscillator. Note that the spin, unlike in typical oscillator baths or the Jaynes-Cummings model, is not supposed to be in resonance with the environment's level spacing but with the energy distance between the bands. There are two principal differences between such a finite-environment level scheme and the level scheme of, say, a standard oscillator bath: i) The total number of levels within a band may be finite. ii) Even more importantly, from, e.g., the ground state of a standard bath there are infinitely many resonant transitions to the "one-excitation states" of the bath, but from all those states the resonant transitions lead back to only one ground state. Thus, the relevant bands of any infinite bath would consist of only one state in the lowest band and infinitely many states in the upper bands. In this paper we focus on systems featuring arbitrary numbers of states in any band. (For a treatment of finite baths under a different perspective, see [7,8].)
It turns out that for the above mentioned class of models standard methods do not converge and thus the unjustified application of the BA produces wrong results (cf. [9,10]). This holds true even and especially in the limit of weak coupling and arbitrarily dense environmental spectra. Nevertheless, as the application of the Hilbert Space Average Method (HAM) predicts, a statistical relaxation behavior can be induced by finite baths. It simply is not the behavior predicted by the BA. Thus, the principles of statistical mechanics in some sense apply below the infinite particle number limit and beyond the BA. This also supports the concept of systems being driven towards equilibrium through increasing correlations with their environments [8,11,12,13,14] rather than the idea of system and environment remaining factorizable, which is often attributed to the BA [3,4]. Our paper is organized as follows: First we introduce our class of finite environment models and the appropriate variables (Sect. 2.1, Sect. 2.2). Then we compute the short time dynamics of those variables (Sect. 2.3). Hereafter we introduce HAM and show in some detail how it can be exploited to infer the typical full time relaxation from the short time dynamics (Sect. 3). The theory is then verified by comparing the HAM predictions with numerically exact solutions of the time dependent Schrödinger equation for the respective models (Sect. 4). In the following Section the limits of the applicability of HAM which turn out to be the limits of the statistical relaxation itself are discussed (Sect. 5). Finally we conclude (Sect. 6).
Finite Environment Model
As just mentioned, we analyze a few-level system S, with state space H_S, coupled to a single many-level system E with state space H_E consisting of energy bands featuring, for simplicity, the same width and equidistant level spacing. A simple example is depicted in Fig. 1, with only two bands in the environment. The Hilbert space of states of the composite system is given by the tensor product H = H_S ⊗ H_E. In H_S let us introduce standard transition operators P̂_ij = |i⟩⟨j|, where |i⟩, |j⟩ are energy eigenstates of the considered system S. Furthermore, we define projection operators that implement projections onto the lower, respectively upper, band of the environment in H_E by Π̂_a = Σ_{n_a} |n_a⟩⟨n_a|, where the |n_a⟩ are energy eigenstates of E and a labels the band number. Those projectors meet the standard property Π̂_a Π̂_b = δ_ab Π̂_a. Thus, the number of eigenstates in band a is given by N_a = Tr{Π̂_a}. The complete Schrödinger-picture Hamiltonian of the model consists of a local and an interaction part, Ĥ = Ĥ_loc + V̂, where the local part reads Ĥ_loc = Ĥ_S + Ĥ_E with Ĥ_S = Σ_i E_i P̂_ii. Here we introduced the energy levels E_i of S and the mean band energies E_a of the environment. Note that [Π̂_a, Ĥ_E] = 0. For the special case depicted in Fig. 1 one gets a = 1, 2 only. The interaction may, in principle, be any Hermitian matrix defined on H. We choose to decompose it uniquely as follows: V̂ = Σ_ij P̂_ij ⊗ Ĉ_ij, where the Ĉ_ij themselves may be decomposed into band blocks, Ĉ_ij = Σ_ab Ĉ_ij,ab with Ĉ_ij,ab = Π̂_a Ĉ_ij Π̂_b. For our special case, the interaction V̂ and its decomposition are sketched in Fig. 2. For later reference we define the "coupling strengths" λ_ij,ab as the mean squared moduli of the block matrix elements, λ²_ij,ab = Tr{Ĉ_ij,ab Ĉ†_ij,ab}/(N_a N_b); due to the Hermiticity of the interaction, λ_ij,ab = λ_ji,ba is a real number. Conditions on those interaction strengths are discussed in more detail in Sec. 5.1, but in general we assume them to be weak compared to the local energies in S and E, i.e., to ∆E from (4). There are two different types of (additive) contributions to V̂: Ĉ-terms that induce transitions inside the system S (featuring i ≠ j) as well as terms which do not (featuring i = j). Since the first type exchanges energy between system and environment it is sometimes referred to as "canonical coupling" V̂_can (those terms are shaded grey in Fig. 2). The second type produces some entanglement, thereby causing decoherence (those terms are white in Fig. 2), but does not exchange energy and is therefore referred to as "microcanonical coupling" V̂_mic in the context of quantum thermodynamics (cf. [15]). We do not specify the interaction in more detail here. To keep the following theoretical considerations simple we only impose two further conditions on the interaction matrix. First we require that the different parts of V̂ as displayed in Fig. 2 are not correlated unless they are adjoints of each other; hence the corresponding mixed traces, Tr{Ĉ_ij,ab Ĉ_i'j',a'b'}, vanish unless the second block is the adjoint of the first. Furthermore, we demand the traces over the individual contributions of V̂ to vanish, i.e., Tr{Ĉ_ij,ab} ≈ 0.
Both constraints are common in the field and definitely apply to the numerical examples used. For all numerical investigations (see Sect. 4) we use complex Gaussian distributed random matrices with zero mean to model the interaction. Thus, the mentioned special conditions apply: only adjoint blocks are correlated, and the above traces are extremely small for Gaussian random numbers with zero mean. This interaction type has been chosen in order to keep the model as general and free from peculiarities as possible. For example, in the fields of nuclear physics or quantum chaos, random matrices are routinely used to model unknown interaction potentials. We do, however, analyze the dynamics generated by one single interaction, not the average dynamics of a Gaussian ensemble of interaction matrices.
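To make the model concrete, the following short script is an illustrative sketch (not taken from this paper) of how such a Hamiltonian can be set up numerically: a two-level system coupled to a two-band environment, with the resonant "canonical" coupling block drawn from a complex Gaussian random matrix. The normalisation of the coupling block by the parameter lam and the specific band discretisation are our own assumptions.

```python
# Illustrative sketch of the Fig. 1 model: two-level system S coupled to an
# environment E with two bands of N levels each; hbar = 1, energies in units of u.
import numpy as np

rng = np.random.default_rng(0)
dE, d_eps, N1, N2, lam = 25.0, 0.5, 500, 500, 5e-4   # splitting, band width, sizes, coupling

# local parts
H_S = np.diag([0.0, dE])                                          # E_0 = 0, E_1 = dE
E_env = np.concatenate([np.linspace(0.0, d_eps, N1),              # lower band (a = 1)
                        dE + np.linspace(0.0, d_eps, N2)])        # upper band (a = 2)
Ne = N1 + N2
H_loc = np.kron(H_S, np.eye(Ne)) + np.kron(np.eye(2), np.diag(E_env))

# band projectors Pi_a as diagonal 0/1 matrices (kept only for bookkeeping)
Pi1 = np.diag((np.arange(Ne) < N1).astype(float))
Pi2 = np.eye(Ne) - Pi1

# interaction V = P_01 (x) C_01 + h.c. with a Gaussian random resonant block:
# C_01 connects lower-band states to upper-band states (system de-excitation
# accompanied by environment excitation); all other blocks are set to zero here.
C01 = np.zeros((Ne, Ne), dtype=complex)
C01[N1:, :N1] = lam * (rng.normal(size=(N2, N1))
                       + 1j * rng.normal(size=(N2, N1))) / np.sqrt(2)
P01 = np.array([[0.0, 1.0], [0.0, 0.0]])                          # |0><1| of S
V = np.kron(P01, C01) + np.kron(P01.T, C01.conj().T)

H = H_loc + V
print(H.shape)   # (2002, 2002): ready for exact propagation or diagonalization
```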
Reduced Dynamics and Appropriate Variables
Of course we are mainly interested in the time evolution of the system S separately, i.e., we would like to find an autonomous, time-local equation for the dynamics of its reduced density matrix ρ̂. However, it turns out that an autonomous description in terms of ρ̂ is, in general, not feasible for finite environments. We thus aim at finding an autonomous set of equations for the dynamics of a set of variables that contain slightly more information than ρ̂, such that from the knowledge of this set ρ̂ may always be computed.
We simply name the set here, and explain in the following the derivations of its dynamics. Consider the following operators: P̂_ij,a = P̂_ij ⊗ Π̂_a.
According to (2) we find P̂_ij,a P̂_kl,b = δ_jk δ_ab P̂_il,a, and thus the given operators together with zero form a group, which we mention here for later reference. Throughout this paper we think of the full system as always being in a pure state |ψ⟩⟨ψ|. Thus the expectation values of the operators (10) may be denoted as P_ij,a = ⟨ψ|P̂_ij,a|ψ⟩. For the dynamics of those expectation values we are going to derive an autonomous set of equations. In terms of those variables the reduced density matrix elements ρ_ij read ρ_ij = Σ_a P_ij,a, as may simply be computed from the definition of the reduced density matrix of S, ρ̂ = Tr_E{|ψ⟩⟨ψ|}.
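As a concrete illustration, the following minimal sketch (ours, with arbitrary dimensions and a random pure state) evaluates the variables P_ij,a for a pure state of the composite system and assembles the reduced density matrix of S from them according to (13).

```python
# Evaluate P_ij,a = <psi| P_ij (x) Pi_a |psi> for a random pure state and build
# rho_ij = sum_a P_ij,a following eq. (13); dimensions here are illustrative only.
import numpy as np

nS, N1, N2 = 2, 500, 500
Ne = N1 + N2
bands = [np.arange(0, N1), np.arange(N1, Ne)]       # index sets of the two bands

rng = np.random.default_rng(1)
psi = rng.normal(size=(nS, Ne)) + 1j * rng.normal(size=(nS, Ne))
psi /= np.linalg.norm(psi)                          # psi[i, n]: amplitude of |i> (x) |n>

def P_var(i, j, a):
    """P_ij,a = sum over n in band a of conj(psi[i, n]) * psi[j, n]."""
    n = bands[a]
    return np.vdot(psi[i, n], psi[j, n])

rho = np.array([[sum(P_var(i, j, a) for a in (0, 1)) for j in range(nS)]
                for i in range(nS)])
print(np.trace(rho).real)                           # -> 1.0 for a normalized state
```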
Short Time Dynamics
In order to find the full dynamics of the P's defined in (12), we start off by computing their short-time evolution in this section. To this end we change from the Schrödinger to the interaction (Dirac) picture, in which the originally constant interaction V̂ from Sect. 2.1 becomes time dependent, V̂(t) = e^{iĤ_loc t/ℏ} V̂ e^{−iĤ_loc t/ℏ}.

As is well known, in the interaction picture the time evolution may be written in terms of a propagator D̂(τ, t), i.e., |ψ(t + τ)⟩ = D̂(τ, t)|ψ(t)⟩ (from here on |ψ(t)⟩ refers to the interaction picture). The propagator D̂(τ, t) may be noted explicitly in terms of a Dyson expansion built from the operators Û_n, which are n-fold time-ordered integrals over the interaction V̂(t'), with T being the standard time-ordering operator (cf. App. A). The time evolution of the expectation values P_ij,a according to (12) reads P_ij,a(t + τ) = ⟨ψ(t + τ)|P̂_ij,a|ψ(t + τ)⟩, which, using (16), may also be written as P_ij,a(t + τ) = ⟨ψ(t)|P̂_ij,a(t + τ)|ψ(t)⟩ with P̂_ij,a(t + τ) = D̂†(τ, t) P̂_ij,a D̂(τ, t).

The above definition allows us to write the expectation value of P_ij,a at time t + τ (in the interaction picture) as an expectation value of some operator P̂_ij,a(t + τ) at time t. This particular form is well suited to assess that dynamics by HAM, as will be explained in the next section. If τ is short and the interaction is weak, the propagator may be approximated by a truncation of the Dyson series (17) to leading order, which is in this case second order. Let the truncated propagator be denoted as D̂_2(τ, t). This truncated propagator can typically be computed even if complete diagonalization is far beyond reach (for a more explicit treatment of D̂_2, cf. App. A). Thus the approximate form we are going to use in Sect. 3.2 reads P̂_ij,a(t + τ) ≈ D̂_2†(τ, t) P̂_ij,a D̂_2(τ, t).

Dynamical Hilbert Space Average Method
Definition and Calculation of the Hilbert Space Average
The Hilbert space average method (HAM) is in essence a technique to produce guesses for the values of quantities defined as functions of a wave function |ψ⟩ if |ψ⟩ itself is not known in full detail, only some features of it. In particular it produces a guess for some expectation value ⟨ψ|Ŝ|ψ⟩ [cf. (20)] if the only information about |ψ⟩ is the set of expectation values ⟨ψ|P̂_ij,a|ψ⟩ = P_ij,a mentioned below. Such a statement naturally has to be a guess, since there are in general many different |ψ⟩ that are in accord with the given set of P_ij,a but produce possibly different values for ⟨ψ|Ŝ|ψ⟩. The question here is whether the distribution of ⟨ψ|Ŝ|ψ⟩'s produced by the respective set of |ψ⟩'s is broad, or whether almost all those |ψ⟩'s yield ⟨ψ|Ŝ|ψ⟩'s that are approximately equal. It turns out that if the spectral width of Ŝ is not too large and Ŝ is high-dimensional, almost all individual |ψ⟩ yield an expectation value close to the mean of the distribution of ⟨ψ|Ŝ|ψ⟩'s (see Sect. 5 and [15]). The occurrence of such typical values in high-dimensional systems has recently also been exploited to explain the origin of statistical behavior in [16,17]. To find the above mean one has to average with respect to the |ψ⟩'s. We call this a Hilbert space average and denote it as ⟦Ŝ⟧ = ⟦⟨ψ|Ŝ|ψ⟩⟧_{⟨ψ|P̂_ij,a|ψ⟩ = P_ij,a}. This expression stands for the average of ⟨ψ|Ŝ|ψ⟩ over all |ψ⟩ that feature ⟨ψ|P̂_ij,a|ψ⟩ = P_ij,a but are uniformly distributed otherwise. Uniformly distributed means invariant with respect to all unitary transformations e^{iĜ} that leave the respective set of expectation values unchanged, i.e., ⟨ψ|e^{iĜ} P̂_ij,a e^{−iĜ}|ψ⟩ = ⟨ψ|P̂_ij,a|ψ⟩. Thus the respective transformations may be characterized by [Ĝ, P̂_ij,a] = 0 for all ij, a. Instead of computing the so-defined Hilbert space average (23) directly by integration, as done, e.g., in [15,18], we will proceed in a slightly different way here. To this end we change from the notion of an expectation value of a state to one of a density operator, ⟨ψ|Ŝ|ψ⟩ = Tr{Ŝ|ψ⟩⟨ψ|} (where we suppress the constraint on the expectation values in the notation for the moment). Exchanging the average and the trace, one may rewrite ⟦⟨ψ|Ŝ|ψ⟩⟧ = Tr{Ŝ α̂} with α̂ = ⟦|ψ⟩⟨ψ|⟧. To compute α̂ we now exploit its invariance properties.

Since the set of all |ψ⟩ that "make up" α̂ [that belong to the averaging region of (27)] is characterized by being invariant under the above transformations e^{−iĜ}, α̂ itself has to be invariant under those transformations, i.e., e^{−iĜ} α̂ e^{iĜ} = α̂.
This, however, can only be fulfilled if [Ĝ, α̂] = 0 for all possible Ĝ. Due to (24) the most general form of α̂ which is consistent with the respective invariance properties is α̂ = Σ_{ij,a} p_ij,a P̂_ij,a, where the coefficients p_ij,a are still to be determined. In principle the above sum could contain addends of higher order, i.e., products of the P̂-operators, but according to the properties of the projection and transition operators [especially (11)], those products reduce to a single P̂-operator or zero (in other words, the P̂_ij,a form a group); hence (29) is indeed the most general form. How are the coefficients p_ij,a to be determined? From the definition of α̂ in (27) it follows that Tr{P̂_ij,a α̂} = P_ij,a. By inserting (29) into (30) and exploiting (11), the coefficients are straightforwardly found to be p_ij,a = P_ji,a/N_a. Thus, we finally get for the Hilbert space average (26) ⟦⟨ψ|Ŝ|ψ⟩⟧ = Tr{Ŝ α̂} with α̂ = Σ_{ij,a} (P_ji,a/N_a) P̂_ij,a.
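The following toy-sized sketch (ours, with made-up dimensions and occupation data) builds the Hilbert-space-average state α̂ from a given set of P variables and uses it to produce the HAM guess Tr{Ŝα̂} for an arbitrary observable; it restricts itself to diagonal variables, for which the index ordering of the coefficients is immaterial.

```python
# Build alpha = sum_ij,a p_ij,a * P_ij (x) Pi_a with p fixed by the P's, and use
# Tr{S alpha} as the HAM guess for <psi|S|psi>.  Only diagonal P's are set here.
import numpy as np

nS, Na = 2, [3, 4]                         # two system levels, two small bands
Ne = sum(Na)
bands = [range(0, Na[0]), range(Na[0], Ne)]

P = {(1, 1, 0): 1.0}                       # e.g. S excited, environment in band 0

alpha = np.zeros((nS * Ne, nS * Ne), dtype=complex)
for (i, j, a), val in P.items():
    Pij = np.zeros((nS, nS)); Pij[i, j] = 1.0
    Pia = np.zeros((Ne, Ne))
    for n in bands[a]:
        Pia[n, n] = 1.0
    alpha += (val / Na[a]) * np.kron(Pij, Pia)   # weight P/N_a on each band state

S = np.diag(np.arange(nS * Ne, dtype=float))     # an arbitrary test observable
print(np.trace(S @ alpha).real)                  # HAM guess for <psi|S|psi>
print(np.trace(alpha).real)                      # -> 1.0: alpha is a density operator
```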
Iterative Guessing
To find the (reduced) autonomous dynamics of the P_ij,a from HAM we employ the following scheme: based on HAM we compute a guess for the most likely value of the set P_ij,a at time t + τ [i.e., P_ij,a(t + τ)], assuming that we knew the values of the P_ij,a at time t [i.e., P_ij,a(t)]. Once such a map P_ij,a(t) → P_ij,a(t + τ) is established, it can of course be iterated to produce the full time dynamics. This of course implies repeated guessing, since in each iteration step the guess from the step before has to be taken for granted. However, if each single guess is sufficiently reliable, i.e., the spectrum of possible outcomes is rather sharply concentrated around the most frequent one (which one guesses), even repeated guessing may yield a good "total" guess for the full time evolution. The scheme is schematically sketched in Fig. 3. Some information about the reliability of HAM guesses has already been given in Sect. 3.1; the applicability of the whole scheme will be analyzed more thoroughly in Sect. 5. To implement the above scheme we consider the equation one gets from inserting P̂_ij,a(t + τ) for Ŝ in (32), i.e., ⟦P_ij,a(t + τ)⟧ = Tr{P̂_ij,a(t + τ) α̂(t)}. This is the Hilbert space average (HA) over all possible P_ij,a(t + τ) under the condition that one had at time t the set P_ij,a(t). Thus the (iterative) guess now simply consists of replacing the HA by the actual value, i.e., P_ij,a(t + τ) ≈ Tr{P̂_ij,a(t + τ) α̂(t)}. The evaluation of the right-hand side requires some rather lengthy calculations, but can be done without further assumptions or approximations. The interested reader may find the details in App. B. Here we simply give the results and proceed.

For P's featuring i = j one finds the relations (35), and for the P's with i ≠ j the relations (36), where the f(τ)'s are defined as twofold time integrals over environmental correlation functions (37). Note that (35) and (36) are now autonomous (closed) in terms of the respective P's and there is no more explicit dependence on the absolute time t. It turns out (see below) that the f(τ)'s are approximately linear in τ. Hence for the squared absolute values of the P's with i ≠ j (which we will primarily analyze rather than the P's themselves) one finds, to linear order in τ, the relation (39).
Correlation Functions and Transition Rates
In order to interpret (35) and (39) appropriately, we need some information about the correlation functions f(τ). Apparently those f(τ)'s are essentially integrals over the same environmental temporal correlation functions g(τ″) that appear in the memory kernels of standard projection operator techniques. (Only here they explicitly correspond to transitions between different energy subspaces of the environment.) Thus we analyze the g(τ″)'s from (37) more thoroughly. Their real parts (which are what eventually matters) simply consist of a sum of weighted cosine functions with different frequencies. The set of those weights essentially gives the Fourier transform of the corresponding correlation function. First of all, only if the transition within the system (j → i) is in resonance with the energy gap between the bands a, b will g(τ) contain any small-frequency contributions at all. Hence, only in this case will the temporal integrations, i.e., the corresponding f's, be nonzero. In the resonant case the frequency spectrum will stretch from zero to a frequency on the order of δε/ℏ, at least if the interaction gives rise to the corresponding transitions of the environment. Thus g(τ) will decay on a timescale on the order of τ_c ≈ ℏ/δε. For τ > τ_c, g(τ) will be essentially zero. This means that f(τ), which is a twofold temporal integration of g(τ), will grow linearly in time, i.e., f(τ) = γτ after τ ≈ τ_c. The factor γ is given by the area under the curve g(τ) up to approximately τ_c. If τ_c were infinite, γ would only be determined by the weight of the zero-frequency terms of g(τ). Since τ_c is finite, γ is related to the "peak density" of g within the frequency range from zero to ∆ω, with ∆ω ≪ 1/τ_c ≈ δε/ℏ. This means γ is eventually given by the sum of all weights that correspond to frequencies from zero to ∆ω, divided by ∆ω and multiplied by π. Since in our model the |⟨n_a|Ĉ_ij|n_b⟩| are Gaussian distributed random numbers, we eventually find an explicit linear-in-τ expression (43) for the f(τ)'s. This result can apparently be connected to the transition rate as obtained from Fermi's Golden Rule. Let γ_ij,ab be the Golden Rule transition rate for a transition of the full system characterized by j → i and b → a; then the growth factor of the corresponding f is essentially given by this rate. Since the Golden Rule transition rates depend on the state densities around the final states, the respective "forward" and "backward" rates are, for equal bandwidths, connected as γ_ij,ab/N_a = γ_ji,ba/N_b (44).
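For orientation, a small numerical sketch of these scales follows. The explicit Golden Rule form gamma = 2*pi*lambda^2*N_final/(hbar*delta_eps) used below is the textbook expression for a constant coupling strength lambda to a band of N_final states spread over delta_eps; the text above only fixes the proportionalities, so this concrete formula is our assumption (it does, however, reproduce the relaxation time quoted for the example of Sect. 4).

```python
# Correlation time and assumed Golden Rule rate for the banded random-matrix
# coupling, with the parameters of the numerical example in Sect. 4
# (hbar = 1, energies in units of u, times in units of hbar/u).
import numpy as np

d_eps = 0.5       # band width
N     = 500       # number of states in the final band
lam   = 5e-4      # coupling strength of the canonical interaction

tau_c = 1.0 / d_eps                          # g(tau) decays on ~ hbar/d_eps = 2
gamma = 2.0 * np.pi * lam**2 * N / d_eps     # assumed Golden Rule transition rate

print(f"tau_c   ~ {tau_c:.1f}")              # ~ 2
print(f"1/gamma ~ {1.0/gamma:.0f}")          # ~ 640, the T_1 scale of Fig. 6
```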
Reduced Equations of Motion
Inserting (43) and (44) into (35) and (39) allows for a computation of the full dynamics of the P_ii,a and the |P_ij,a|² through iteration. The iteration has to proceed in timesteps that are longer than τ_c but shorter than τ_d; the latter will be explained in Sect. 5.2. Assuming that τ_c is short compared to the timescale of the relaxation dynamics, the iteration may be written as a set of differential equations, (45) and (46). (This form is in accord with recent results from novel projection operator techniques [9].) We now analyze this set of equations in a little more detail. The P_ii,a may be interpreted as the probability to find the joint system in state i with respect to S and in band a with respect to the environment. Equation (45) obviously has the form of a master equation, i.e., the overall probability is conserved and there is a stable fixpoint which sets the equilibrium values of the P_ii,a. According to (46) the P_ij,a will all decay to zero. Taking (13) into account, this implies that ρ̂ will reach an equilibrium state which is diagonal in the basis of the energy eigenstates of S. As already mentioned below (40), transitions occur only between resonant states, i.e., the γ_im,ab are zero unless E_i + E_a = E_m + E_b, where E_a, E_b are the corresponding mean band energies. Thus, if we define the approximate full energy of some state as E = E_i + E_a, we may label full-system states by i, E (m, E) rather than i, a (m, b), and nonzero transition rates by im, E rather than im, ab, i.e., P_ii,a → P_i^E, γ_im,ab → γ_im^E. With this index transformation and exploiting (44) we may rewrite (45) as (48), in which N(E − E_m) is the dimension of the environmental band with energy E_b = E − E_m. This form reflects the fact that the dynamics of the occupation probabilities with different overall energies are decoupled. We furthermore find from (48) for the equilibrium values P_i^E(t → ∞) ∝ N(E − E_i). Thus, the equilibrium state is in accord with the a priori postulate in the sense that the probability to find the full system in some subspace is proportional to the dimension of this subspace. However, it is in general impossible to transform (48) into a closed set of equations for the occupation probabilities ρ_ii = Σ_E P_i^E of S alone. This may only be done if either only one energy subspace E is occupied at all, or if the transition rates γ_im^E are independent of E and the number of states of the environmental bands N_a scales as N_a ∝ exp(βE_a). Then (48) may be summed over E, yielding (49), which is the usual closed form for the dynamics of the ρ_ii, with the standard canonical equilibrium state ρ_ii(t → ∞) ∝ exp(−βE_i). Thus it is essentially the exponentially growing density of states of typical infinite environments that allows for a closed dynamical description of the considered system S alone and produces the standard Gibbsian equilibrium state.
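As a concrete illustration of the master-equation structure just described, the short sketch below (ours, not the paper's) integrates the rate equations for the two resonant occupation probabilities of the Fig. 1 model, with forward and backward rates chosen proportional to the dimension of the target band; the equilibrium value N_1/(N_1 + N_2) quoted in Sect. 4 then comes out automatically.

```python
# Rate equations for P_exc = P_(1, lower band) and P_gnd = P_(0, upper band):
# forward and backward rates are proportional to the dimension of the final band,
# so detailed balance gives P_exc(inf)/P_gnd(inf) = N1/N2.
import numpy as np

N1, N2   = 500, 500
gamma_fw = 1.57e-3                 # rate for (S excited, band 1) -> (S ground, band 2)
gamma_bw = gamma_fw * N1 / N2      # backward rate (equal here, since N1 = N2)

dt, steps = 1.0, 4000
P_exc, P_gnd = 1.0, 0.0            # initial condition of Fig. 6
for _ in range(steps):
    flow   = gamma_fw * P_exc - gamma_bw * P_gnd
    P_exc -= dt * flow
    P_gnd += dt * flow

print(round(P_exc, 3))             # -> 0.5 = N1/(N1 + N2), i.e. rho_11 at equilibrium
```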
Thermalization and Decoherence
In order to investigate the relation between the decay of diagonal and off-diagonal elements of the reduced density operator of S, we concretely analyze as an example a slightly modified model featuring the above mentioned structure (exponential state density, equal rates) yielding autonomous dynamics forρ. The model is depicted in Fig. 4. For simplicity we consider only three environmental bands with the same density of states, i.e., the exponential prefactor from (49) vanishes, β = 0. (This eventually implies infinite temperature.) As mentioned the rates that control the dynamics of the diagonal elements (canonical dynamics, thermalization) have to be equal, thus, we choose λ 01,12 = λ 01,23 = λ can . The rates that control the dynamics of the off-diagonal elements (microcanonical dynamics, decoherence) also have to be equal among themselves, but may differ from the "canonical rates". Thus, we choose λ 00,33 = λ 00,22 = λ 11,22 = λ 11,11 = λ mic . Since all other parts of the interaction would not fulfill the resonance condition anyway, we set them to zero (cf. Fig. 5).
Plugging those model parameters into (49) yields a closed pair of rate equations for the occupation probabilities ρ_00 and ρ_11 of S. Defining the thermalization time T_th accordingly, the solution of this set of differential equations is just an exponential decay according to e^{−t/T_th}.

Apparently the P_ij,a do not "mix" with respect to different a [cf. (46)]. Thus, if initially the environment only occupies, e.g., band 2, then for the full dynamics the off-diagonal element of ρ̂ will simply be given by ρ_ij = P_ij,2 [cf. (13)]. In this case we find from (46) an exponential decay of the off-diagonal element. Using the definition ξ = λ_mic/λ_can, we find for the decoherence time, i.e., the time scale on which |ρ_10| decays, the explicit expression (54). In the absence of microcanonical coupling terms (λ_mic = 0 → ξ = 0) we get 2T_th = T_dec, which is a standard result in the context of atomic decay, quantum optics, etc. Nevertheless, for increasing ξ decoherence may become arbitrarily faster than thermalization, which is a central feature of models that are supposed to describe the motion of particles subject to heat baths, like, e.g., the Caldeira-Leggett model. Thus our model exhibits a continuous transition between those archetypes of behavior.
Relaxation Dynamics in Model Systems
In this section concrete models are introduced and the corresponding time-dependent Schrödinger equations are solved. Then the results are compared to predictions from HAM and from standard open system methods. Our first model is of the type depicted in Fig. 1. The two-level system features a splitting of ∆E = 25u. Here and in the following we use an arbitrary energy unit u. The environment consists of two bands of width δε = 0.5u with the same number of levels, N = N_1 = N_2 = 500, in each one, separated also by ∆E = 25u. As already mentioned, we use complex Gaussian random matrices to model the coupling, thus satisfying the criteria of Sect. 2.1. First, we choose a purely canonical interaction with coupling strength λ_can = 5 · 10⁻⁴u (λ_mic = 0). At first we analyze the decay behavior of two different pure product initial states. The environmental part of both initial states is a pure state that only occupies the lower band but is apart from that chosen at random. Although it is pure, with respect to band occupation numbers E's initial state can be considered an approximation to a Gibbs state with δε ≪ kT_E ≪ ∆E, T_E being the temperature of the environment (in the example at hand, e.g., kT_E ≈ 5u). For small δε the temperature may be arbitrarily small. Initially, the system S is chosen, first, to be completely in its excited state and, second, to be in a 50:50 superposition of ground and excited state. The probability [density matrix element ρ_11(t)] to find the system excited as produced by the first initial state is shown in Fig. 6. Since the first initial state does not contain any off-diagonal elements, we find |ρ_01|² ≈ 0 for all times. This is different for the second initial state investigated in Fig. 7: it starts with |ρ_01|² = 0.25 and is thus well suited to study the decay of the coherence. (The diagonal elements of the second state are already at their equilibrium value ρ_11(0) = 0.5 in the beginning and exhibit no further change.) By numerically solving the time-dependent Schrödinger equation for the full model's pure state we find for the reduced state of the system an exponential decay, up to some fluctuations, as depicted in Fig. 6 and Fig. 7. (For the bath's initial state being a true mixed Gibbs state one can even expect the fluctuations to be smaller, since fluctuations corresponding to the various pure addends of the Gibbs state will partially cancel each other.) The solid lines are the HAM results as computed from (45) and (46). Obviously, they are in accord with the exact result.
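A scaled-down version of such a "numerically exact" reference calculation can be sketched as follows (our own illustration, with N = 200 levels per band instead of 500 so that it runs in seconds; the normalisation of the random coupling block is again our assumption). It builds the full Hamiltonian, propagates the pure state with a fixed-step matrix-exponential propagator, and records ρ_11(t).

```python
# Exact propagation of a small instance of the Fig. 1 model and readout of rho_11(t).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
dE, d_eps, N, lam = 25.0, 0.5, 200, 5e-4        # units of u; hbar = 1

E_env = np.concatenate([np.linspace(0, d_eps, N), dE + np.linspace(0, d_eps, N)])
H_loc = np.kron(np.diag([0.0, dE]), np.eye(2 * N)) + np.kron(np.eye(2), np.diag(E_env))

C = np.zeros((2 * N, 2 * N), dtype=complex)     # canonical block: lower -> upper band
C[N:, :N] = lam * (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2)
P01 = np.array([[0.0, 1.0], [0.0, 0.0]])
H = H_loc + np.kron(P01, C) + np.kron(P01.T, C.conj().T)

# initial state: S excited, E in a random pure state of the lower band
env = np.zeros(2 * N, dtype=complex)
env[:N] = rng.normal(size=N) + 1j * rng.normal(size=N)
env /= np.linalg.norm(env)
psi = np.kron(np.array([0.0, 1.0]), env)

dt = 20.0                                       # sampling step in units of hbar/u
U = expm(-1j * H * dt)                          # exact one-step propagator
rho11 = []
for _ in range(200):
    psi = U @ psi
    rho11.append(np.linalg.norm(psi.reshape(2, 2 * N)[1]) ** 2)

print(np.round(rho11[::40], 3))   # expected: decay from ~1 towards the HAM value ~0.5
```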
The full model is Markovian in the sense that bath correlations decay much faster than the system relaxes; concretely, bath correlations decay on a time scale of τ_c ≈ ℏ/δε = 2 (all times given in units of ℏ/u), whereas the system relaxes on a timescale T_1 ≈ 640 (cf. Fig. 6). Nevertheless, S's excitation probability deviates significantly from what the standard methods (BA) predict (cf. Fig. 6): the beginning is described correctly, but rather than ending up at temperature T = T_E, as the BA predicts for thermal environment states [3], S ends up at temperature T = ∞, i.e., equal occupation probabilities for both levels.

The equilibrium value of S's excitation probability is given by ρ_11(∞) = N_1/(N_1 + N_2). Thus, only if N_2 ≫ N_1 (infinite bath) does the BA produce correct results. Note, however, that it is not the finite density of states that causes the breakdown of the BA, since the BA produces wrong results even for N_1, N_2 → ∞ as long as the above condition is not met. Furthermore, a condition often attributed to the BA, namely that S and E remain unentangled, is not fulfilled: when S has reached equilibrium, the full system is in a superposition of |S in the excited state⟩ ⊗ |E in the lower band⟩ and |S in the ground state⟩ ⊗ |E in the upper band⟩. This is a maximally entangled state with two orthogonal addends, one of which features a bath population corresponding to T_E ≈ 0, the other a bath population inversion, i.e., even a negative bath temperature. These findings contradict the concept of factorizability; nevertheless, HAM predicts the dynamics correctly. This is in accord with a result from [9,10,18] claiming that an evolution towards local equilibrium is always accompanied by an increase of system-bath correlations. However, the off-diagonal element evolution coincides with the behavior predicted by the BA. Thus, in spite of the system's finiteness and the reversibility of the underlying Schrödinger equation, S evolves towards maximum local von Neumann entropy (see Fig. 6 and Fig. 7), which supports the concepts of [12].

To show that it is indeed possible to get different time scales for the decay of diagonal elements of the density matrix (thermalization) and the decay of off-diagonal elements (decoherence) according to pure Schrödingerian dynamics, we consider the concrete model addressed in Sect. 3.5 with parameters N = 500, δε = 0.5u, ∆E = 25u and λ_can = 5 · 10⁻⁴u. However, we choose the microcanonical interaction strength λ_mic, measured in units of the canonical one, between ξ = 0 and ξ = 5. As an initial state we prepared a 90:10 superposition of ground and excited state in the system, with the environment somewhere in the middle band. This corresponds to a finite off-diagonal element in the beginning. We have computed the Schrödinger dynamics of both the diagonal and off-diagonal elements of the two-level system. By fitting an exponential to the off-diagonal element we get the decoherence time T_dec as a function of the microcanonical coupling strength. In Fig. 8 we show this numerical decoherence time T_dec in comparison with the theoretical prediction of HAM, i.e., (54). As can be seen, the numerical result is in very good accordance with our theory.
Accuracy of HAM
Since HAM is just a "best guess theory", the exact evolution follows its predictions with different accuracies for different initial states, even if all conditions on the model are fulfilled. To analyze this for, say, ρ_11(t), we introduce D², the time-averaged quadratic deviation of the HAM prediction from the exact (Schrödinger) result. Thus, D is a measure of the deviations from the predicted behavior. The results of the investigation for our model (Fig. 1) are condensed in the histogram (Fig. 9, ν = 3, N = 500). The set of respective initial states is characterized by a probability of 3/4 for |S in its excited state⟩ ⊗ |E in its lower band⟩ and 1/4 for |S in its ground state⟩ ⊗ |E in its upper band⟩. Within these restrictions the initial states are uniformly distributed in the corresponding Hilbert subspace. Since all of them are correlated, the application of a product projection operator technique would practically be unfeasible. However, as Fig. 9 shows, the vast majority of them follow the HAM prediction quite closely, although there is a typical fluctuation of D = √2 · 10⁻², which is small compared to the features of the predicted behavior (which are on the order of one) and is due to the finite size of the environment (cf. also the fluctuations in Fig. 6).
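The deviation measure itself is straightforward to evaluate; the sketch below (ours) computes D² as the time-averaged squared difference between a HAM trajectory and a reference trajectory. Here the "exact" curve is only a synthetic stand-in (the HAM curve plus noise of the reported magnitude); in an actual analysis it would be the numerically exact ρ_11(t) from the Schrödinger propagation.

```python
# D^2 = time average of (rho11_HAM(t) - rho11_exact(t))^2 over the simulated window.
import numpy as np

t  = np.linspace(0.0, 3000.0, 301)
T1 = 640.0
rho11_ham = 0.5 + 0.5 * np.exp(-t / T1)          # HAM / rate-equation prediction

rng = np.random.default_rng(4)                   # synthetic stand-in for the exact run
rho11_exact = rho11_ham + rng.normal(scale=1.4e-2, size=t.size)

D2 = np.mean((rho11_ham - rho11_exact) ** 2)
print(np.sqrt(D2))                               # ~ sqrt(2)*1e-2, the value reported for N = 500
```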
In Fig. 10 the dependence of D² on the number of states of E is displayed for N = 10, . . . , 800 (one evolution for each size of the environment). At N = 500, as used in the above accuracy investigation, we find the same typical fluctuation, whereas for smaller environments the typical deviation is much bigger. We find that the squared deviation scales as 1/N with the size of the environment, thus making HAM a reasonably reliable guess for many-state environments.
Limits for the Applicability
The dynamical considerations of Sect. 3 are only guesses, but as guesses they are valid for any initial state, regardless of whether it is pure, correlated, entangled, etc. Thus, in contrast to the standard Nakajima-Zwanzig and TCL methods, HAM allows for a direct prediction of the typical behavior of the system. Nevertheless, for deriving the above HAM rate equations we have claimed (and already discussed) that there is a reasonably well-defined correlation time τ_c, that the truncation of the Dyson series is justified, and that actual expectation values may be replaced by their Hilbert space averages. In the following we will investigate the validity of these approximations in more detail.
Truncation of the Dyson Series
In (22) we truncated the Dyson series, arguing that for short times τ and small interaction strength this can be a reasonable approximation. We require, however, τ > τ_c. Thus, for given interaction strength, the time for which the truncation should hold, τ_d, should exceed the correlation time, i.e., τ_d > τ_c. How can τ_d be at least approximately determined? Consider the deviation |δψ(t, τ)⟩ of a state at time t + τ from the state at time t, i.e., |δψ(t, τ)⟩ := |ψ(t + τ)⟩ − |ψ(t)⟩, and let the norm of this deviation be denoted as ∆(t, τ) = ⟨δψ(t, τ)|δψ(t, τ)⟩. If we now evaluate ∆(t, τ) by means of a truncated Dyson series and find it small compared to one, it is consistent to assume that higher orders are negligible for the description of |ψ(t + τ)⟩. If we, on the contrary, find it to be large compared to one, the truncation is definitely not justified. Thus, we implicitly define τ_d roughly through ∆(t, τ_d) ≈ 1.

Truncating the Dyson series to leading order yields an explicit expression for ∆(t, τ) (cf. App. A). Since we in general do not know |ψ(t)⟩ in detail, but only the P's, we replace, following again the argument in Sect. 3.1, the actual value of ∆(t, τ) by its Hilbert space average ⟦∆(t, τ)⟧_{⟨ψ|P̂_ij,a|ψ⟩ = P_ij,a(t)}. Exploiting (86) we find an expression which, taking (43) and (44) into account and for times τ > τ_c, eventually yields a form linear in τ, where the second form refers to the notation introduced in and below (47). Thus, ∆(t, τ) grows linearly in τ. Since all the probabilities P_i^E(t) sum up to one at all times, the growth is essentially determined by the rates γ_mi^E. Since already the sum of the P_i^E(t) over i and some fixed overall energy is a constant of motion, rates belonging to energy subspaces E which are not occupied in the beginning will never influence ∆(t, τ). Hence one should consider τ_d^E, the time for which the truncation of the Dyson series holds within the invariant energy subspace E. From (59) we find a rough estimate for τ_d^E, essentially the inverse of the sum of the relevant rates, where N_S is the number of eigenstates of S. Comparing this to (45) and (46) it becomes obvious that this is also roughly the time scale for the relaxation dynamics of the P's. This implies that the claim τ_d^E > τ_c, which guarantees the applicability of the truncation of the Dyson series, is equivalent to claiming that the typical relaxation time of the P's should be long compared to the typical correlation time τ_c. The latter has already been claimed before (7) in order to transform the iteration scheme into a differential equation. This condition can easily be controlled by changing the overall interaction strength λ. We find that for values of λ that violate the above condition the agreement between the numerical solution and the HAM prediction vanishes.
Hilbert Space Variance
Here we quite briefly consider the assumption that gave rise to the replacement of actual expectation values by their Hilbert space averages in Sect. 3.1. As already mentioned, such a replacement can only yield a reasonable result if the largest part of the possible expectation values is indeed close to the corresponding Hilbert space average. To analyze this we consider the Hilbert space variance of, say, ⟨ψ|Ŝ|ψ⟩, i.e., ∆_H(Ŝ) = ⟦⟨ψ|Ŝ|ψ⟩²⟧ − ⟦⟨ψ|Ŝ|ψ⟩⟧². If ∆_H(Ŝ) is small, the above condition is satisfied. We would like to evaluate this for Ŝ := P̂_ij,a(t + τ) − P̂_ij,a(t) under the restriction of given P_ij,a(t). This, however, turns out to be mathematically rather involved and we have not managed to do so yet. But for the Hilbert space variance of any Hermitian operator Ŝ without any restriction one gets (cf. [15]) ∆_H(Ŝ) = [Tr{Ŝ²}/N − (Tr{Ŝ}/N)²]/(N + 1), where the term in brackets obviously is the spectral variance of Ŝ and N denotes the dimension of the full system. At this point it simply appears plausible (which is of course far from being a proof) that the spectral variances of the above-defined Ŝ remain constant if one varies N but keeps the rates γ constant. Thus, for growing N the replacement becomes more and more justified. Such a scenario is in accord with the general ideas of quantum thermodynamics as presented in [15] and especially backed up by the numerical findings of Sect. 4.2.
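The 1/(N + 1) suppression of the unrestricted Hilbert space variance is easy to check by direct sampling; the following small Monte Carlo sketch (ours) draws Haar-distributed pure states and shows that the spread of ⟨ψ|Ŝ|ψ⟩ around the Hilbert space average shrinks roughly as 1/N for a fixed spectral variance of Ŝ.

```python
# Sample Haar-random pure states and measure the variance of <psi|S|psi> for a
# diagonal test observable with fixed spectral width, for increasing dimension N.
import numpy as np

rng = np.random.default_rng(5)

def haar_state(n):
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    return v / np.linalg.norm(v)

for N in (20, 200, 2000):
    spec = np.linspace(-1.0, 1.0, N)            # spectrum of S, spectral variance ~ 1/3
    S = np.diag(spec)
    vals = [np.vdot(psi, S @ psi).real for psi in (haar_state(N) for _ in range(300))]
    print(N, round(np.var(vals), 6))            # decreases roughly as 1/(N + 1)
```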
Conclusion
Explicitly exploiting the Hilbert Space Average Method (HAM) we have in essence shown that statistical relaxation may emerge directly from the Schrödinger equation. This requires the respective system to be coupled in an adequate way to a suitable environment. This environment must feature many eigenstates; there is, however, no minimum particle number. Thus the thermodynamic limit appears to be essentially controlled by the number of environmental eigenstates involved in the dynamics rather than by the number of environmental particles. This relaxation behavior results even for correlated initial states; nevertheless, standard open system methods may fail to produce the correct result.
We are indebted to H.-P. Breuer and G. Mahler for interesting discussions on this subject. Financial Support by the Deutsche Forschungsgemeinschaft is gratefully acknowledged.
Appendix A

[We already mention this here for later reference, cf. (75).] Assuming weak interactions (17) and short times τ, the time evolution D̂(τ, t) resulting from the Dyson series may be truncated at second order, i.e., approximated by D̂_2(τ, t), which is built from the two time evolution operators Û_1(τ) and Û_2(τ) (a single and a time-ordered double integral over the interaction, respectively). Note that the integration in (67) is time ordered, i.e., τ ≥ τ′ ≥ τ″. Furthermore, the first-order operator Û_1(τ) is Hermitian due to the Hermiticity of the interaction.
Appendix B: Correlation Functions
One has to analyze the traces on the right-hand side of (34). Let us therefore abbreviate those terms by S. Using (65) we obtain the expansion (68), where we used the Hermiticity of the operator Û_1.
To evaluate this complicated trace expression we will consider each order of time evolution operators in (68) separately, definingS Using (11) the zeroth order of (68) yields By a cyclic rotation within the trace the first order may be written as where we used (11) again. Concentrating on the first term, introducing the definition of the time evolution operator (66) and the interaction (63) one gets Due to the condition on the interaction (9) those terms are zero. We find an analogous result for the second trace of (71) and thus we finally end up with For the second order terms of (68) we get We concentrate first on the last term, plugging in the definition ofÛ 2 from (67) yields exploiting (63) and performing the trace with respect to S we find (76) Since the operators that generate the time-dependence of V (t) [cf. (64)] commute withΠ a ′ and due to the invariance of the trace with respect to cyclic permutations of the traced operators, the above "projected correlation functions" only depend on the difference between the time arguments of theV 's. Since then the integrand no longer depends on t, the t which appears in the integration boundaries may simply be set to zero. Hence one finds for the above expression As argued in the beginning the parts of the interaction are uncorrelated unless they are not adjoints of each other. This means that the above traces can only be nonzero for the case j = i ′ . Furthermore, one does the transformation (τ ′ − τ ′′ ) → τ ′′ , thus, Finally, plugging in the unit operator of the environment in terms of projection operators, one finds Comparing this to (37) we end up with Completely analogous we find for It remains the computation of the first term of (74). Using the same argumentation as before (the fact that there are no correlations between different parts of the interaction as well as a cyclic rotation within the trace operation) we find for the trace By the same arguments which are given below (75) this may be written independently of the absolute time t The (non-time-ordered) integration of the above expression may be written in terms of a time-ordered integration by adding the time-reversed integrand Shifting in the second term the time dependence to the otherĈ operator and expressing everything within the trace by its adjoint yields Since the trace of an adjoint operator is the complex conjugate of the original trace we may, after performing the same integral transformation described before (78), eventually write Tr{Û 1Pij,aÛ1Pi ′ j ′ ,a ′ } = δ ij δ i ′ j ′ 2 Re f ii ′ ,aa ′ (τ ) . |
225512400 | s2orc/train | v2 | 2020-08-06T09:08:01.170Z | 2020-07-31T00:00:00.000Z | Profile of Online Learning in Building Engineering Education Study Program During the COVID-19 Pandemic
Article Info. Article history: Submitted: June 18, 2020; Final Revised: June 23, 2020; Accepted: July 11, 2020; Published online: July 31, 2020.
ABSTRACT: This study discusses the profile of online learning during the COVID-19 pandemic: (1) the validity and reliability of the instrument; (2) how interesting the learning is for students; (3) the implementation of learning; (4) the strengths and weaknesses of the learning; and (5) applications that match the learning profile and the condition of the existing internet network. The participants of this study were students and lecturers supporting courses in the Building Engineering Education study program. Data collection used quantitative and qualitative methods. The questionnaire was given online to 67 students and 6 lecturers. The results of this research show that (1) the questionnaire instruments were tested as valid and reliable; (2) not all of the online learning was interesting; (3) online learning has been implemented, but some lecturers had problems when making corrections, and the condition of the internet network in some regions is not smooth, which is an obstacle for students in accessing the applications; and (4) the choice of application should match the online learning profile and the condition of the internet network in the area. The conclusion is that online learning applications must be easily accessible, easy to use, and interesting, and need to be combined with several other applications to ensure complete delivery and reception of the material in teaching and learning activities.
INTRODUCTION
Online learning for undergraduate students of Building Engineering Education during the COVID-19 period was conducted from the 7th meeting to the 15th meeting of the even semester of 2019/2020. Online learning was carried out according to the chancellor's circular on measures to prevent the spread of COVID-19. A variety of online applications were used to support teaching and learning activities, including E-learning, WhatsApp Group, Google Classroom, Zoom, and others. Preliminary data from initial observations revealed some input, suggestions, and obstacles experienced by lecturers and students related to the implementation of online learning, so an evaluation of the online learning profile that had been carried out was needed. Supporting factors that influence teaching and learning activities include the learning environment, lecturer competence, learning media, curriculum, teaching materials, facilities, and infrastructure. Teacher competency and adequate facilities and infrastructure are supporting factors for the successful implementation of learning (Indriani & Atiaturrahmaniah, 2019).

The selection of appropriate and interesting learning media influences students' interest and motivation to learn. The curriculum, teaching materials, facilities, and infrastructure also contribute substantially to learning success. Blended learning evaluation results that are valid, practical, and effective are appropriate for the development of learning (Hamid & Aras, 2020). Evaluation of learning can describe (1) whether the learning tools that have been used are categorized as good or not, (2) how the learning process is implemented, (3) the assessment categories, and (4) the minimum standards of passing (Tompong & Jailani, 2019). The learning process is closely related to learning resources and learning experiences. Examples of learning resources include people, the environment, events, experiences, media, teaching materials, social culture, and so on.

Learning resources can be presented in the form of learning media. Learning experiences in the cognitive, skill, and affective domains are interrelated. These three domains have a positive effect on learning behavior and support improvement of student learning outcomes (Hill et al., 2019). Problems that arose when online learning was carried out fully, without any face-to-face meetings in the classroom between lecturers and students, were that (1) some students and lecturers were not familiar with online learning, and (2) the internet network was not good. In addition, some complaints from students concerned the changing online learning schedules and implementations of online learning that were not suitable for students. Online learning must nevertheless be done to prevent the spread of COVID-19. This forces lecturers to continue to be more creative and innovative in learning. Criticism and complaints from students related to the implementation of online learning are used as input to continue making improvements for the best results. Creativity and innovation are demanded of lecturers in packaging learning, so that students do not get bored when studying online from home.

Several previous studies related to online learning have been conducted. Students can become interested in learning when using an E-learning application (Hogo, 2010). Using simulation models in learning activities leads to a better understanding of the material (Pfahl & Laitenberger, 2004). Students need time to adapt when innovative learning strategies are used (Savec & Devetak, 2013). Institutions that focus on student learning and career success do not use learning evaluation as a measure of the success of teaching effectiveness (Uttl et al., 2016). Learning modules using electronic media accompanied by discussion can improve concept understanding and problem solving (Wong et al., 2017). The use of books or modules can improve student skills and thus make learning easier (Güven, 2010).

Well-structured blended learning can provide benefits for students and teachers, so that learning runs more effectively and student learning outcomes can be improved compared to traditional learning (Yigit et al., 2014). Applications can be used efficiently to complete student assignments (Alomari et al., 2020). The competence possessed by the lecturer, learning that matches the needs of today's students, the process of lecturer-student interaction, and the relevance of the learning resources used must always be kept up to date. Inquiry-based modules presented in online learning can be effective to use (Mamun et al., 2020). Online learning using E-learning can increase student participation, motivation, and learning outcomes (Novo-corti et al., 2013). Web-based information technology that is presented in an interesting way, equipped with interactive features, and flexible is able to motivate students to learn (Vivien et al., 2020). Not all online learning implementations using E-learning succeed maximally; some are even unsuccessful. However, most studies reveal the positive effects and advantages of e-learning in online learning (Jorge et al., 2018). The characteristics of the learning model must be as needed, sophisticated, have a strong theoretical foundation, and be consistent (Limatahu & Mubarok, 2020).
Problem of Research
The research problem in this study was: what online learning profile is appropriate to apply to students during the COVID-19 pandemic?
Research Focus
The focus of this study was to find out what online learning profiles are appropriate to apply so that learning objectives can be achieved and students are able to take part in teaching and learning activities easily in online learning. The aim is to evaluate the online learning profile that was carried out during the COVID-19 pandemic in the even semester of 2019-2020.
RESEARCH METHOD
Data collection was carried out using a questionnaire that was given online to students and to the lecturers supporting the courses. The research sample consisted of 67 undergraduate students in Building Engineering Education, Department of Civil Engineering, Faculty of Engineering, Universitas Negeri Surabaya, drawn from the 2017, 2018, and 2019 cohorts. Data were obtained by asking respondents to fill out an evaluation form (Rosenblatt, 2004).
This research activity started with the preparation of the questionnaire instrument, which was carried out jointly with the team. Validity and reliability testing was done before distributing the data collection questionnaires; once the instrument had been declared valid and reliable, it could be used for data collection. Validity and reliability can be tested using Cronbach's alpha analysis (Pandiangan et al., 2017).
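As an illustration of this step, the short script below computes Cronbach's alpha for a respondents-by-items matrix of Likert-scale scores. It is only a sketch of the procedure, not the authors' analysis script, and the data shown are random placeholders; in practice the actual questionnaire responses would be loaded instead.

```python
# Cronbach's alpha for an (n_respondents x n_items) matrix of questionnaire scores.
import numpy as np

def cronbach_alpha(x):
    x = np.asarray(x, dtype=float)
    k = x.shape[1]                              # number of items
    item_var = x.var(axis=0, ddof=1).sum()      # sum of per-item variances
    total_var = x.sum(axis=1).var(ddof=1)       # variance of respondents' total scores
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(67, 12))      # placeholder: 67 students, 12 items, 1-5 scale
print(round(cronbach_alpha(scores), 3))         # values >= ~0.7 are commonly taken as reliable
```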
Data collection was done online via a Google Form after lectures had finished and student grades for the even semester of 2019/2020 had been entered; this was done so that students would feel free, without pressure, to give both positive and negative responses to the online learning that had been conducted. Data retrieval can also be done using qualitative and quantitative variables to analyze user perceptions of technology through questionnaires (Sa et al., 2019). In this study, the questionnaire data obtained from students and from the lecturers supporting the courses were further analyzed descriptively, both quantitatively and qualitatively.
RESULTS AND DISCUSSION
Before distributing the online questionnaires for data collection, testing of the validity and reliability of the instrument was conducted. Validity and reliability testing was used to test how valid and how consistent the measurements of the questionnaire would be when used to collect data on students and lecturers supporting courses in the Building Engineering Education study program. The recapitulation of the validity and reliability test results from the Building Engineering Education student questionnaire is presented in Table 1. The recapitulation of the validity and reliability test results from the questionnaire for lecturers supporting courses in the fields of education and civil engineering is presented in Table 2. The questionnaire for collecting data from 67 students contains items about student self-motivation, learning applications, learning implementation, evaluation of learning outcomes, and obstacles faced by students during online learning in the COVID-19 pandemic. The recapitulation of data from these questionnaires is presented in Table 3. Student self-motivation towards online learning, i.e., whether it was considered interesting and not boring, is presented in Figure 1. Whether online learning has been effective, interesting, easy to use, and convenient for the collection of assignments and for the midterm and final exams is presented in Figure 2. Whether the implementation of online learning was easy for students to access, whether the material provided by lecturers was easy to understand, whether classroom management was good, and the suitability of media and methods are presented in Figure 3. Evaluation of online learning (whether it made it easy to do the exercises, whether the assessments made by the lecturers were appropriate, whether learning outcomes increased, and whether there was feedback from the lecturers) is presented in Figure 4. Obstacles encountered by students during online learning, caused by internet network conditions, expensive data package fees, and unsupported devices, are presented in Figure 5. The questionnaire for the 6 lecturers supporting education and civil engineering subjects covers learning planning, learning implementation, evaluation of learning outcomes, and obstacles faced by lecturers during online learning in the COVID-19 pandemic. The results of the data recapitulation from the online questionnaire for the lecturers supporting the courses are presented in Table 4. Lecturer preparation (whether more planning was needed for online learning activities, whether the existing semester learning plans required development, whether learning materials and resources must be interesting, whether objective task preparation was required, and whether online learning is preferred) is presented in Figure 6. Whether online learning has been effective and interesting for lecturers and easy to use for correcting assignments, midterms, and final exams is presented in Figure 7. Whether the implementation of online learning was easy for lecturers to access, the ease of giving assignments, the ease of classroom management, the appropriateness of media and methods, and whether student activity was easily monitored are presented in Figure 8. Whether online learning evaluations were easy to do and easy to control, whether student learning outcomes improved, and whether lecturers provided feedback are presented in Figure 9.
This online learning research was supported by several previous studies, in which students can be interested when accessing E-learning learning (Hogo, 2010), simulation model learning was able to make the understanding of material obtained by students better (Pfahl & Laitenberger, 2004), Learning modules presented in electronic media combined with discussion methods were able to improve concept understanding and problem solving (Wong et al., 2017), the application of good blended learning can run more effectively (Yigit et al., 2014), the use of inquiry-based modules in online learning was more effective (Mamun et al., 2020), online learning using E-learning can increase participation, motivation, and learning outcomes (Novo-corti et al., 2013), Web-based information technology that was presented interesting, interactive features, and flexible able to motivate to learn (Vivien et al., 2020), and the implementation of online learning using E-learning has not been able to succeed optimally, it could even be unsuccessful, although most other studies have praised, and implemented e-learning because it has many advantages (Jorge et al., 2018).
The implementation of learning using E-Learning applications in the education system continues to develop, the application of technology using IT has been developed as a substitute for traditional learning (Khan & Alotaibi, 2020). Learning implementation with E-learning using the google classroom application was recommended because it can increase interest in learning (Ansong-gyimah, 2020). There was an increase in the learning performance of students when using virtual learning (Al-azawei & Al-masoudy, 2020). One of the solutions offered to overcome the weaknesses that arise from the implementation of problem-based learning, that was needed a form of innovative learning development so that Blended Web Mobile learning was integrated with the Hybrid Learning model and the problem-based learning model that was supported by the use of the MoLearn application (Prahani et al., 2020). Integration of the use of cellular-based technology for learning E-learning can provide positive responses, ease of learning material, and can improve the skills (Elsofany et al., 2020). E-learning was felt by users to be very useful if it was able to meet the quality of the appropriate content, support learning activities, proper assessment, and ease of accessing the learning system were fulfilled (Rutter & Smith, 2019). Educators must provide information related to how to access learning content through a blended learning system, provide usefully, latest, and interesting content for users (Rutter & Smith, 2019).
The use of mobile learning applications can improve academic performance and provide increased collaboration between students (Blilat & Ibriz, 2020). Appropriate learning innovations were able to foster independence and self-confidence, the impact of implementing online learning made students more actively communicating in online classes, students become enthusiastic, actively participate, and more confident (Tubagus & Muslim, 2020). The implementation of learning using this platform can be more interactive and can be used in other fields of education (Hazim, 2020). Universitas Negeri Surabaya has been an online learning application in the form of Vi-learning. This application has a variety of features that are easy to use. This application access was lightweight, can be accessed not necessarily at that time if the lecturer has allowed the settings. The material can be accessed or downloaded beforehand and the collection of tasks was left submitted. This application was able to accommodate students who have been programmed according to the schedule. The weakness of this application there was no video call menu.
WhatsApp groups are the easiest application to use: they are efficient, and the data packages needed to access them are not too heavy. This application is a social medium familiar to the majority of students. Its weakness is that video calls for classroom learning are still limited and cannot be used for large classes; chat and documents dominate its use in learning. Google Classroom improves students' writing performance and skills, and students respond positively because of its ease of use and access (Albashtawi et al., 2020). Google Classroom is more effective and has a positive influence on behavioural intentions to learn (Al-maroof & Al-emran, 2018). Learning with Google Classroom provides an attractive appearance, so most students feel increasingly interested in material presented with this application, and access to learning is easy because it is website based. The preparation needed is mainly in setting up the class; for older lecturers this becomes an obstacle because they are not familiar with the application. Students complained that learning with the Zoom application is hard to access from rural areas where the internet network is poor, which is the main obstacle to following online learning; some students even had to move to other locations to obtain a good internet connection. The advantages students feel when using Zoom are that learning becomes more interesting and the material delivered by lecturers is easy to receive. Zoom can accommodate students online simultaneously with lecturers in large classes, and it can display presentations by lecturers as well as exercises in the form of student skills tests.
After testing the questionnaire, the results showed that it was valid and reliable, so it could be used for data collection. Based on the questionnaire data collected, not all of the online learning that has been carried out is interesting for most students, which makes them bored quickly. In addition, online learning must make it easy to receive materials, submit assignments, do exercises, carry out question-and-answer interactions, and hold discussions during learning activities. Online learning was implemented from meeting 7 to meeting 15, but some lecturers had problems when marking work because they had to download the files one by one. The advantage of online learning is that it can be accessed from any place as long as an internet connection is available. Its weakness is that certain applications require a strong network signal, which becomes an obstacle for students accessing them from rural areas with poor internet connections.
The questionnaire results showed that students could not be sure that their learning outcomes improved after participating in online learning. Lecturers likewise cannot be sure that the submitted work is the students' own. The applications used must match the online learning profile and the characteristics of the course, and must be managed so that learning is interesting and not boring. The alternative solutions offered for online learning in the Building Engineering Education study program are: (1) use a lightweight application that is easily accessed over the internet and, if teaching and learning activities require tutorials, provide them as downloadable videos so students do not need to be online during the lesson; (2) hands-on tests using a video call application are still needed to check students' actual abilities, but these tests should be conducted individually rather than in a large online class; and (3) combine several online learning applications to keep learning interesting and not boring.
CONCLUSIONS
The conclusions obtained from the results and discussion are: (1) the questionnaire used was valid and reliable; (2) not all online learning was interesting, and uninteresting learning made students bored easily; online learning must make it easy to receive material from lecturers, submit assignments, do exercises, and carry out question-and-answer and discussion activities during learning; (3) online learning has been implemented, but some lecturers have problems when marking work; the advantage of online learning is that it can be accessed anywhere as long as it is connected to the internet, while its weaknesses are that certain applications require strong network signals, which sometimes becomes an obstacle for students living in rural areas with substandard internet connections, and that student work cannot be relied upon to measure whether learning outcomes have improved; (4) the application used should suit the online learning profile and the condition of the internet network in the area; the selected application must match the course and be managed as well as possible so that it is interesting and not boring for students. The online learning applications in the Building Engineering Education study program must therefore be easily accessible, easy to use, and interesting, and several different applications should be combined so that lecturers can deliver material well and students can receive it during teaching and learning activities. The results of this study are expected to have the following implications: (1) as a reference for implementing online learning; (2) to add knowledge, contribute ideas, and provide study material for lecturers who wish to study the implementation of online learning further; and (3) online learning is appropriate to implement as a preventive measure against the spread of COVID-19. Suggestions for further research are: (1) special instruments are needed to measure student learning outcomes when implementing online learning; and (2) further research |
9776600 | s2orc/train | v2 | 2018-04-03T00:11:25.846Z | 2002-07-15T00:00:00.000Z | Giant diffusion and coherent transport in tilted periodic inhomogeneous systems
We investigate the dynamics of an overdamped Brownian particle moving in a washboard potential with a space-dependent friction coefficient. Analytical expressions have been obtained for the current and the diffusion coefficient. We show that the effective diffusion coefficient can be enhanced or suppressed compared to that of the uniform friction case. The diffusion coefficient is maximum near the critical threshold ($F_{c}$), which is sensitive to temperature and the frictional profile. In some parameter regimes we observe that an increase in noise (temperature) decreases the diffusion, which is counter-intuitive. This leads to coherent transport with large mean velocity accompanied by small diffusion. This is shown explicitly by an analysis of the P\'{e}clet number, which has been introduced to study coherent or optimal transport. This phenomenon is complementary to giant diffusion.
I. INTRODUCTION
In recent times there has been renewed interest in the study of transport properties of Brownian particles moving in periodic potentials [1], with special emphasis on coherent transport and giant diffusion [2,3]. The phenomenon of coherent or optimal transport is complementary to enhanced diffusion: there one is mainly concerned with transport currents with minimal dispersion or diffusion [4]. Compared to the free diffusion coefficient (DC, D = k_B T/γ), the DC is suppressed in the presence of a periodic potential. However, in a nonequilibrium case, i.e., in the presence of a bias, it has recently been shown that the DC can be made arbitrarily large (giant diffusion) compared to the bare diffusion, even in the presence of a periodic potential [2]. This enhancement at low temperature takes place near the critical threshold (at which the deterministically running solution sets in). The reason for this enhancement has been attributed to the instability between the locked and running solutions. In some cases enhancement by fourteen orders of magnitude has been predicted, so that diffusion can be observed on a macroscopic scale at room temperature [2]. The enhancement decreases as we move away from the critical field in either direction. Exact results for the DC in arbitrary potentials have been obtained in terms of quadratures. In special cases an elegant simplification of the quadratures has been carried out. Near the critical tilt, the scaling behavior of the DC for weak thermal noise has been obtained and different universality classes have been identified [2]. An approximate expression for the DC in terms of the mobility had been obtained earlier, which deviates from the exact results near the critical threshold [3].
In a related development, a study of coherent or optimal transport has been reported recently [4]. Coherent or optimal transport of Brownian particles refers to the case of a large mean velocity accompanied by minimal diffusion. This can be quantified by the dimensionless Péclet number (the ratio of mean velocity to DC). The transport is most coherent when this number is maximal. The particle motion is mainly determined by two time scales: the noise-driven escape from a potential minimum over the barrier along the bias, followed by the relaxation into the next minimum. The former depends strongly on temperature, while the latter depends weakly on the noise strength and has a small variance. It is possible to obtain coherent transport in the parameter regime in which the traversal time between two consecutive minima of the washboard potential is dominated by the relaxational time. At optimal noise intensity a certain regularity of the particle motion is expected, which accounts for the maximum of the Péclet number. In some cases (e.g., molecular separation devices) higher reliability requires higher currents but with less dispersion (or DC) [5]. This effect of coherent transport is related to the phenomenon of coherence resonance [6] observed in excitable systems [4].
In the present work we study both of the above phenomena in a space-dependent frictional medium. For this we consider a simple minimal model in which the potential is sinusoidal and the friction coefficient is also periodic (sinusoidal) with the same period, but shifted in phase. Frictional inhomogeneities are not uncommon in nature; we mention a few examples. Brownian motion in confined geometries and porous media experiences space-dependent friction [7]. Particles diffusing close to a surface have a space-dependent friction coefficient [7,8]. It is believed that molecular motor proteins move close along the periodic structure of microtubules and will therefore experience a position-dependent friction [9].
Inhomogeneities in mobility (or friction) occur naturally in superlattice structures and Josephson junctions [10].
Frictional inhomogeneity changes the dynamics of the diffusing particle non-trivially, thereby affecting the passage times in different regions of the potential. However, it does not affect the equilibrium distribution. Thus thermally activated escape rates and relaxational rates within a given spatial period are affected significantly. This in turn has been shown to give rise to several counter-intuitive phenomena, among them stochastic resonance in the absence of an external periodic drive [11] and noise-induced stability in a washboard potential [12]. Single and multiple current reversals in adiabatically [13] and nonadiabatically rocked systems (thermal ratchets [14]), respectively, have also been reported [15]. In these ratchet systems the efficiency of energy transduction in the finite-frequency regime may exceed the efficiency in the adiabatic regime, i.e., quasistatic operation may not be efficient for converting input energy into mechanical work [16]. All of the above features are absent in the corresponding homogeneous case for the same simple potential.
In our present work we show that frictional inhomogeneities can give rise to additional new features in a tilted periodic potential. The observed giant enhancement of DC near the critical tilt can be controlled (enhanced or suppressed) in a space-dependent frictional medium by suitably choosing the phase difference between the potential and the frictional profile. The most surprising feature is the noise-induced suppression of diffusion, leading to coherent transport. Our results are based on analytical expressions for the Péclet number and DC in terms of the moments of first passage times.
In Section II we present our model and derive the expressions for the moments of first passage times, in terms of which the DC and the Péclet number are defined. In Section III A we analyze the behaviour of the DC as a function of system parameters, such as the applied external force and temperature. Section III B is devoted to the study of coherent or optimal transport in different regimes of parameter space. Finally, in Section IV we present a summary of our findings.
II. MODEL
We consider an overdamped Brownian particle moving in a symmetric one-dimensional sinusoidal potential V_0(x), tilted by a constant force F, in a medium whose friction coefficient η(x) is sinusoidal with the same period; φ denotes the phase shift between the friction coefficient and the potential. The correct Langevin equation for such systems has been derived earlier from microscopic considerations [17] and is given by

ẋ = −V′(x)/η(x) − k_B T η′(x)/[η(x)]² + √(k_B T/η(x)) ξ(t),   (1)

with V(x) = V_0(x) − Fx, where ξ(t) is a zero-mean Gaussian white noise with correlation ⟨ξ(t)ξ(t′)⟩ = 2δ(t − t′). It should be noted that the above equation involves a multiplicative noise with an additional temperature-dependent drift term, which turns out to be essential for the system to approach the correct thermal equilibrium state in the absence of nonequilibrium forces [18]. The quantity of central interest in this work is the effective diffusion coefficient D, given by D = lim_{t→∞} [⟨x²(t)⟩ − ⟨x(t)⟩²]/(2t). In the absence of the potential, D = k_B T/η (uniform η), which is the usual Einstein relation. Exact analytical expressions for D and the average current J in terms of the moments of the first passage time have been given recently [2,4]. If the n-th moment of the first passage time between appropriately chosen points is introduced, then for our problem (1) the moments of the first passage time follow a closed recursion relation [2,19].
A quantity appearing in these expressions happens to be the average value of the friction coefficient over a period.
By using Eq. (3) and Eq. (4), and some straightforward algebra, we obtain the expression for D, Eq. (5), in terms of the quantities defined in Eqs. (6). The average current density J for this system has been derived earlier [12]; in terms of Eqs. (6) it is given by Eq. (7). The above expressions go over to the results obtained earlier for the case of space-independent friction (η(x) = η_0, λ = 0) [2]. It should be noted that Eqs. (6) are applicable when η(x) and V(x) have the same periodicity; otherwise they have to be modified appropriately.
We obtain results for DC by numerically integrating Eqs. (6) using a globally adaptive scheme based on Gauss–Kronrod rules. For the special case of V_0(x) = 0 a closed-form expression for DC follows. We would like to mention here that in the absence of the potential, DC explicitly depends on the system inhomogeneities (via λ); however, the steady current is independent of λ in the same case [12]. F = 0 is the equilibrium situation and, as expected, D = D_0, which corroborates the fact that frictional inhomogeneities cannot affect the equilibrium properties of the system. In the high-temperature regime, D = D_0 as anticipated. For asymptotically large fields, DC saturates to a λ-dependent value. This is solely attributed to the space-dependent friction. This somewhat surprising result also appears in the dependence of the current on λ in the presence of the potential in the high-field limit [13,15].
In our subsequent analysis all physical quantities are expressed in dimensionless units: DC is scaled with respect to D_0 or V_0/η_0, and T is scaled with respect to V_0, where V_0 is half the potential barrier height (set equal to one). The period of the potential is L = 2π.
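As a rough illustration of these dynamics, the following minimal Python sketch integrates the Langevin equation (1) with an Euler–Maruyama scheme and estimates the mean velocity, the effective diffusion coefficient and the Péclet number from an ensemble of trajectories. It is not part of the paper's quadrature method: the explicit forms V_0(x) = −sin(x) and η(x) = 1 − λ sin(x + φ) (including the sign and phase convention), the values of T and F, and the crude long-time estimators are all assumptions chosen only to match the dimensionless units quoted above.

```python
import numpy as np

# Dimensionless units as in the text: V_0 = 1, eta_0 = 1, k_B = 1, period L = 2*pi.
rng = np.random.default_rng(0)

lam, phi = 0.9, 1.6 * np.pi     # friction modulation and phase (values quoted in Sec. III A)
T, F = 0.1, 0.9                 # temperature and tilt (illustrative choices)
dt, n_steps, n_traj = 1e-3, 100_000, 400

def eta(x):                      # assumed friction profile: eta_0 * (1 - lam*sin(x + phi))
    return 1.0 - lam * np.sin(x + phi)

def eta_prime(x):
    return -lam * np.cos(x + phi)

def minus_v_prime(x):            # -V'(x) for the assumed V(x) = -sin(x) - F*x (barrier height 2)
    return np.cos(x) + F

x = np.zeros(n_traj)
for _ in range(n_steps):         # Euler-Maruyama step of Eq. (1)
    e = eta(x)
    drift = minus_v_prime(x) / e - T * eta_prime(x) / e**2
    # the factor 2 in the noise variance follows from <xi(t) xi(t')> = 2 delta(t - t')
    x += drift * dt + np.sqrt(2.0 * T * dt / e) * rng.standard_normal(n_traj)

t_total = n_steps * dt
v_mean = x.mean() / t_total                  # mean velocity <xdot>
D_eff = x.var() / (2.0 * t_total)            # crude long-time estimate of D
peclet = 2.0 * np.pi * v_mean / D_eff        # Pe = L <xdot> / D with L = 2*pi
print(f"<v> = {v_mean:.3f}, D = {D_eff:.3f}, Pe = {peclet:.2f}")
```

A quantitative comparison with the quadrature results of Eqs. (5)–(7) would require much longer runs and averaging over initial conditions; the sketch only shows how the multiplicative noise and the temperature-dependent drift enter the simulation.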
III. RESULTS AND DISCUSSIONS
A. Diffusion Coefficient
Though the system response to the applied field is generally characterized by the stationary current density J = L⟨v⟩, this directed motion (or average position of the particle) is accompanied by dispersion due to the inherent stochastic nature of the transport. It has been shown previously that in a homogeneous medium this dispersion or diffusion becomes very large (giant enhancement of DC) near the critical tilt. This enhancement in DC can be orders of magnitude larger than the bare DC in the absence of the potential. We make a systematic study of this phenomenon in the presence of system inhomogeneities. Though our parameter space is large, we restrict ourselves to a narrow relevant domain where we observe effects that are surprising and arise due to the system inhomogeneities. In Fig. 1 we plot DC as a function of the external tilt F for different values of temperature T (λ = 0.9 and φ = 1.6π). It can be seen from the figure that DC exhibits a maximum as a function of F. However, the quantitative details depend sensitively on system parameters such as φ, T and λ. It can readily be noticed from curves A and B that DC is enhanced by more than a factor of 2 as compared with the homogeneous case. On lowering the temperature the relative enhancement of DC increases further. DC can also be suppressed by properly tuning φ. The force at which diffusion peaks (F_p) has also been studied as a function of φ at λ = 0.9 for T = 0.1 and 0.01. For T = 0.1 the peak can occur at a value as low as F = 0.8. Since F = 0.8 is away from the critical threshold, the enhancement of DC in this regime has to be attributed to the space-dependent friction. This is a clear example of a system inhomogeneity affecting the dynamical evolution of the particle nontrivially. This will be discussed at the end of this section to explain many of our observations. We would also like to emphasize that the critical threshold is not altered at temperature T = 0 for our present case, as shown earlier [12].
Since with increasing tilt the barrier to forward motion decreases (thereby reducing the exponential suppression of DC in the presence of the periodic potential), it is natural to expect that D/D_0 will increase with increasing F (for F below the barrier height). This is amply reflected in Ref. [2], which corresponds to our λ = 0 case. In the presence of space-dependent friction, the friction experienced in the region between a potential minimum and the adjacent barrier top along the bias is smaller. As opposed to this, η(x) is higher between the barrier top and the next potential minimum along the bias, thus slowing down the relaxational motion to the next minimum. This naturally enhances the dominance of the relaxation time scale over the barrier crossing rate.
This qualitatively explains our observed behaviour.
B. Péclet number and coherent motion
By coherent motion we mean a large particle current with minimal diffusion. This property can be quantified by the dimensionless Péclet number Pe, defined as [4] Pe = L⟨ẋ⟩/D, where L⟨ẋ⟩ is the average current density J. The expression for the current density is given in Eq. (7). We make use of expressions (5) and (7) to calculate Pe. The parameter values at which Pe shows maxima correspond to the most coherent motion for that particular model; the higher the Pe, the more coherent the transport. It should be noted that Pe can show maxima even though neither J nor D shows an extremum. Fig. 5 illustrates a condition of transport where increasing current is accompanied by decreasing DC. This is aptly reflected in the Pe, which shows an enhancement (coherent motion or optimal transport) by an order of magnitude as compared with the corresponding uniform-friction case. The Péclet number depends sensitively on the phase factor φ, and it can also be suppressed (diffusion dominating the transport), which we have not reported here. Hence we can control the degree of coherent motion.
IV. CONCLUSIONS
We have thus shown that both giant diffusion and coherent transport in a tilted periodic potential are sensitive to the frictional inhomogeneities of the medium. To analyze this problem we have taken a simple sinusoidal potential and a periodic frictional profile with the same periodicity but with a phase difference. Depending on the system parameters, the value of DC near the critical threshold and the Péclet number (indicating coherence of the transport) can be enhanced or decreased by an order of magnitude. Both these complementary effects are important for practical applications [2,5]. The regime where we observe the optimal transport is accompanied by a decrease of DC with temperature, an aspect which is absent in the corresponding homogeneous case. We have focussed on a restricted parameter regime to highlight the most interesting results arising due to system inhomogeneities in systems with a simple potential. It is known that in the present model, depending on the system parameters, the current decreases with temperature, an effect akin to noise-induced stability [20]. However, in this regime we have not observed any dramatic effect on the DC or the Péclet number. It is not clear whether noise-induced stability (NIS) can enhance the coherence of the motion.
The phenomenon of stochastic resonance (SR) in the absence of an external ac field [11] is seen in this model. SR is characterized by the observation of a peak in the particle mobility as a function of system parameters such as T and F in a certain region of parameter space. The analysis of the Péclet number in this parameter space does not show any surprising features that would correlate with SR. This is due to the fact that SR occurs in the high-T or high-F regime, where barriers to motion are absent. To observe a peak in the Péclet number the existence of a barrier seems to be essential. Clarifying the relation between SR, NIS and the Péclet number requires further detailed analysis.
246275930 | s2orc/train | v2 | 2022-01-26T02:16:29.524Z | 2022-01-25T00:00:00.000Z | Search for $W^{\prime} \to tb$ decays in the fully hadronic final state with the ATLAS experiment
A search for a new heavy boson $W^{\prime}$ in proton-proton collisions at $\sqrt{s}$ = 13 TeV is presented. The search focuses on the decay of the $W^{\prime}$ to a hadronic top quark and a bottom quark, using the full Run 2 dataset of the ATLAS detector. The hadronic decay of the top quark is identified using DNN-based boosted-object techniques. The dominant background is estimated with a data-driven method that has small systematic uncertainties. The results are presented as upper limits on the production cross-section times decay branching ratio for a $W^{\prime}$ boson with right-handed couplings that decays to a top quark and a bottom quark, for several $W^{\prime}$ masses between 1.5 and 5 TeV.
Introduction
A new heavy charged vector boson W′ appears in various BSM scenarios such as the Little Higgs models [1] and extra-dimensional models [2]. In this analysis, the ATLAS collaboration [3] conducts a resonance search for the W′ in the all-hadronic tb final state (tb̄ and t̄b). The tb channel is sensitive to W′ candidates with larger couplings to the third generation [4,5,6,7] or to quarks [8]. Hadronically decaying top quarks are reconstructed as R = 1 jets and are identifiable via jet substructure techniques [9].
The jet definitions and the invariant mass m_tb
Using the Run-2 ATLAS dataset, corresponding to 139 fb⁻¹, this search considers proton-proton collision events with no leptons, at least one large-R (R = 1) jet with pT > 500 GeV, and a small-R (R = 0.4) jet with pT > 500 GeV in the opposite direction, as viewed in the plane perpendicular to the proton beams. The two jets correspond to the top quark and the bottom quark from the W′ decay. Their four-momenta are summed to form the invariant mass m_tb, which is expected to have a smoothly falling distribution for the SM backgrounds. By identifying the jets associated with the top and bottom quarks, a peaking structure around the W′ mass becomes significant.
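As an illustration of how the discriminating variable is built, the following minimal Python sketch forms m_tb from the four-momenta of the two selected jets. It is not ATLAS software: the helper functions, the (pt, eta, phi, m) tuples and all numerical values are purely illustrative.

```python
import math

def p4(pt, eta, phi, m):
    """Return (E, px, py, pz) in GeV for a jet given its pt, eta, phi and mass."""
    px, py = pt * math.cos(phi), pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    E = math.sqrt(px**2 + py**2 + pz**2 + m**2)
    return E, px, py, pz

def m_tb(top_jet, b_jet):
    """Invariant mass of the top-candidate large-R jet plus the b-candidate small-R jet."""
    E1, px1, py1, pz1 = p4(*top_jet)
    E2, px2, py2, pz2 = p4(*b_jet)
    E, px, py, pz = E1 + E2, px1 + px2, py1 + py2, pz1 + pz2
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

# Illustrative event: two roughly back-to-back jets, both above the 500 GeV threshold.
top_jet = (950.0, 0.3, 0.1, 172.5)            # (pt, eta, phi, m) of the large-R top candidate
b_jet = (900.0, -0.2, 0.1 + math.pi, 10.0)    # (pt, eta, phi, m) of the small-R b candidate
print(f"m_tb = {m_tb(top_jet, b_jet):.0f} GeV")
```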
The identification of jets associated with a top quark or a bottom quark
The pattern of calorimeter cluster energies [10] within a large-R jet facilitates top-quark tagging. Jet substructure variables such as N-subjettiness [11,12] distinguish the energy deposits from the three quarks of the top decay from the QCD radiation of a light parton. Several substructure variables, the jet mass, and pT are combined via a Deep Neural Network (DNN) into a top-tagging score [13]. A large-R jet with pT > 500 GeV that passes the 80% Working Point (80% WP, the score threshold corresponding to an 80% top-tagging efficiency) is designated a top-candidate jet. Events with more than one top-candidate jet are rejected to suppress the top-quark pair (tt̄) background. In contrast, events without a top-candidate jet but with a large-R jet (pT > 500 GeV) whose DNN score is above e⁻⁷ are reserved for separate analysis regions; such large-R jets are called top-proxy jets.
The highest-pT small-R jet among those with pT > 500 GeV and ∆φ > 2 from the top-candidate jet is declared the b-candidate jet. Each top-proxy jet is likewise paired with a b-candidate jet. Despite bearing the letter b in its name, the b-candidate jet is not required to be b-tagged at this stage; b-quark identification is employed later for region assignment. The ATLAS b-tagging algorithm DL1r [14] reconstructs b-hadron decay vertices using charged-particle tracks around the small-R jet axis. The b-tagging requirement applied to the b-candidate jet is the 85% WP. Since the top-decay products include a b-quark, small-R jets with pT > 25 GeV inside the top-candidate (top-proxy) jet are checked against the same b-tagging requirement.
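The pairing logic described above can be sketched as follows. The event representation (a list of jet dictionaries) and the function names are hypothetical, while the pT > 500 GeV and ∆φ > 2 requirements are those quoted in the text.

```python
import math

def delta_phi(phi1, phi2):
    """Wrap the azimuthal difference into [-pi, pi]."""
    return (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi

def select_b_candidate(small_jets, top_candidate_phi):
    """Pick the highest-pT small-R jet with pT > 500 GeV and |delta-phi| > 2 from the top candidate.

    `small_jets` is assumed to be a list of dicts with 'pt' (GeV) and 'phi' (rad).
    Returns the chosen jet, or None if no jet passes the requirements.
    """
    candidates = [j for j in small_jets
                  if j["pt"] > 500.0 and abs(delta_phi(j["phi"], top_candidate_phi)) > 2.0]
    return max(candidates, key=lambda j: j["pt"], default=None)

# Illustrative usage: only the first jet is both hard enough and back-to-back with the top candidate.
jets = [{"pt": 620.0, "phi": 2.9}, {"pt": 550.0, "phi": 0.4}, {"pt": 480.0, "phi": 3.0}]
print(select_b_candidate(jets, top_candidate_phi=0.0))
```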
4 Data-driven background estimation and region assignment
Figure 1: The Signal Regions (SR1 to SR3), the Validation Region (VR), the Template Regions (TR1 to TR4), and the Control Regions (upper: CR1a to CR4a; lower: CR1b to CR4b) are shown with the cuts and selections. Taken from Ref. [15].
The dominant background in this analysis is QCD multi-jet production, consisting mainly of light-parton jets. A data-driven estimate of this background is adopted under the assumption that the b-tagging probability for b-candidate jets is unchanged whether they are paired with top-proxy or top-candidate jets. The tables in Figure 1 show the regions defined by the top-tagging and b-tagging configuration, allowing the QCD multi-jet yield in each of the Signal Regions (SR1 to SR3) to be estimated using data events from the neighboring Template Regions (TR1 to TR3) and the Control Regions (CR1a to CR4a) below them. These Control Regions require a top-tagging score > e⁻⁴ for the top-proxy jets so that their parton flavor composition approaches that of the Signal and Template Regions. The two tables differ by the number of b-tagged small-R jets inside the top-candidate (top-proxy) jet: none for the '0 b-tag in' category and at least one for the '1 b-tag in' category. This b-tag requirement increases the contribution from heavy flavor, affecting the b-tagging rate of the b-candidate jet.
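A highly simplified sketch of the data-driven idea is given below. It assumes a single transfer factor (the b-tag versus no-b-tag ratio of the b-candidate jet measured in the top-proxy control regions) applied to a template region after subtracting the non-QCD contribution estimated from simulation (see the following paragraph); the real analysis uses the more granular region structure of Figure 1, and all numbers here are invented for illustration.

```python
def qcd_estimate(n_tr_data, n_tr_ttbar_mc, n_cr_tagged, n_cr_untagged):
    """ABCD-style estimate of the QCD multi-jet yield in a signal region.

    Assumes the b-tagging rate of the b-candidate jet, measured in the top-proxy
    control regions (tagged vs. untagged), also applies to the top-candidate
    template region once the simulated ttbar contamination is subtracted.
    This is a coarse stand-in for the analysis' actual region pairing.
    """
    transfer_factor = n_cr_tagged / n_cr_untagged       # b-tag / no-b-tag ratio from the CRs
    return (n_tr_data - n_tr_ttbar_mc) * transfer_factor

# Illustrative yields only (not from the paper)
print(qcd_estimate(n_tr_data=12500, n_tr_ttbar_mc=800, n_cr_tagged=2100, n_cr_untagged=26000))
```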
For the Template Regions (TRs), the only significant background other than QCD multi-jet production is tt̄ production. The tt̄ contribution estimated by event generators has to be subtracted from the TRs, as the Control Regions cannot account for it. All other backgrounds are included in the data-driven background, which is dominated by QCD multi-jet production. The DNN cut of e⁻⁷ for the lower Control Regions (CR1b to CR4b) corresponds to the parton-flavor variations between top-proxy and top-candidate jets observed in multi-jet simulation. The double ratios between the two rows of Control Regions are calculated in data to estimate the systematic uncertainties of the data-driven estimate. The tighter 50% WP of top-tagging is applied to top-candidate jets, yielding a signal-sensitive region SR1 and a signal-depleted Validation Region for cross-checking the data-driven method before unblinding the data in the SRs.
Figure 2 (caption, partial): The blue hatched band shows the systematic plus statistical uncertainty of the total background post-fit. The pre-fit total background is overlaid as blue dashed lines. The W_R signal shown in the red dashed histograms assumes the expected cross-section. Taken from Ref. [15].
The data-driven background and the MC-estimated tt̄ background are fit to the m_tb distribution in data with a profile-likelihood function to constrain the systematic uncertainties. The post-fit distribution is compared with data in Figure 2 for the region SR1. The data-driven background (pale red) is plotted above the all-hadronic tt̄ (green) and the semileptonic plus dileptonic tt̄ (yellow). Their sum is consistent with the data within the uncertainty bands, meaning that no significant presence of the W′ signal is observed. The overlaid right-handed W′ signal is not included in this background-only fit.
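To make the fitting step concrete, here is a toy binned background-only fit in Python with a single nuisance parameter constrained by a Gaussian term. The binning, yields and the 10% constraint are invented and bear no relation to the actual ATLAS likelihood model, which profiles many nuisance parameters.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, poisson

# Toy m_tb-like spectrum: observed counts and a nominal background prediction per bin.
data = np.array([520, 310, 180, 95, 40, 18, 7, 3])
bkg_nominal = np.array([500, 300, 175, 100, 45, 20, 8, 3], dtype=float)

def nll(theta):
    """Negative log-likelihood with one background-normalisation nuisance parameter."""
    mu_b = theta[0]
    expected = mu_b * bkg_nominal
    pois = -poisson.logpmf(data, expected).sum()          # Poisson term per bin
    constraint = -norm.logpdf(mu_b, loc=1.0, scale=0.10)  # 10% Gaussian constraint
    return pois + constraint

result = minimize(nll, x0=[1.0], method="Nelder-Mead")
print("fitted background normalisation:", result.x[0])
```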
6 Exclusion upper limit
Figure 3: The exclusion upper limit on the W_R production cross-section times the tb decay branching ratio at the 95% confidence level as a function of the W_R mass. The red band includes the theory uncertainties from the parton distribution functions, the strong coupling constant, the renormalization and factorization scales, and the top-quark mass. Taken from Ref. [15].
We compute the exclusion upper limit for several W′ masses from 1.5 to 5 TeV at the 95% Confidence Level, as depicted in Figure 3. The black line is the observed limit calculated with data; the blue dashed line is the expected limit, obtained by treating the pre-fit background as the expected data. The green (yellow) band corresponds to the one-sigma (two-sigma) uncertainty on the expected limit. The red line follows the theoretical cross-section for a right-handed W′ with SM-like electroweak coupling, computed with the ZTOP framework [16,17]. As the observed limit excludes cross-sections above the curve, the W_R signal is excluded at the 95% Confidence Level up to 4.4 TeV. For more details about this analysis, please refer to the conference note [15].
237516940 | s2orc/train | v2 | 2021-09-16T06:23:24.379Z | 2021-09-14T00:00:00.000Z | Regional heterogeneity in coral species richness and hue reveals novel global predictors of reef fish intra-family diversity
Habitat heterogeneity shapes biological communities, a well-known process in terrestrial ecosystems but substantially unresolved within coral reef ecosystems. We investigated the extent to which coral richness predicts intra-family fish richness, while simultaneously integrating a striking aspect of reef ecosystems—coral hue. To do so, we quantified the coral richness, coral hue diversity, and species richness within 25 fish families in 74 global ecoregions. We then expanded this to an analysis of all reef fishes (4465 species). Considering coral bleaching as a natural experiment, we subsequently examined hue's contribution to fish communities. Coral species and hue diversity significantly predict each family's fish richness, with the highest correlations (> 80%) occurring in damselfish, butterflyfish, emperors and rabbitfish, lower (60–80%) in substrate-bound and mid-water taxa such as blennies, seahorses, and parrotfish, and lowest (40–60%) in sharks, morays, grunts and triggerfish. The observed trends persisted globally. Coral bleaching's homogenization of reef colouration revealed hue’s contribution to maintaining fish richness, abundance, and recruit survivorship. We propose that each additional coral species and associated hue provide added ecological opportunities (e.g. camouflage, background contrast for intraspecific display), facilitating the evolution and co-existence of diverse fish assemblages.
A principal ecological attribute of coral reef communities is their spectral variability. This comprises the widely recognized diversity of colours and patterns that vary ontogenetically within and among fish species 14,27,28. It also includes the hue diversity imparted by scleractinian corals. Produced by varying concentrations of host-based fluorescent proteins and endosymbiotic dinoflagellate pigments 29,30, coral hue contributes a kaleidoscope of colours to the ecological backdrop 9,14,31. The heterogeneous distributions of coral species globally, combined with the various configurations of con- and hetero-specific corals within any particular region, cause colour diversity to vary considerably across coral reef ecosystems 30,32,33. However, the connection between taxonomic richness and habitat hue is unsubstantiated within global reef communities. This limitation has endured despite the recognition that coral reef teleost fish exhibit mono- to penta-chromacy, and perceive, albeit at different sensitivities, the colouration of corals [33][34][35][36][37]. At short distances, fish colour vision functions similarly to terrestrial equivalents, with pigmentation being an informative component of perception, but as distances increase, detection of visual contrasts relative to background colouration becomes increasingly important 12,14,15,33. As fishes interact amongst varying coral communities and over a range of distances, the interplay between fish communities and coral colouration may represent a critical aspect of coral reef ecosystems 12,14,33.
In this paper, we examine the extent to which coral species richness predicts fish species richness within each of the 25 most common fish families found on reefs, for 74 ecoregions across the Pacific, Indian and Atlantic oceans 26,38. Secondly, we apply a novel method of quantifying digital image colouration to Corals of the World's 784 coral species images and enumerate hue diversity within each of the 74 ecoregions 38,39. We then integrate these data into the analyses of fish and coral taxonomic diversity by evaluating coral hue diversity as a covariate facilitating fish diversity, which may function concurrently with or independently of coral species richness 40. We then expand this analysis to examine the contribution of the relationship observed within the 25 common reef fish families to maintaining global reef fish diversity. To do so, we evaluated coral richness and hue diversity as separate and combined predictors of the total reef fish diversity (4465 species). Finally, a subsequent examination treated coral colour loss as a natural experiment to examine reef hue's contribution to maintaining fish diversity, abundance, and recruit survivorship. We predict that the number of small substrate-bound fish species (e.g. seahorses-Syngnathidae, gobies-Gobiidae) will have the highest positive associations with the number of coral species, large-bodied and mobile herbivorous species (e.g. parrotfish-Scarini (formerly Scaridae), surgeonfish-Acanthuridae) lower associations, and larger-bodied and mobile predators (e.g. sharks-Carcharhinidae, jacks-Carangidae) the lowest associations. Furthermore, we postulate that integrating coral hue diversity as a covariate predictor of fish richness will explain additional variability, a trend that will persist within and among fish families, and across oceanic basins.
This examination represents the first multivariate analysis of coral species and hue diversity's contribution to reef fish intra-family diversity. Geographic and taxonomic consistency in the trends would suggest that coral hue diversity is an important ecological mechanism that functions in combination with coral species richness to facilitate the evolution and co-existence of diverse coral reef fish assemblages. Our consideration of coral bleaching may elucidate whether the spectral diversity of coral cover, independent of coral species diversity and structural complexity, is a contributing covariate facilitating fish diversity, abundance, and recruit survivorship. By examining coral species and hue diversity's contribution to reef fish diversity across geographical regions and within a diverse array of fish families, we aim to focus new attention on the ecological importance of the kaleidoscope of colours exhibited by healthy coral reefs in shaping the perception of visually mediated selective pressures.
Results
Coral richness, hue, and intra-family fish richness. Coral species, associated hue diversity, and reef fish richness were all the highest within the Western Pacific and declined with increasing distance from this region (Figs. 1, 2, S3, Table S1). Within all 25 common reef fish families, species richness was positively correlated to coral richness and hue diversity (Figs. 3, S8, S9). The strength of these relationships, however, varied among families (Fig. 3). Coral richness and hue diversity explained 40-60% of the variance in large-bodied midwater fish families, including Carcharhinidae (requiem sharks) and Carangidae (e.g. jacks, mackerels). Between 60 and 70% of the variability in species richness within families that exhibit tightly coupled coral-fish relationships (e.g. mutualism, commensalism, Blenniidae, Gobiidae, Scorpaenidae) was explained by coral richness and hue diversity (Fig. 3). Generally, substrate-bound fish families exhibited the greatest association with coral richness and hue diversity. Within Siganidae (rabbitfish), Lethrinidae (emperors), Pomacentridae (damselfishes), Mullidae (goatfish), Chaetodontidae (butterflyfish), Lutjanidae (snappers), and Pomacanthidae (angelfish), coral richness and hue diversity predicted more than 80% of the variance in fish richness. Among the 25 families, hue's inclusion in this three-way interaction increased the variance explained in fish richness from 0 to 9% relative to when only coral richness was considered (Figs. 3, S8, S9). These patterns varied considerably among the four oceanic regions (Figs. S8, S9). Specifically, the families with the most explained variability, and the extent to which coral richness and hue diversity predicted fish intra-family richness, varied among the oceanic regions. The variation in fish diversity explained ranged from 40 to 86% when coral diversity and hue were analyzed concurrently, and from 33 to 86% and 37 to 86% when the analysis considered coral hue and coral diversity separately (Figs. 3, S8, S9, S10).
Coral richness, hue, and global reef fish richness. Taxonomic associations could be attributed to geographical or classification biases; therefore, we examined coral richness and hue diversity, separately and in combination, as predictors of 4465 reef fish species from 117 fish families within the 74 global ecoregions. Across all regions, fish richness was linearly correlated to coral richness (Adjusted R² = 0.70, F(1, 72) = 169.9, p < 0.001) and exponentially correlated to hue diversity (Adjusted R² = 0.59, F(2, 71) = 54.3, p < 0.001; Fig. 4). The diversity of coral hues increased with the number of coral species, plateauing at approximately 180 unique colours. The combination of both predictors accounted for more of the fish community's variability than either did when evaluated separately (Fig. 4). Furthermore, model selection supported coral richness and associated hue as the best predictor of fish richness (Table S2). The influence of coral richness and hue on fish intra-family richness varied considerably among the four oceanic regions (Figs. S7, S8, S10). The Central and East Pacific (Adjusted R² = 0.75, F(3, 12) = 16.3, p < 0.001) and Western Pacific (Adjusted R² = 0.60, F(3, 27) = 16.3, p < 0.001) exemplified the global trend (Fig. S7). In contrast, the correlation of coral richness and hue diversity with reef fish richness was considerably higher within the Atlantic relative to the global average (Adjusted R² = 0.88, F(3, 9) = 30.8, p < 0.001; Fig. S7). The extent of this association was reduced within the Northern Indian Ocean due to higher fish richness within South Africa and Oman (Adjusted R² = 0.09, F(4, 9) = 1.31, p > 0.1; Fig. S7).
Coral hue homogenization. Reefs exhibit distinct hue heterogeneity. The colour distance matrix arranged the bleached and healthy reefs according to observed hue diversity, forming three separate groups, which primarily corresponded to the region's coral richness (Figs. 1, 5). The Australian and Indonesian reefs formed a distinct group relative to the other areas considered, indicating that their colouration is similar. Taiwan and the Turks and Caicos exhibited a comparable orientation; however, their colouration was more analogous to American Samoa than to Australia or Indonesia. Coral bleaching homogenized colouration, producing the most substantial measure of colour dissimilarity (Fig. 5). Among the pairwise distance comparisons of the bleached and healthy reefs, elevated levels of dissimilarity corresponded to increased colour diversity within the unbleached reefs. An analysis of the published literature indicated that coral bleaching-induced colour homogenization decreases fish species richness, abundance, and the survivorship of recruits, with declines in abundance varying among fish families (Fig. 5, Table S3).
Figure 1 (caption, partial): Coral species richness present in each ecoregion; (C) Fish species richness present in each ecoregion. Generated using the 'ggplot2' package in RStudio version 3.6.1 41. Figure 1A was created using ggplot2's map_data function.
Discussion
We establish for the first time the extent to which coral richness and hue diversity, when considered separately and in combination, predict fish species richness within each of the 25 common families found on reefs. We then determined this bivariate relationship persists among 4465 reef fish species present across the 74 ecoregions. These findings collectively suggest coral hue diversity may be an unrecognized ecological mechanism supporting taxonomic diversity within and among reef fish families. The species richness within fish communities expands exponentially with increasing hue diversity, conceivably supported by ancillary biological processes that are sustained within a more diversely coloured seascape. Furthermore, the richness of a reef's fish community correlates linearly with increasing coral species richness, implying the two biological communities are ecologically linked. Collectively, the striking relationship between the species richness and hue diversity of scleractinian corals and associated reef fish is replicated among fish families and across geographical regions. Consequently, coral richness and colouration predict fish communities globally, with fish richness varying according to a region's unique coral richness and associated colouration. Replicating our analysis across geographical regions accounts for variability in the ecological processes that govern reef ecosystems at local and regional scales, including reef area, environmental conditions, reef isolation, metapopulation dynamics, and reef geological age 17,19,21,43,44. The consistent correlation between coral richness, hue diversity, and fish intra-family richness indicates this relationship structures reef fish communities among and within fish families and extends beyond taxonomic biases. The most parsimonious model contained both coral richness and coral hue as predictor variables. The additional variability described under this condition indicates coral richness' and hue diversity's contributions to structuring reef fish communities are not equivalent, and that examining the variables in combination is necessary to effectively describe fish assemblages. This distinct relationship persists within a diverse array of fish families. Furthermore, the order of the 25 families, when ranked by how much intra-family variability was explained by the model, differed depending on whether coral richness and hue diversity were considered separately or in combination. This further supports the validity of considering coral richness and hue diversity concurrently when describing reef fish communities. Consequently, diverse reef fish assemblages are limited to regions with coral richness and colouration rich enough to facilitate the ecological processes needed to support these communities. Deviations from this relationship can be attributed to both biotic and abiotic conditions that disrupt species-habitat associations. For example, South Africa and Japan exhibited elevated fish richness relative to coral species richness and hue diversity. However, South Africa and Japan experience cooler water temperatures than surrounding ecoregions. This condition encourages the influx of cold-water tolerant fish species but excludes less tolerant coral species 45,46.
Coral richness and associated hue diversity provide an ecological backdrop that shapes the perception of biological processes. The influence of habitat colouration on proximate biological processes varies depending on the life history of an organism, and the nature of the inter- or intra-specific interaction being considered 4,28. Within reef fish, juvenile survivorship shapes populations due to strong selective pressures 47,48. Across a range of taxonomically diverse reef fishes, approximately 55% of juveniles are consumed within two days of reef settlement 47. Due to their disproportionally high contribution to larval recruitment, cryptobenthic families, including Gobiidae, Apogonidae, and Syngnathidae, experience elevated juvenile mortality levels and comprise nearly 60% of consumed reef fish biomass 48. Furthermore, at minimum, 126 fish species are closely associated with live coral habitats as juveniles prior to undergoing an ontogenetic shift at maturity 49. The present study's observation that recruitment success declines as reef colour homogenizes, and the established importance of juvenile fish-coral associations, suggests the post-settlement environment, characterized primarily by coral richness, structure, and colouration, strongly influences fish families' association with coral richness and hue diversity 47,49,50. This evidence contrasts with the 'Lottery hypothesis' proposed by Sale 51 and revisited by others 52, which postulates that reef fishes' diversity within a trophic guild is primarily a consequence of competition for space and who gets there first. Consequently, the variability in morphology and hue among species within a guild is secondary or not relevant in an ecological or evolutionary context. Conversely, our results demonstrate a striking covariation among the taxonomic diversity of fishes, corals and hue, in substantial contrast to a lottery mechanism, as it suggests predictability and functionality of the unique attributes of each fish species on the reef. The observed exceptionally high correlations between fish species richness and coral hue diversity indicate a functional contribution of colour to reef fishes' selective landscape. The Type-3 survivorship curves present in all reef fish populations are primarily structured by predation during early ontogeny 47,48,53. Despite five decades of studies of coral reef fishes 24, there remain minimal empirical data on the multiple spectral backgrounds against which predators see the different ontogenetic size classes of reef species. This basic spatial geometry of predator-prey interactions can be fundamental to any interpretation of adaptation 8,9,54. For example, colour diversity in intertidal gastropods indicates that the microhabitat of early ontogenetic stages comprises the major selective landscape influencing the diversity and fitness of phenotypes 4,55. As such, the unique hue for each additional coral species may facilitate increased ecological opportunity for any of the variable colour phases during ontogeny. The combined effects of coral species diversity and hue diversity constitute a plausible contributory mechanism for the origin and persistence of the taxonomic and spectral diversity in reef fishes 33.
Fish perception of seascape colouration is fundamental to unravelling the influence that coral hues have on reef communities. At a minimum, our results represent an exceptionally conservative proxy for the hue diversity perceived by reef fishes, as they do not address the elevated hue sensitivity of most fishes relative to human colour perception 15,27, the added effects of fluorescence present in multiple coral species 56, and the contribution of additional brightly coloured taxa including encrusting and epiphytic algae, sponges, soft corals and crinoids. Reef fish exhibit di- and tri-chromacy, with evidence that a species' photoreceptor spectral sensitivity is linked evolutionarily to the spectra it encounters 8,14,15. This indicates that the hue diversity of corals, compounded by that of other benthic taxa, has been a significant driver in the evolution of reef fish photoreceptor spectral sensitivity. The ability of reef fish to alter their location, orientation, colouration, or pattern in relation to inter- and intra-specific perception further suggests that numerous aspects of reef fish behaviour, physiology, and community composition are functionally linked to coral colouration 12,27,28,57. For example, various coral-associated species, including members of Gobiidae, Blenniidae, and Monacanthidae, rapidly modify their luminance, hue, or saturation relative to background characteristics to avoid detection by visual predators [58][59][60]. Moreover, the conspicuous colouration observed in numerous reef fishes serves as a defensive strategy at close distances, while simultaneously blurring into the background when viewed from a distance 12,61. Therefore, the observed exponential relationship between hue diversity and fish richness can be attributed to the myriad of biological processes facilitated by background colouration. Consequently, the visual interactions of reef fish are a function of the viewer's visual acuity, the subject's colouration and pattern, and the interplay between coral structure, colour, and the subject's orientation relative to the background 12,14,27. As ecological processes occur over a range of spatial scales, the influence of background coral colouration on the detection of fish colour and pattern represents a critical aspect of coral reef dynamics 12,14,27, which supports taxonomically diverse fish communities and, on an evolutionary time scale, promotes reef fish diversification. The multitude of stressors that compromise the integrity of corals elicit local and global declines in hue diversity [62][63][64]. These devastating ecological events are natural experiments demonstrating coral hue's contribution to reef ecosystems, particularly because coral colour loss commonly occurs prior to, or in the absence of, structural degradation [65][66][67]. Our evaluation of reef colour homogenization illustrates that bleaching creates a distinct monochromatic seascape, causing major reductions in fish richness, abundance, and, in particular, recruitment success. A consequence of this is the recognition that coral hue within intact reefs promotes diverse fish communities, increases fish family-specific abundances, and raises recruits' survivorship independent of coral species richness and structural complexity. Therefore, potential declines in fish communities will be highly influenced by a reef's pre-bleaching coral richness and hue diversity, and the reef fish families present.
The prominent rates of successful predation on bleached coral reefs and the strong association that numerous fish families have with coral colouration suggest that fish communities will be unlikely to recover if reefs remain devoid of colour 49,50. Furthermore, if anthropogenic or natural stressors extirpate coral species and their associated hues, the extent to which fish communities can recover will be reduced accordingly, thereby threatening one of the highest concentrations of vertebrate diversity observed globally 25,32,68.
Coral richness and colouration are among the most striking aspects of reef ecosystems. Despite this, the contribution of reef colouration to supporting diverse fish communities is frequently unrecognized. This omission is particularly evident when considering equivalent terrestrial ecosystems, where substrate and background hue have featured prominently in the ecological opportunities of invertebrate and vertebrate taxa 8,9 and, by association, verified the significance of species interactions with habitat colouration. The importance of coral reef colouration is becoming more apparent as the frequency and severity of reef disturbance events intensify, and the ecological ramifications of coral colour homogenization become increasingly evident [62][63][64]. This evaluation represents the first global and regional analysis quantifying the influence of coral richness and hue on reef fish communities. Examining either of these covariates independently would constitute a novel analysis, but when they are considered in combination, the established but unsubstantiated ecological importance of coral species diversity is coupled with each coral's spectral diversity. The evaluation revealed that the richness of coral reef fishes observed globally is sustained by regional coral species richness and functionally coupled to the diversity of coral reef hues. Each additional coral species thereby contributes ecological opportunities to the surrounding seascape, which collectively provide the background heterogeneity required to support a diverse array of reef taxa. This pattern is replicated across the diversity of coral reef fish and within the 25 common fish families. With coral richness and hue functioning in combination to support diverse reef fish assemblages, we have described an aspect of coral reefs that is fundamental to a myriad of biological processes, persists within and among fish families, and varies according to the dynamic nature of reef ecosystems.
Materials and methods
Coral and fish richness. Coral and reef fish species data were compiled for 74 global ecoregions to evaluate the relationship between fish species richness, coral species richness, and coral colouration (Fig. 1). All available coral and fish species data were retrieved from Corals of the World 38 and FishBase 26, respectively. Ecoregions were constructed by adapting Corals of the World's 150 ecoregions and FishBase's survey locations to determine regions where both coral and fish surveys occurred 26,32,38. The assessment of coral richness considered the 784 scleractinian coral species that occurred within the 74 ecoregions. Two examinations of reef fish richness occurred. Firstly, we evaluated reef fish intra-family species richness for the 25 most common families within the same ecoregions, which encompassed 3250 fish species. Secondly, we analyzed the distribution of 4465 fish species from 177 families within the ecoregions. Each examination of reef fish richness was combined with the 784 scleractinian corals' distribution to generate reef fish and coral richness estimates within each of the ecoregions 40 (Table S1 in Supporting Information). All data analyses were conducted in RStudio version 3.6.1 41. Data visualizations were generated using the 'ggplot2', 'colordistance', 'plot3D' and 'lattice' packages 39,41,[69][70][71]. All data and R code are available 40. Coral image acquisition. Coral images were collected from the Corals of the World repository. This archive is the only publicly available coral image repository that includes at least one image of each scleractinian coral species that has been validated and quality controlled by experts in coral taxonomy 32,38. For each of the 784 scleractinian coral species, the best quality photo was selected and downloaded as a JPEG image. Image acquisition targeted photos that displayed each species' typical appearance during the daytime, with consistent and appropriate lighting. Images with polyps fully or partially retracted were preferentially chosen, unless the species was known to open its polyps diurnally. Images depicting rare colour morphs, irregular growth forms, or with distorted colouration were avoided. Each photo was cropped using ImageJ software to obtain an image containing only the focal coral species 72. Image cropping removed the image background, other coral species, and any colour distortions. Minor shadows created by the focal coral were not removed, as they reflect the increased colour variance of structurally complex coral. All images were selected and processed by the same individual, with image cropping occurring in a random, non-taxonomically hierarchical order. Coral image colouration. Coral colour classification and categorization were conducted by integrating the Level 3 Inter-Society Colour Council and National Bureau of Standards (ISCC-NBS) system into the recently developed 'colordistance' package (Fig. S1) 39,73. The selection of this colour classification and categorization was validated relative to alternative colouration systems (Supplemental Text, Figs. S1-S6). The Level 3 ISCC-NBS system calculates centroids based on Munsell colour space to produce 267 distinct colour categories. The ISCC-NBS centroids can also be considered in terms of sRGB colour space, as all sRGB denominations can be allotted into 260 of the ISCC-NBS categories 74.
The 'colordistance' package quantitatively derives colour trait data from images and is capable of comparing the similarity of different colour palettes, which allows for coral images' colour palettes to be quantified, categorized according to the Level 3 ISCC-NBS colour system, and analyzed accordingly.
The colour diversity of each of the 784 coral images was determined independently. All aspects of image colour acquisition and subsequent statistical analyses were performed using R statistical software version 3.6.1 41 . Each image's pixels were considered as three-dimensional coordinates in colour space to create a discrete multidimensional colour histogram (Figs. 1, S1). The maximum number of histogram bins was set a priori at 27, which corresponds to the number of regions in colour space that each of the standard red-green-blue (sRGB) channels was divided into (3 colour regions, 3 channels, 3³ = 27 bins; Figs. 1, S1). The number of histogram bins occupied, the bin's sRGB colour composition, and the number of pixels allotted to each bin was a function of the image's colour diversity and the proportion of each colour present. To reduce the risk that abnormalities within the image, specifically small portions of discoloured pixels, were integrated into the analysis, bins that obtained less than 1% of the pixels were excluded. Each histogram bin was converted from its sRGB coordinates to ISCC-NBS, using hexadecimal colour codes as an intermediate conversion step. This resulted in coral colour richness illustrated by the number of visually distinct colour stimuli present within each image. The colour diversity of each ecoregion was determined by summarizing the number of unique colours present, given its coral composition (Fig. S3). Across all corals, 199 of the possible 267 ISCC-NBS colours were observed, with the number of unique colours within each coral family, and detected across ecoregions, varying considerably (Figs. S3-S5).
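For readers who want to reproduce the binning step conceptually, the following is a minimal Python sketch of the 3 × 3 × 3 partition of sRGB space and the 1% bin-exclusion rule described above. It is an illustration only, not the 'colordistance' implementation used in the study, and the example file name is hypothetical.

```python
import numpy as np
from PIL import Image

def coral_colour_bins(path, regions_per_channel=3, min_fraction=0.01):
    """Bin an image's pixels into regions_per_channel**3 sRGB bins and
    drop bins holding less than min_fraction of the pixels."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float) / 255.0
    pixels = rgb.reshape(-1, 3)
    # Map each channel value in [0, 1] to a region index 0..regions_per_channel-1
    idx = np.minimum((pixels * regions_per_channel).astype(int),
                     regions_per_channel - 1)
    flat = (idx[:, 0] * regions_per_channel**2
            + idx[:, 1] * regions_per_channel + idx[:, 2])
    counts = np.bincount(flat, minlength=regions_per_channel**3)
    fractions = counts / counts.sum()
    occupied = np.flatnonzero(fractions >= min_fraction)
    return occupied, fractions[occupied]

# Hypothetical usage on one cropped coral image:
# bins, weights = coral_colour_bins("acropora_example.jpg")
# print(len(bins), "occupied colour bins")
```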
Coral, fish, and colour analyses. The influence of coral richness and hue diversity on fish species assemblages was examined within each of the 25 reef-associated fish families. The primary analysis examined this relationship among the four oceanic regions, and a subsequent examination considered this relationship within the oceanic regions. Multiple linear regression models evaluated the influence of coral richness and hue diversity on reef fish species richness within each family. Coral richness was incorporated as a second-order polynomial to account for the relationship between coral, hue and fish richness being non-linear in multiple instances. Specifically, the addition of a polynomial term accounted for occurrences when increasing hue diversity was highly correlated with fish richness, despite minimal coral richness (i.e. a few vividly coloured corals supporting diverse fish communities). Additionally, the influence of coral richness and hue diversity on the 25 reef-associated fish families was also considered independently.
The correlation between coral species richness, associated hue diversity, and reef fish assemblages within each ecoregion was subsequently evaluated using a combination of linear and non-linear models. The initial analyses considered the distribution of 4465 fish species from 177 families, 784 coral species, and coral colouration, across the 74 ecoregions (Table S1). A linear model assessed the correlation between coral and fish richness. To account for reef colour saturation, a logistic regression model quantified the relationship between coral richness and hue diversity. A quadratic (second-order) polynomial model assessed the relationship between fish richness and unique hue diversity. A multiple regression quantified the influence of hue diversity, and coral richness as a quadratic (second-order) polynomial term, on reef fish richness. Akaike's Information Criterion (AIC) was used to evaluate coral richness and hue diversity, when modelled separately and in combination, as predictors of fish species richness (Table S2). Separate multiple regressions quantified the influence of hue diversity, and coral richness as a quadratic (second-order) polynomial term, on the reef fish richness observed within each of the four oceanic regions (Fig. S7).
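A minimal sketch of this model comparison is given below. It uses Python's statsmodels on synthetic data rather than the R workflow actually used in the study, and the column names (fish_richness, coral_richness, hue_diversity) are hypothetical stand-ins for the ecoregion-level variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the ecoregion table (74 rows; column names hypothetical)
rng = np.random.default_rng(1)
coral = rng.integers(5, 500, size=74).astype(float)
hue = 60 * (1 - np.exp(-coral / 150)) + rng.normal(0, 3, size=74)   # saturating hue diversity
fish = 4 * hue + 0.5 * coral + rng.normal(0, 40, size=74)
df = pd.DataFrame({"fish_richness": fish, "coral_richness": coral, "hue_diversity": hue})

# Coral richness enters as a second-order polynomial, as in the analysis described above
m_coral = smf.ols("fish_richness ~ coral_richness + I(coral_richness**2)", data=df).fit()
m_hue = smf.ols("fish_richness ~ hue_diversity", data=df).fit()
m_both = smf.ols("fish_richness ~ hue_diversity + coral_richness + I(coral_richness**2)",
                 data=df).fit()

# Lower AIC indicates the better-supported predictor set
for name, model in [("coral only", m_coral), ("hue only", m_hue), ("combined", m_both)]:
    print(f"{name:10s} AIC = {model.aic:.1f}")
```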
Reef colour analysis.
To evaluate hue diversity across coral reef seascapes and quantify coral colour loss, reef images taken by the Ocean Agency during the XL Catlin Seaview Survey were collected from Coral Reef Image Bank. Survey images of shallow water healthy reefs under bright ambient lighting in American Samoa (Fogama), Australia (Lizard Island), Indonesia (Manado), Taiwan (Donghi harbor), Turks and Caicos (Providenciales), and a severely bleached reef surveyed in American Samoa were selected for analysis 42 . The XL Catlin Seaview Survey images were chosen to increase consistency across the photos. Image selection emphasized wide-angle survey images that captured a diversity of coral reef taxa, including stony and soft corals, sponges, other invertebrates, and small reef-dwelling fish. These images were cropped to remove the water column.
A distance matrix compared hue similarity between the Catlin Seaview Survey images. A discrete multidimensional colour histogram considered each survey image in sRGB colour space. Sixty-four histogram bins (4 colour regions, 3 sRGB channels, 4³ = 64 bins) were used to account for the increased hue diversity relative to the single coral images previously analyzed. Each bin's associated sRGB colour space denomination, the proportion of pixels within each bin, and the number of bins occupied across the histogram, were functions of each reef's hue diversity. The colour distance matrix considered the pairwise distances between survey images using earth mover's distance, a technique that compares histograms using transportation costs (i.e. the effort required to make one community resemble another). The pairwise distances between reefs were visualized by plotting a heatmap of the symmetrical distance matrix. The resulting heatmap illustrated the hue similarity between the reefs considered, with the plotted branch lengths being proportional to the earth mover distances.

An examination of published literature was integrated into this analysis to determine the ecological consequences of coral colour loss (Table S3 in Supporting Information, Appendix 1 Data Sources). Data were extracted and summarized from studies that evaluated reef fish responses to bleaching events that induced colour loss but maintained coral richness and structure (Supplemental Text). The criterion that studies had to be explicit about coral richness and structure being preserved limited the potentially relevant literature on the topic considerably. Data from 133 comparisons across eight studies were extracted and summarized (Table S3). The 'Metafor' package was used to calculate the standardized mean difference (Hedge's d) and the corresponding variance of each comparison 41,75 . Effectively, this determined the overall effect of colour loss on fish richness, abundance, recruitment, or fish family-specific abundances.
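The standardized mean difference underlying this meta-analysis can be sketched with a generic Hedges' d function, shown below in Python. This is not the 'Metafor' code used in the study, and the example numbers are invented purely for illustration.

```python
import math

def hedges_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample correction,
    plus its approximate sampling variance."""
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / s_pooled
    j = 1 - 3 / (4 * (n1 + n2) - 9)            # small-sample bias correction
    g = j * d
    var_g = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var_g

# Invented example: fish abundance on a colour-depleted (bleached) vs. a healthy reef
g, v = hedges_d(mean1=42.0, sd1=9.0, n1=12, mean2=55.0, sd2=11.0, n2=12)
print(f"Hedges' d = {g:.2f}, variance = {v:.3f}")
```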
An Arecibo Search for Fast Radio Transients from M87
The possible origin of millisecond bursts from the giant elliptical galaxy M87 has been scrutinized since the earliest searches for extragalactic fast radio transients undertaken in the late 1970s. Motivated by rapid technological advancements in recent years, we conducted $\rm \simeq 10~hours$ of L-band ($\rm 1.15-1.75~GHz$) observations of the core of M87 with the Arecibo radio telescope in 2019. Adopting a matched filtering approach, we searched our data for single pulses using trial dispersion measures up to $\rm 5500~pc~cm^{-3}$ and burst durations between $\rm 0.3-123~ms$. We find no evidence of astrophysical bursts in our data above a 7$\sigma$ detection threshold. Our observations thus constrain the burst rate from M87 to $\rm \lesssim 0.1~bursts~hr^{-1}$ above $\rm 1.4~Jy~ms$, the most stringent upper limit obtained to date. Our non-detection of radio bursts is consistent with expectations of giant pulse emission from a Crab-like young neutron star population in M87. However, the dense, strongly magnetized interstellar medium surrounding the central $\sim 10^9 \ M_{\odot}$ supermassive black hole of M87 may potentially harbor magnetars that can emit detectable radio bursts during their flaring states.
FRBs are millisecond-duration narrowband pulses of coherent radio emission originating outside our Galaxy. To date, over 600 FRB sources have been discovered (FRB Newsletter Vol 2 Issue 6: https://doi.org/10.7298/b0z9-fb71), of which at least 24 have been seen to repeat. Precise arcsecond localization (Chatterjee et al. 2017; Bannister et al. 2019; Prochaska et al. 2019; Ravi et al. 2019; Heintz et al. 2020; Law et al. 2020; Macquart et al. 2020; Marcote et al. 2020; Kirsten et al. 2021; Ravi et al. 2021; Fong et al. 2021) of 15 FRBs (https://frbhosts.org) to their respective host galaxies has revealed that FRB sources can reside in diverse host environments. Furthermore, the discovery of a luminous radio burst from the Galactic magnetar SGR 1935+2154 (Bochenek et al. 2020; CHIME/FRB Collaboration et al. 2020) suggests a plausible magnetar engine for FRB emission. Characterized by a 1.4 GHz fluence of 1.5 MJy ms at the 9 kpc distance (Zhong et al. 2020) of SGR 1935+2154, such a burst would be easily detectable, with a fluence of ∼ 120 Jy ms, at the ∼ Mpc distances to the nearest galaxies. FRB discoveries from the local Universe are hence necessary to bridge the luminosity scale between Galactic magnetars and FRBs. Detections of such bursts will further enable sensitive multi-wavelength follow-up to constrain models of FRB progenitors (https://frbtheorycat.org; Platts et al. 2019).
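As a quick check of the inverse-square scaling behind the ∼ 120 Jy ms estimate above, assuming a burst of fixed intrinsic energy release (a sketch; the distances are the rounded values quoted in the text):

```python
# Fluence scales as (distance)^-2 for a burst of fixed intrinsic energy release.
fluence_at_9kpc_jy_ms = 1.5e6        # 1.5 MJy ms for the SGR 1935+2154 burst
d_ref_kpc = 9.0                      # distance to SGR 1935+2154
d_target_kpc = 1.0e3                 # ~1 Mpc, scale of the nearest galaxies

scaled = fluence_at_9kpc_jy_ms * (d_ref_kpc / d_target_kpc) ** 2
print(f"{scaled:.0f} Jy ms")         # ~120 Jy ms
```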
While FRBs are of extragalactic origin, pulsar giant pulses (GPs) constitute the most luminous Galactic radio transients at sub-millisecond timescales. First noted in the Crab pulsar PSR J0534+2200 (Staelin & Reifenstein 1968) and studied extensively (Lundgren et al. 1995; Cordes et al. 2004; Karuppusamy et al. 2010, 2012; Mickaliger et al. 2012), GPs are typically identified as short-duration (sub-ms), narrow-phase emission comprised of nanosecond-duration shot pulses (Hankins et al. 2003). GPs frequently exhibit power-law amplitude statistics (Bhat et al. 2008), unlike general pulsar single pulses (Burke-Spolaor et al. 2012) that often display lognormal energy distributions. Cordes & Wasserman (2016) evaluated the detectability of radio bursts from an extragalactic population of neutron stars that emit nanosecond shot pulses analogous to the Crab pulsar. They demonstrate that, for a fluence of ∼ 1 Jy ms, bursts arising from an incoherent superposition of shot pulses can be detected out to distances of a few × 100 Mpc. The detection distance gets pushed out farther for conditions more extreme than the Crab pulsar, such as in young magnetars. Studying the GP emitter PSR J0540−6919 (B0540−69), Geyer et al. (2021) observed band-limited flux knots analogous to those seen in FRBs. However, unlike some repeating FRBs (Fonseca et al. 2020), these GPs reveal no distinct sub-pulses that drift downwards in radio frequency with increasing arrival time.
Hosting a supermassive black hole (SMBH) of mass M ≈ 6.5 × 10 9 M ⊙ (Event Horizon Telescope Collaboration et al. 2019), the giant elliptical galaxy M87 within the Virgo cluster has been a popular target in past surveys for pulsed radio emission (Linscott & Erkes 1980; Hankins et al. 1981; McCulloch et al. 1981; Taylor et al. 1981). Akin to the Galactic Center (Dexter & O'Leary 2014), rapid star formation near the SMBH of M87 likely yields a significant magnetar population. Michilli et al. (2018) argue that a young neutron star embedded in a strongly magnetized plasma, such as that near a black hole or a supernova remnant, may explain FRB 121102 (the first discovered repeating FRB: Spitler et al. 2014; Scholz et al. 2016; Chatterjee et al. 2017) and its large, dynamic rotation measure (|RM| ∼ 10 5 rad m −2 ). While the RM of FRB 121102 is unusually large among FRBs with measured RMs (typical |RM| ∼ 10-500 rad m −2 , Petroff et al. 2019), it is comparable to that observed for the Galactic Center magnetar PSR J1745−2900 (|RM| ≈ 6.6 × 10 4 rad m −2 , Eatough et al. 2013). The dense, magneto-ionic interstellar medium (ISM) at the core of M87 represents a possible host for FRB 121102 and PSR J1745−2900 analogs.
Intending to detect dispersed single pulses, we targeted the core of M87 with the William E. Gordon Arecibo radio telescope. Similar targeted searches for extragalactic radio bursts have previously been attempted in the direction of several galaxies (Bhat et al. 2011; Rubio-Herrera et al. 2013; van Leeuwen et al. 2020), including the nearby galaxies M31 and M33.
Section 2 describes our observing setup. We detail our data analysis methods and results in Section 3. In Section 4, we evaluate the significance of our results in the context of potential neutron star populations in M87. Finally, we conclude and summarize our study in Section 5.
OBSERVATIONS
Radio pulsars are steep-spectrum sources (S ν ∝ ν −1.4±1.0 , Bates et al. 2013), emitting greater pulse-averaged flux density (S ν ) at lower radio frequencies (ν). As radio pulses traverse the astrophysical plasma along our lines of sight to their sources, they get dispersed (pulse arrival times ∝ ν −2 for cold plasma dispersion) and scattered (pulse broadening time scale, τ sc ∝ ν −4 or ν −4.4 for Kolmogorov scattering). Optimal pulsar detection requires a suitable trade-off between the weakening pulsar emission at high radio frequencies (≳ 10 GHz) and the growing, deleterious propagation effects at low radio frequencies (≲ 700 MHz). Large-scale pulsar surveys (Manchester et al. 2001; Cordes et al. 2006; Keith et al. 2010; Barr et al. 2013; Keane et al. 2018) have hence often been performed at 1-2 GHz, i.e., "L-band." In contrast, FRB spectra are band-limited, and show no preference for a specific observing frequency. Allowing for both FRB- and pulsar-like burst spectra, L-band observations are well placed to enable extragalactic single pulse discovery from the local Universe.
Hunting for outbursts from the SMBH of M87, Linscott & Erkes (1980) detected highly dispersed (dispersion measure, DM ≈ 1000-5500 pc cm −3 ) millisecond-duration pulses at radio frequencies of 430, 606, and 1230 MHz. However, no repeat bursts were seen in subsequent follow-up efforts (Hankins et al. 1981; McCulloch et al. 1981; Taylor et al. 1981) between 400-1400 MHz. Attempting to survey the core of M87 with increased sensitivity, we executed 18 hours of L-band search-mode observations with the Arecibo radio telescope. Figure 1 shows our Arecibo L-band beam of HPBW 3.3′, overlaid on an optical map of M87. Table 1 summarizes our observing program, comprised of 6 sessions lasting 3 hours (overheads included) each. We began each session with a 3-minute scan of a bright test pulsar to verify proper data acquisition system functioning. To mitigate data loss from intermittent backend malfunctions, we distributed our net on-source time per session across multiple scans of different lengths. All sessions used the single-pixel L-wide receiver with the Puerto Rico Ultimate Pulsar Processing Instrument (PUPPI) backend. The final data products generated by our observations contained 1536 usable spectral channels, each with 390.625 kHz resolution. The sampling time of our data was 64 µs. As indicated in Table 1, persistent data dropouts occurred during sessions 1, 3 and 4, preventing us from achieving our desired exposure time of 2.5 hours per session. We discarded these dropout-affected data segments from our subsequent single pulse searches.
We estimate the sensitivity threshold of our observations by considering a flat-spectrum, band-filling, boxcar-shaped pulse of width W. The L-band system temperature at the time of our observations was T sys ≈ 27 K. For telescope gain, G = 10 K Jy −1 , the corresponding system-equivalent flux density is S sys = 2.7 Jy. The galaxy M87 contributes continuum flux density, S M87 ≈ 212.3 Jy (Perley & Butler 2017) at 1.4 GHz. The radiometer equation then implies a minimum detectable fluence,

F min = (S/N) min (S sys + S M87 ) √( W / (2B) ).   (1)

Here, B and (S/N) min denote, respectively, the observing bandwidth and the minimum signal-to-noise ratio required to claim a detection; the factor of 2 accounts for the summation of two polarizations. Table 1 lists F min thresholds for different observing sessions assuming a W = 1 ms burst detected with (S/N) min = 7. Our observations reach down to F min ≈ 1.4 Jy ms, about 6 times deeper than previous targeted searches (Hankins et al. 1981; McCulloch et al. 1981; Taylor et al. 1981) for radio pulses from M87. For comparison, the commensal ALFABURST experiment (Foster et al. 2018) at Arecibo with B ≈ 56 MHz would have attained F min ≈ 4.6 Jy ms, i.e., a factor of ≈ 3 above our sensitivity limit.
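As a numerical cross-check of this sensitivity estimate, the short Python snippet below evaluates the radiometer expression for the quoted system parameters. It is an illustrative sketch in which two summed polarizations and the full ≈ 600 MHz of usable bandwidth are assumed; the session-by-session values in Table 1 differ slightly.

```python
import math

snr_min = 7.0                      # minimum detection signal-to-noise ratio
s_sys_jy = 2.7                     # system-equivalent flux density (T_sys ~ 27 K, G = 10 K/Jy)
s_m87_jy = 212.3                   # M87 continuum flux density at 1.4 GHz
bandwidth_hz = 1536 * 390.625e3    # 1536 channels x 390.625 kHz ~ 600 MHz
width_s = 1.0e-3                   # fiducial 1 ms burst width
n_pol = 2                          # two summed polarizations (assumed)

sefd_jy = s_sys_jy + s_m87_jy
f_min_jy_s = snr_min * sefd_jy * math.sqrt(width_s / (n_pol * bandwidth_hz))
print(f"F_min ~ {f_min_jy_s * 1e3:.2f} Jy ms")   # ~1.4 Jy ms
```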
METHODS AND RESULTS
Conventional searches for dispersed pulses typically involve matched filtering of dedispersed time series with template filters of various widths. However, the ubiquitous presence of radio frequency interference (RFI) in dynamic spectra (radio frequency-time plane) often complicates such searches. We discuss our RFI excision procedure in Section 3.1. Following RFI masking, we illustrate data integrity through our test pulsar detections in Section 3.2. Since the true DM of a radio burst is unknown prior to discovery, dynamic spectra need to be dedispersed over a range of trial DMs. These dedispersed dynamic spectra, one per trial DM, are then summed over radio frequency to produce dedispersed time series for single pulse searching. Sections 3.3 and 3.4 describe our dedispersion plan and single pulse search methodology respectively.
RFI Excision
Informed by Arecibo-specific RFI mitigation performed by Lazarus et al. (2015), we used the rfifind module of the pulsar search software PRESTO (Ransom 2011) to operate on 1-second sub-integrations of data. For each 1-second block in every frequency channel, rfifind computes two time-domain statistics, namely the block mean and the block standard deviation. A Fourier-domain statistic, i.e., the maximum of the block power spectrum, is also calculated. Blocks with one or more statistics that deviate significantly from the means of their respective distributions are labeled as RFI. For the time-domain statistics, we adopted a flagging threshold of 5 standard deviations from the distribution mean. The corresponding threshold for the Fourier-domain statistic was 4 standard deviations from the mean.
To mask RFI, the ensuing set of flagged blocks were replaced by median bandpass values of that time range. Time integrations containing over 50% flagged channels were masked completely. Likewise, channels with at least 20% flagged blocks were entirely replaced by zeros. All flagging thresholds chosen in our study were conservative choices based on visual inspection of short data segments and parameter estimates from Lazarus et al. (2015).
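A simplified sketch of this per-block flagging logic is shown below. It is not PRESTO's rfifind code: it omits the Fourier-domain statistic and uses only the time-domain block mean and standard deviation with the 5σ threshold quoted above.

```python
import numpy as np

def flag_blocks(dynspec, block_len, threshold_sigma=5.0):
    """Flag blocks whose mean or standard deviation is an outlier for that
    channel. dynspec has shape (channels, time samples); the returned
    boolean mask has shape (channels, blocks)."""
    nchan, nsamp = dynspec.shape
    nblocks = nsamp // block_len
    blocks = dynspec[:, :nblocks * block_len].reshape(nchan, nblocks, block_len)
    means, stds = blocks.mean(axis=2), blocks.std(axis=2)
    mask = np.zeros((nchan, nblocks), dtype=bool)
    for stat in (means, stds):
        mu = stat.mean(axis=1, keepdims=True)
        sigma = stat.std(axis=1, keepdims=True) + 1e-12
        mask |= np.abs(stat - mu) > threshold_sigma * sigma
    return mask

# On pure Gaussian noise essentially nothing should be flagged
mask = flag_blocks(np.random.normal(size=(64, 16000)), block_len=1000)
print(mask.mean())
```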
To remove broadband baseline fluctuations, we applied a zero-DM filter to subtract the mean over channels from each time slice in the masked, non-dedispersed dynamic spectrum. Eatough et al. (2009) investigated the sensitivity loss from zero-DM filtering for boxcar single pulse detection in the Parkes Multi-beam Pulsar Survey (ν = 1.4 GHz, B ≈ 288 MHz, Manchester et al. 2001). While DM = 0 pc cm −3 signals get completely eliminated, boxcar pulses with widths W ≲ 9 ms can be detected with ≥ 90% sensitivity at DM ≈ 100 pc cm −3 . The detection sensitivity to broader pulses increases further at higher DMs.
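The zero-DM filter itself amounts to a one-line operation on the dynamic spectrum; a minimal numpy sketch (for illustration only) is:

```python
import numpy as np

def zero_dm_filter(dynspec):
    """Subtract the mean over frequency channels from each time sample.
    dynspec: 2-D array of shape (channels, time samples)."""
    return dynspec - dynspec.mean(axis=0, keepdims=True)

# Broadband (DM = 0) baseline jumps are removed, while a dispersed pulse,
# which occupies only a few channels at any instant, is largely preserved.
```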
Implementing the above RFI excision process, the prominent signals masked out in our data include intermittent, narrow-band RFI between 1.26-1.28 and 1.72-1.73 GHz. In summary, between 95% and 100% of our observing bandwidth was usable in every session.
Test Pulsar Verification
As listed in Table 1, our observing program included 3-minute scans of the bright test pulsars J1136+1551 (B1133+16) and J1239+2453 (B1237+25). To detect the periodicity of these pulsars, we first dedispersed our pulsar dynamic spectra to their respective known pulsar DMs.

Figure 2. Single pulse detections of test pulsars J1136+1551 (left column) and J1239+2453 (right column) during observing sessions 2 and 4. The top panels depict non-dedispersed dynamic spectra, block-averaged to 512 µs time resolution and 3 MHz spectral resolution. The bottom panels show dedispersed data products (dynamic spectra and time series) after convolution with their respective S/N-maximizing temporal boxcar filters. The bottom panels also quote the pulse DM and the S/N-maximizing temporal boxcar filter width (W f ). The short orange dashes at the left edges of all dynamic spectra represent channels flagged by our RFI excision procedure. The red curves in the top panels illustrate ν −2 dispersion curves corresponding to the pulsars' DMs.
Using the prepfold routine of PRESTO, we then ran a blind folding search for periodic pulsations in these dedispersed data. In doing so, we recovered pulsar rotational periods and average pulse profiles that were consistent with previously published results (Manchester et al.). In addition to periodicity confirmation, we searched our test pulsar data for single pulses. To do so, we dedispersed our pulsar data over trial DMs ranging from 0 pc cm −3 to 100 pc cm −3 , with a DM grid spacing of 0.4 pc cm −3 . We then block-averaged our dedispersed time series to 512 µs resolution, and searched these time series for single pulses using a matched filtering approach. We accomplished our single pulse searches using the single_pulse_search.py module of PRESTO, which convolves an input time series with boxcar filters of various widths. We considered boxcar filter widths of 1, 2, 3, 4, 6, 9, 14, 20, and 30 bins in our burst search analysis. Let (S/N) mf denote the S/N of a single pulse candidate in the convolution of its dedispersed time series with a boxcar matched filter. Setting (S/N) mf ≥ 10 as the detection criterion, we successfully detected dispersed pulses in all test pulsar scans. Figure 2 shows single pulse detections of pulsars J1136+1551 and J1239+2453 during observing sessions 2 and 4 respectively. The pulse from J1136+1551 is from only one of the two primary components seen in the average profile, while for J1239+2453, the pulse in the top panel shows emission in several of the five profile components. Matched filtering smears some of this structure in the bottom panel.
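The boxcar matched-filtering step can be sketched as follows. This is a simplified stand-in for PRESTO's single_pulse_search.py, and it assumes the dedispersed time series has already been normalized to zero mean and unit variance.

```python
import numpy as np

def boxcar_snr(timeseries, widths=(1, 2, 3, 4, 6, 9, 14, 20, 30)):
    """Convolve a normalized, dedispersed time series with boxcar filters of the
    given widths (in bins) and return the best (S/N, sample index, width)."""
    best = (0.0, 0, 1)
    for w in widths:
        kernel = np.ones(w) / np.sqrt(w)     # keeps S/N in sigma units for unit-variance noise
        snr = np.convolve(timeseries, kernel, mode="same")
        i = int(np.argmax(snr))
        if snr[i] > best[0]:
            best = (float(snr[i]), i, w)
    return best

# Example: a 6-bin-wide pulse of amplitude 3 injected into unit-variance noise
ts = np.random.normal(size=10000)
ts[5000:5006] += 3.0
print(boxcar_snr(ts))
```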
Dedispersion Plan
Looking to find possible repeats of the Linscott & Erkes (1980) bursts, we dedispersed our M87 data out to 5500 pc cm −3 . Table 2 summarizes our dedispersion plan, which attempts to optimize various contributions to pulse broadening.
For a pulse of intrinsic width W, its effective width (W eff ) in a dedispersed time series is

W eff = √( W² + t samp ² + t ∆ν ² + t chan ² + t BW ² + τ sc ² ).   (2)

Here, t samp is the sample interval, and τ sc is the scatter-broadening time scale. For channel bandwidth ∆ν, t ∆ν ∼ (∆ν) −1 is the receiver filter response time. At radio frequency ν, the intrachannel dispersive smearing is

t chan ≈ 8.3 µs × (DM / pc cm −3 ) × (∆ν / MHz) × (ν / GHz) −3 .   (3)

The use of a finite DM step size (δDM) for dedispersion introduces a residual broadband dispersive delay given by

t BW ≈ 8.3 µs × (δDM / pc cm −3 ) × (B / MHz) × (ν / GHz) −3 ,   (4)

where B is the observing bandwidth. Since τ sc cannot be corrected in practice, we neglect it when devising our dedispersion plan. Therefore, the net optimizable contribution to the effective pulse width is

t tot = √( t samp ² + t ∆ν ² + t chan ² + t BW ² ).   (5)

With increasing DM, t chan grows and dominates t tot . To minimize computational cost, we downsampled our dedispersed time series via block-averaging, and increased δDM at higher DMs. Table 2 lists temporal downsampling factors and δDM values for various trial DM ranges in our study.
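Under the standard cold-plasma smearing coefficient assumed in Equations 3 and 4, the individual broadening terms can be evaluated with a short script. The centre frequency of 1.38 GHz, the 600 MHz bandwidth, and the example (DM, δDM) values below are illustrative assumptions; the step sizes and downsampling factors actually adopted are those listed in Table 2.

```python
import math

def smearing_budget(dm, ddm, t_samp_ms=0.064, chan_bw_mhz=0.390625,
                    band_mhz=600.0, freq_ghz=1.38):
    """Optimizable contributions (in ms) to the effective width of a dedispersed pulse."""
    t_dnu = 1.0e-3 / chan_bw_mhz                          # receiver filter response, ~(channel BW)^-1
    t_chan = 8.3e-3 * dm * chan_bw_mhz / freq_ghz**3      # intra-channel dispersive smearing
    t_bw = 8.3e-3 * ddm * band_mhz / freq_ghz**3          # residual broadband delay from DM step
    t_tot = math.sqrt(t_samp_ms**2 + t_dnu**2 + t_chan**2 + t_bw**2)
    return t_chan, t_bw, t_tot

# Example near the top of the searched DM range (illustrative step size)
print(smearing_budget(dm=5000.0, ddm=3.0))
```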
Through matched filtering, we incur negligible loss of sensitivity in our burst searches (Keane & Petroff 2015). For RFI-cleaned data, the finite DM grid explored in our study then determines our survey completeness. Specifically, for δDM ≳ 1 pc cm −3 , t BW limits burst detection for W ≲ 2 ms. Hence, we chose downsampling factors in Table 2 that provide optimal sensitivity to burst durations in different DM ranges.
Single Pulse Searching
Following the single pulse search methodology described in Section 3.2, we ran burst search analyses on our M87 data. Again, we operated with boxcar filters of widths 1, 2, 3, 4, 6, 9, 14, 20, and 30 bins for matched filtering. Table 2 lists the boxcar filter durations used for DM ranges with distinct downsampling factors. Our boxcar filters span widths from W f ≈ 0.3-8 ms at the lowest DMs to W f ≈ 4-123 ms at the highest trial DMs covered in our study. Figure 3 shows a sample single pulse search output from a 20-minute scan of M87 during session 6. Real astrophysical bursts are expected to manifest as localized spindles with non-zero central DMs in the DM-time plane. To verify the presence of such signals in our data, we visually inspected dynamic spectra of all promising candidates with matched filtering S/N, (S/N) mf ≥ 7. Figure 4 illustrates dedispersed dynamic spectra of two such candidates that were examined.
To discern dispersed bursts from RFI in dynamic spectra, Foster et al. (2018) devised a set of metrics based on a prototypical pulse model. However, RFI can manifest with diverse spectro-temporal morphologies and variable signal strengths, thereby rendering the burst S/N, bandwidth, and duration as unreliable classification criteria. We therefore demanded the presence of a continuous ν −2 dispersive sweep and natural burst sub-structure (analogous to known FRB and GP discoveries) as litmus tests for astrophysical pulses. We also entertained the notion of DM consistency across possible repeat events with the caveat that DMs may significantly vary between burst sources in different regions of M87. Adopting the above selection criteria, our manual inspection process reveals that all candidates with (S/N) mf ≥ 7 can be attributed to short duration (≲ 100 ms) RFI patches that were missed by our RFI excision procedure.
We set (S/N) min = 7 as the detection threshold for our M87 burst searches. Our non-detection of dispersed pulses in ≈ 10 hours of integration time then imposes the upper limit R ≲ 0.1 bursts hr −1 on the burst rate (R) from M87 above F min ≈ 1.4 Jy ms, assuming a fiducial burst width of 1 ms. Table 3 summarizes burst rates/limits derived from all known searches for radio pulses from M87. Evidently, our Arecibo observations constitute the deepest single pulse searches of M87 conducted to date. Assuming a cumulative burst fluence distribution, R(> F) ∝ F −1 , similar to that seen for FRB 121102 (Gourdji et al. 2019; Oostrum et al. 2020; Cruces et al. 2021), our observations constrain R to at least a factor of 25 better than previous surveys of M87. We postulate a likely non-astrophysical origin for the Linscott & Erkes (1980) pulses given their inconsistency with the burst non-detection reported in more sensitive surveys of M87. In the following paragraphs, we explore the significance of our radio burst non-detection in the context of likely neutron star populations in M87.

Notes to Table 3: (b) We ignore session 1 due to its marginally higher F min compared to other sessions. (c) R(> F) ∝ F −1 scaling applied to facilitate comparison of R with that obtained by Hankins et al. (1981) in their 1400 MHz observations. (d) Hankins et al. (1981), McCulloch et al. (1981), and Taylor et al. (1981) experimented with multiple instrumental setups at each observing frequency. For a given radio frequency, we quote here F min from their most sensitive observation.
DISCUSSION
The Crab pulsar, with a characteristic age of τ c ≈ 1300 years, is among the best studied GP emitters in our Galaxy. Based on a sample of ≈ 13,000 Crab GPs at 1.4 GHz, Karuppusamy et al. (2010) inferred a cumulative burst rate distribution of power-law form, R Crab (> F) ∝ F^α (Equation 6), above F ≈ 2 Jy ms. Empirical values of the power-law index, α, range from ≈ −2.5 to −1.3 depending on the observation epoch and the observing frequency (Mickaliger et al. 2012). Here, we nominally adopt α = −2 for illustration.
Following Cordes & Wasserman (2016), we extend R Crab (> F) to extragalactic radio pulsars and assess the detectability of Crab-like GPs from M87. Considering millisecond bursts from M87 (distance ≈ 16.4 Mpc, Event Horizon Telescope Collaboration et al. 2019), our detection threshold of 1.4 Jy ms corresponds to a limiting fluence of ≈ 94 MJy ms at the 2 kpc distance (Trimble 1973) of the Crab pulsar. Equation 6 then implies a negligible rate of ≈ 2 bursts Gyr −1 for Crab-like GPs from M87. GP detection from M87, therefore, entails young neutron stars capable of emitting more frequent supergiant pulses than the Crab pulsar.
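The limiting-fluence conversion above (1.4 Jy ms at ≈ 16.4 Mpc corresponding to ≈ 94 MJy ms at 2 kpc) is again a simple inverse-square computation, sketched below with the rounded distances from the text:

```python
f_min_jy_ms = 1.4            # Arecibo detection threshold toward M87
d_m87_kpc = 16.4e3           # ~16.4 Mpc
d_crab_kpc = 2.0             # ~2 kpc to the Crab pulsar

f_at_crab = f_min_jy_ms * (d_m87_kpc / d_crab_kpc) ** 2
print(f"{f_at_crab / 1e6:.0f} MJy ms")   # ~94 MJy ms
```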
We estimate the probable number of young pulsars in M87, starting from the Galactic canonical pulsar birth rate, β PSR,MW = 1.4 century −1 . The star formation rate (SFR) in M87 is ≈ 0.05 M ⊙ yr −1 (Terrazas et al. 2017), about 38 times smaller than that of the Milky Way (Chomiuk & Povich 2011). Scaling β PSR linearly with SFR, we expect ≤ 1 canonical pulsar in M87 younger than the Crab pulsar. The low SFR of M87 thus renders unlikely the prospect of detecting GPs from canonical pulsars in M87.
Aside from canonical pulsars, alternate potential burst sources include millisecond pulsars (MSPs) prevalent in globular clusters (Ransom 2008), magnetars theorized to power FRBs (Lyubarsky 2014; Beloborodov 2017; Margalit et al. 2019; Metzger et al. 2019), and binary neutron star mergers emitting radio precursors (Sridhar et al. 2021). M87 hosts a rich globular cluster system (Strader et al. 2011), with ≈ 650 globular clusters contained inside our Arecibo HPBW of 3.3′ (≈ 15.7 kpc). Galactic pulsar surveys have thus far uncovered ≈ 120 millisecond pulsars in 36 globular clusters 5 , equating to a mean discovery rate of ≈ 3 MSPs per globular cluster. Extending this rate to M87 using a linear scaling with SFR, we predict at least 50 MSPs to be contained inside our Arecibo beam. However, single pulse detections from such objects are extraordinarily unlikely, requiring exotic systems emitting bursts ∼ 10 8 times more energetic than GPs from Galactic MSPs. For example, the brightest GP detected from the Galactic MSP B1937+21 (Backer et al. 1982) has fluence, F ≈ 200 Jy µs (McKee et al. 2019) at 1.4 GHz. Placing this burst source at the distance to M87, we obtain a practically undetectable burst fluence, F ≈ 10 nJy ms ∼ 10 −8 F min . Moreover, our survey parameters together with the lack of baseband data (raw complex voltages) render potential GP detection from MSPs unlikely.
Magnetar births and neutron star mergers (Artale et al. 2020) are generally associated with gas-rich, star-forming regions in the Universe. However, such locations are scarce in a red elliptical galaxy like M87. Motivated by the hitherto non-detection of Galactic Center pulsars and the discovery of a single magnetar (Eatough et al. 2013) at the Galactic Center, Dexter & O'Leary (2014) suggest that strong ISM magnetic fields in the vicinity of a SMBH could boost magnetar production. However, a robust evaluation of burst detectability is difficult due to large uncertainties in intrinsic magnetar energy budgets, lengths of flaring and quiescent periods, and beaming geometries relative to our lines of sight.
SUMMARY AND CONCLUSIONS
We executed a set of 1.15-1.75 GHz observations of the core of M87 with the Arecibo radio telescope in order to search for millisecond bursts. Our observations lasted a total of 18 hours, of which ≈ 10 hours were spent on-source. Using a matched filtering approach, we searched our data for single pulses, at trial DMs up to 5500 pc cm −3 with boxcar filter widths between 0.3-123 ms. Adopting a 7σ detection criterion, we report the non-detection of astrophysical bursts in our data, implying a burst rate limit R ≲ 0.1 bursts hr −1 above F min ≈ 1.4 Jy ms. Invoking R(> F) ∝ F −1 , our observations constrain R to at least a factor of 25 better than previous single pulse searches of M87. We suggest a non-astrophysical origin for the Linscott & Erkes (1980) burst discoveries based on their non-confirmation in more sensitive subsequent surveys of M87.
We evaluated the significance of our radio burst non-detection in the context of different neutron star populations in M87. Millisecond pulsars are too weak to yield detectable emission at extragalactic distances, and the low star formation rate of M87 renders unlikely the existence of a significant Crab-like, GP-emitting pulsar population. Magnetars may however reside in the dense magneto-ionic medium near the SMBH of M87. Such magnetars may emit sufficiently energetic radio pulses for detection during their active phases. We encourage high sensitivity, multi-epoch observations of M87 to detect possible magnetar radio bursts, if they are favorably beamed towards our line of sight.
ACKNOWLEDGMENTS
AS thanks Scott M. Ransom for helpful software-related discussions. AS, SC and JMC acknowledge support from the National Science Foundation (NSF AAG−1815242). SC, JMC and FC are members of the NANOGrav Physics Frontiers Center, which is supported by the NSF award PHY−1430284.
The Arecibo Observatory was a facility of the National Science Foundation operated under cooperative agreement by the University of Central Florida and in alliance with Universidad Ana G. Mendez, and Yang Enterprises, Inc. The Arecibo observations presented here were gathered as part of program P3315, PI: A. Suresh.
This work used the Extreme Science and Engineering Discovery Environment (XSEDE) through allocation PHY200054, which is supported by National Science Foundation grant number ACI−1548562. Specifically, it used the Bridges system, which is supported by NSF award number ACI−1445606, at the Pittsburgh Supercomputing Center (PSC). AS acknowledges the XSEDE and PSC staff for their timely and helpful responses to queries.
The Same or Different? Convergence of Skin Gambling and Other Gambling Among Children
There is increasing attention on the introduction of gambling-like practices within video games. Termed convergence, this has been explored from the viewpoint of the product, examining similarities in game/gambling mechanics. Understanding convergence of practice is essential to map the epidemiology of these behaviours, especially among children. This paper focuses on the betting of skins within video games to explore co-occurrence with other forms of gambling among British children aged 11-16. Analysing the British Youth Gambling Survey showed that 39% of children who bet on skins in the past month had also gambled on other activities. Betting on skins and other forms of gambling increased with age and concordance of skin gambling/betting was greatest for those who also gambled online. Among gamblers, those who bet skins had higher rates of at-risk and problem gambling than those who did not (23% vs. 8%), though they had a greater breadth of gambling involvement. Skin gambling alone was not significantly associated with at-risk gambling when other forms of gambling activity were taken into account. Skin betting and gambling on other activities cluster together, especially where the medium underpinning the behaviours is the same. Children who engage in both skin gambling/betting and other forms of gambling should be considered at-risk for the experience of harms because of their heightened engagement in gambling and gambling-like activities.
Introduction
New media and its associated technological infrastructure have created conditions in which forms of gambling can be, and increasingly are, incorporated into digital life and practice (Macey and Hamari 2018a; Griffiths et al. 2013). This is particularly true within video games, which incorporate relatively new and emerging practices that replicate and reproduce gambling-like activities within this medium. These practices include loot boxes, where players pay to 'open' a virtual box in the hope of it containing in-game items of significantly higher value than their original outlay, or the gambling or betting of 'skins' (decorative in-game items) through various mediums (Macey and Hamari 2018a). There has been much consideration of the intersection between video game participation and engagement in other risky practices, including gambling (Macey and Hamari 2018b), and it has been suggested that video game engagement could serve as a gateway into other gambling activities, though as Macey and Hamari (2018b) point out, evidence on this is mixed.
These studies have tended to focus on the relationship between any form of video game play and gambling behaviour (McBride and Derevensky 2017). However, within video games, an increasing number of gambling-like activities are available, and less consideration has been given to the intersection of these 'within game' features with more traditional forms of gambling activity. These features, such as skin gambling/betting or the purchase of loot boxes, are facilitated by micro transactions within video games, whereby players pay to purchase in-game virtual items or access to certain game features. These transactions are an increasingly common and profitable part of the gaming ecosystem (Parent-Zone 2018; King and Delfabbro 2018). This then facilitates a range of other actions for players, such as the betting or trading of skins, mainly on third party websites. Skins are virtual items earned or purchased within video games, which have their own value within the gaming community. They are decorative items that have no bearing on the outcome of the game but are highly sought after nonetheless and first emerged in 2012 within the game Counter Strike: Global Offensive (ParentZone 2018). Skins are purchased from a digital marketplace and some skin items are more valuable than others, often based on rarity, popularity and potential use (ParentZone 2018; Gambling Commission 2017a). The value of these items, like 'hard' currency itself, gold or diamonds (or using seventeenth century examples, tulip bulbs) can fluctuate based on these features. Through third party access to digital marketplaces, where skins are bought and stored, skins can be bet or traded on other websites and thus the virtual value of the skin converted into real currency. These practices are examples of a common phenomenon within digital games, where a range of different actions and industries develop around and extend from the core game (Kerr 2006). With regards to skin betting and gambling, there is ambiguity around the nature of the practice, though the British Gambling Commission (the industry regulator) stated that they consider skins to have real world value and that betting of them represents a 'money's worth prize' (Gambling Commission 2017a). Skins therefore function as a form of crypto-currency with their own value but can also be converted into 'real world' currency. This suggests that the betting of these items extends beyond game play and could be considered gambling conducted via processes and websites where there is no robust age verification in place and which are complex to regulate.
Concerns about skin betting practices have been heightened with respect to children and the potential role these practices may play in shaping problematic gaming and/or gambling behaviours (Gainsbury et al. 2015; King and Delfabbro 2018). Described as convergence between gaming and gambling, three interlocking challenges have been considered: that these 'convergent' practices could prompt children to gamble more generally as a form of gateway activity, that engagement in this form of activity alone could be harmful, and that these practices normalise gambling for a cohort of children (ParentZone 2018).
In Britain, as elsewhere, children are singled out for specific regulatory protections from gambling with legal age limits placed on most commercial forms. Nonetheless, it is estimated that 12% of children aged 11-15 have participated in some form of gambling activity in the past week, with over half of this activity being on commercial and (technically) legally restricted forms (Wardle 2018a). Furthermore, it is estimated that around 0.8% of children aged 11-15 in Great Britain experience problems with their gambling behaviour, and early onset of gambling in childhood is a known risk factor for subsequent problems (Blinn-Pike et al. 2010; Forrest and McHale 2018). It is in this context that the challenges of these seemingly convergent digital practices are raised as they are viewed as providing the means for children to gamble and access gambling content. Politicians are giving this increasing attention, with questions asked by UK parliamentarians about the impact of skin betting on underage children (UK Parliament 2018).
The betting and gambling of skins is popular among children and young people. A recent survey of 13-18 year olds in Great Britain estimated that 10% had ever gambled or bet skins (ParentZone 2018). In 2017, a report by the British Gambling Commission found that 11% of 11-16 year olds had ever bet skins, and that 4% had done so in the past week. This made skin gambling/betting as popular as playing on fruit/slot machines and more popular than most other 'traditional' forms of gambling activity (Gambling Commission 2017b). Yet to date, there has been (to the author's knowledge) little empirical examination of the extent to which skin gambling among children is combined with other forms of gambling; empirical insight which is needed to explore whether gambling and gaming are mutually reinforcing consumptive practices, and if so, to what extent.
Objectives and Hypotheses
Understanding the potential impact of engagement in skin gambling among children requires a greater consideration of children's behaviours in order to map the basic epidemiology of practices. To date, notions of convergence between gambling-like activities and more traditional forms of gambling have tended to be examined by focusing on the products, with researchers noting the similarities of these practices, their common structural features and reward system mechanisms (King and Delfabbro 2018;McBride and Derevensky 2017). It is, however, vitally important to understand convergence of behaviours, especially if theories about one practice leading to another are to be better explored. This research uses nationally representative data of children aged 11-16 to explore this and to estimate: (a) the extent to which skin gambling and betting among 11-16 years olds is combined with other, more 'traditional' forms of gambling; (b) how the prevalence of skin gambling (alone and in combination with other forms of gambling) varies by different socio-demographic and economic characteristics; and (c) whether rates of problem and at-risk gambling vary by engagement in skin betting/ gambling.
It is hypothesised that skin gambling and other, more 'traditional' forms of gambling will cluster together, given the similarities between the practices, meaning that those who are interested in one form of practice are also likely to be interested in others (H1). It is also hypothesised that this clustering will be socially patterned, being more common among certain types of children, especially boys (H2) and those from more disadvantaged backgrounds (H3). Finally, it is hypothesised that children who participate in skin gambling and other forms of gambling will display greater levels of at-risk or problem gambling (H4), as a function of their greater involvement with gambling and gambling-like activities more generally.
Data
Secondary analysis of the 2017 Youth Gambling Survey, conducted for the British Gambling Commission by Ipsos Mori via their youth omnibus survey, was undertaken.
The youth omnibus collects survey information from a random sample of school-aged children in years 7-11 on a range of topics (funded by different clients). The Gambling Commission funds a subsection of the questionnaire to collect some data about gambling behaviour. Overall, 446 secondary schools were randomly chosen from the Edubase list in England and Wales and from a listing provided by the Scottish Government in Scotland. The school sample was stratified by Government Office Region and, within each stratum, further stratified by Local Authority, area deprivation and school size. Within each participating school, one curriculum year group (Year 7-Year 11) was selected to participate at random for each school. All members of the randomly-selected class group were asked to fill out a paper self-completion survey. Overall, 103 selected schools participated, giving a school-based response rate of 23%. Questionnaires were obtained from 2881 pupils aged 11-16 (Ipsos 2017).
Skin Betting Measures
In 2017, four questions about video games and skin betting and gambling were included for the first time. The following questions were asked: whether children ever played computer games or game-apps these days; those who had were then asked if they were aware of betting with in-game items and whether they had personally done so. Those who had bet or gambled using skins were asked how often they had done so (within the past 7 days, month or past year). Questions asked about skin betting were preceded by this introduction: 'when playing computer games/apps it is sometimes possible to collect in-game items (e.g. weapons, power-ups and tokens). For some games, it is possible to bet these in-game items for the chance to win more of them.' This is the definition of skin gambling/betting used within the survey and thus is the definition for the analysis presented in this paper. Using this information, children who had bet using skins in the past month were identified.
Gambling Measures
All children were asked whether they had used their own money in the past week on one of 14 forms of gambling activity, ranging from purchasing lottery tickets, scratchcards or private betting to betting in bookmakers, casinos or online gambling or betting. All children were also asked how often in the past year they had spent their own money on each of the following: lottery tickets, scratchcards, fruit machines, bingo, online gambling or betting and private betting or gambling with friends. For this analysis, those who had gambled on at least one of these six activities on a monthly basis and anyone who had gambled in the past week were defined as 'past month gamblers'. The absence of more detailed frequency data for some forms of gambling (for example, betting in bookmakers) may mean there are some false positives within the non-past month gambler group, though the forms of gambling excluded were very low prevalence (Gambling Commission 2017b). Gambling problems were measured using the DSM-IV-MR-J instrument. This was developed and validated by Sue Fisher specifically to assess gambling problems among adolescents (Fisher 2000). Responses to 12 items are scored and summed out of a maximum of 10 (there are three items where a score of one is given if any one of the three behaviours is endorsed). A score of 4 or more indicates problem gambling and a score of 2-3 indicates at-risk gambling (Fisher 2000; Olason et al. 2006; Castrén et al. 2015). Because of small base sizes (problem gambling n = 25), the at-risk and problem gambling categories have been combined in this analysis and, following Castrén et al. (2015), termed at-risk or problem gambling.
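For clarity, the score-to-category mapping described above amounts to the following simple rule (a sketch of the classification step only, not the full 12-item scoring logic):

```python
def classify_dsm_iv_mr_j(score):
    """Map a summed DSM-IV-MR-J score (0-10) to the categories described above.
    In this analysis, the two non-zero categories are combined into a single
    'at-risk or problem gambling' group."""
    if score >= 4:
        return "problem gambling"
    if score >= 2:
        return "at-risk gambling"
    return "no problem indicated"

print([classify_dsm_iv_mr_j(s) for s in (0, 2, 3, 4, 7)])
```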
Skin Betting and Gambling Measures
Using the measures described above, all children were allocated to one of the following groups: had bet with skins and gambled on other activities in the past month; had bet with skins in the past month only; had gambled on other activities in the past month only; had participated in neither in the past month. This was undertaken for participation in all gambling activities combined and for each of the six individual gambling activities where frequency data was available. These variables were used to explore the extent to which skin gambling may co-occur with certain types of gambling activity as well as gambling overall.
Socio-demographic/Economic Measures
The youth omnibus survey collects very limited details of children's socio-economic or demographic circumstances. This is partly because it is a school-based survey and limited questions can be asked about the home circumstances of their parents and families. It is also partly because it is an omnibus study and questionnaire space is reserved for paying clients. This is common among most surveys of children conducted within this setting. Therefore, demographic and socio-economic measures are limited but do include some key measures known to be associated with children's gambling behaviour, namely age, sex, ethnicity, self-rated academic performance and a measure of low-income status, represented by receipt of free school meals (Blinn-Pike et al. 2010;Forrest and McHale 2018). Because of small base sizes, age was grouped into 2-year bands and ethnicity grouped into White/ White British; Asian/Asian British, Black/Black British and mixed/other. Children reported how well they felt they were doing at school on a four point scale and responses grouped into those doing well versus not doing well. Children were asked whether they were in receipt of free school meals. Free school meals are only available to parents in receipt of income-based benefits and thus act as a proxy for identifying low-income families.
Analyses
Bi-variate associations between the prevalence of skin gambling (alone and in combination with other forms of gambling) and socio-demographic/economic characteristics were produced using SPSS's complex survey module. For bi-variate analyses, the complex survey function produces an adjusted Wald's F-test as its default test of significance, which assesses the extent to which the dependent variable (prevalence of skin betting, for example) varies by the independent variables (age or gender, for example), whilst taking into account the survey weighting, stratification and clustering of children within classes (Rao and Scott 1984). All p values cited in the tables relate to this type of statistical testing. Following Graham et al. (2014), observed-expected ratios were computed to assess the extent to which skin gambling and other forms of gambling cluster together. Observed-expected ratios are interpreted relative to their confidence intervals. An observed-expected ratio greater than one, with a confidence interval that does not straddle 1, represents a higher prevalence than would be expected if the behaviours were independent and indicates clustering of behaviours. Finally, two multivariate logistic regressions were run to (a) examine whether certain forms of gambling were associated with skin gambling in the past month and (b) examine whether skin gambling was associated with at-risk gambling, once engagement in other forms of gambling was taken into account. Checks for collinearity between individual forms of gambling activities were undertaken [assessment of phi correlations for binary data and variance inflation factor (VIF) diagnostic tests] and found to be minimal (available on request from the author). Both models also controlled for age, sex and academic attainment as bi-variate analyses showed these were associated with skin gambling. Regression models were produced using Stata v15, and took into account the survey weights and complex study design. Missing data was minimal and excluded from analyses. Ethical approval was provided by the London School of Hygiene and Tropical Medicine Ethics Committee (Ref. 15960).

Results

Table 1 shows overall prevalence of participating in skin gambling and other forms of gambling in the past month. Overall, 7% (95% CI 5.5, 7.5) of children had bet with skins and 16% (95% CI 13.6, 16.4) had gambled on other forms of gambling activity in the past month. Prevalence of past month betting for other gambling activities ranged from 2% (95% CI 1.7, 2.7) for online gambling or playing bingo to 8% (95% CI 7.3, 9.3) for betting with friends. Skin betting was the second most popular form of activity among children overall and, among boys, it was the most prevalent activity of those reported. Among girls, it was one of the least popular activities undertaken.
Engagement in Individual Activities
Rates of skin betting rose with age, from 4% (95% CI 2.3, 4.9) for those aged 11-12 to 7% (95% CI 5.4, 9.4) for those aged 15-16. Notably, rates of gambling on other activities did not vary significantly by age. Skin gambling did not vary significantly by ethnicity, self-reported academic performance or receipt of free school meals (Table 2).
Concordance of Skin Betting and Gambling on Other Activities
Overall, 3% (95% CI 1.9, 3.1) of children reported gambling on both skins and other forms of gambling activity in the past month. The observed-expected (O/E) ratio for both skin gambling and gambling on other activities among all children was 2.5 (95% CI 1.9, 3.2, see Table 3). Among skin bettors, 39% (95% CI 31.6, 46.4) had also bet on some other form of gambling whilst 61% had bet on skins alone. To look at this another way, 16% (95% CI 12.4, 19.6) of children who had gambled on other forms of activity had also bet on skins in the past month.
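To make the observed-expected calculation concrete, the following minimal Python sketch uses the rounded published prevalences; the study itself used weighted survey estimates with complex-survey confidence intervals, so the point estimate from these rounded inputs (≈ 2.7) differs slightly from the reported 2.5.

```python
p_skin = 0.07            # past-month skin gambling/betting prevalence
p_other = 0.16           # past-month prevalence of gambling on other activities
p_both_observed = 0.03   # past-month participation in both

p_both_expected = p_skin * p_other        # expected co-occurrence under independence
oe_ratio = p_both_observed / p_both_expected
print(f"O/E = {oe_ratio:.1f}")            # > 1 indicates clustering of the behaviours
```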
Observed-expected ratios between skin betting/gambling and gambling individually on each of the six main activities all indicated a significant level of clustering than would be expected given their population prevalence. The observed-expected ratios for skin betting/gambling and gambling online were almost six times higher than expected (O/E = 5.9, 95% CI 3.3, 8.5), whilst for fruit/slot machine betting it was over three times higher than expected (O/E = 3.3, 95% CI 2.1, 4.5). Among those who gambled online in the past month, 37% had also gambled with or bet skins.
The strength of the association between online gambling, fruit/slot machine gambling and skin betting was confirmed in multivariate regression analysis. The odds of having gambled or bet skins in the past month were 3.8 (95% CI 1.1-12.8) times higher among those who had also gambled online than those who had not, even after engagement in other forms of gambling, age, sex and academic attainment were taken into account. Odds of skin gambling were also higher among those who had bet on fruit/slot machines in the past year (2.7; 95% CI 1.4-5.2). However, all other individual forms of gambling activity were not associated with past month skin gambling in the regression model (Table 4).
Patterns of Convergence by Socio-demographic/Economic Characteristics
Boys were more likely than girls to report gambling on both skins and other forms of gambling, though this is unsurprising given the increased preference for both individual activities among boys. However, observed-expected ratios were higher for girls suggesting that despite these being very low prevalence activities for girls overall, they were highly likely to cluster together. The concordance of gambling both on skins and other activities increased with age, being higher among those aged 13-16 than those aged 11-12. Observed-expected ratios for both skin betting and gambling on other activities rose from 1.6 among those aged 11-12 [though the 95% CI straddled 1 (0.4-2.8)] to 2.4 for those aged 15-16 (95% CI 1.7-4.3). Gambling on both skins and other activities in the past month was higher among those who reported that they were not doing well at school than those who were doing well, though observed/expected ratios suggested that skin gambling and other forms of gambling clustered for both groups [O/E for those doing well = 2.5 (95% CI 1.6-3.0); not doing well = 2.9 (95% CI 1.7-4.2)] (Table 5).
At-Risk/Problem Gambling and Gambling Involvement Among Gamblers
Those who gambled/bet on skins and other types of gambling participated in a greater number of gambling activities (excluding skin gambling/betting), on average, than those who only gambled on other things. At-risk and problem gambling rates were significantly higher among those who had both bet with skins and engaged in other forms of gambling activity in the past month (23%, 95% CI 12.7-34.3) than those who had gambled on other activities alone (8%, 95% CI 4.7-10.5). However, in the multivariate logistic regression model, skin gambling or betting was not associated with at-risk gambling once engagement in other individual gambling activities was taken into account (Tables 6, 7).
Discussion
Both skin gambling/betting and gambling on other activities were relatively common among British children aged 11-16, despite some legal restrictions on participation. In Britain, participation in most forms of commercial gambling, including the National Lottery, is age-prohibited, yet many children still find ways to access these activities, with over half of children's gambling activity estimated to be on age-restricted forms (Wardle 2018a, b). Playing video games is even more common among this age group and, among boys, the gambling or betting of skins was the most prevalent form of 'gambling' activity. Evidence from this analysis shows that there is some overlap in who gambles or bets with skins and who takes part in other forms of gambling (confirming hypothesis 1), with 3% of children aged 11-16 saying that they did both. Whilst this may seem like a small number, this equates to around 100,000 children aged 11-16 in Britain. Furthermore, observed/expected ratios show that these two behaviours co-occur more than would be expected given their independent population prevalence, indicating greater overlap between these behaviours than expected. Notably, the greatest level of overlap was between skin betting and gambling and other forms of online betting or gambling. This is perhaps unsurprising, given the common medium underpinning these consumption practices. This therefore supports the notion of a 'convergence' in behaviours among some children who are engaging in both activities. These patterns of behaviour 'convergence' were greatest for boys, older children and those who felt they were doing less well at school, confirming hypothesis 2. However, there was little evidence that this clustering of behaviour occurred disproportionately among those from more disadvantaged backgrounds. This may be related to the measure (receipt of free school meals) used to proxy low-income households. However, the evidence is not unequivocal. The most common pattern among those who bet or gambled with skins was that they did not also engage in other forms of gambling. At younger age groups, children tended either to bet on skins or to gamble on other things, if they did this at all. Among older children, skin gambling/betting was more likely to be combined with gambling on other activities, though half of skin gamblers did this activity alone. This suggests a need for greater clarity when talking about processes of convergence between gambling and gaming. As Macey and Hamari (2018a) have highlighted, there is often a tendency with newly emerging consumptive practices to view them in silos rather than to situate them within the broader context of existing behaviours. This paper attempts to address this issue and suggests that there are four distinct groups of children: the majority who engage in neither skin gambling nor other forms of gambling; a significant minority who gamble but do not bet with skins (which includes a disproportionate number of girls given their lesser propensity to play video games); a minority who only bet or gambled with skins; and a further minority who bet and gambled skins and also gambled on other things.

Table 5. Observed/expected ratios for combinations of skin betting and gambling behaviour by socio-demographic/economic characteristics. (* Estimates not shown because of small cell sizes. ^ Because of rounding, the lower CI in some cells appears the same as the observed-expected ratio.)
For the vast majority of children, these behaviours are not converging, simply because they do not engage in these practices; yet for a minority they are converging, and these behaviours cluster.
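For readers unfamiliar with observed/expected ratios, the minimal sketch below shows the calculation. The prevalence figures are purely illustrative and are not the survey's estimates.

# Illustrative prevalence figures only; these are not the survey's estimates.
p_skin = 0.10     # past-month prevalence of skin gambling/betting
p_other = 0.12    # past-month prevalence of gambling on other activities
p_both = 0.03     # observed prevalence of doing both

expected_both = p_skin * p_other   # expected overlap if the two behaviours were independent
oe_ratio = p_both / expected_both  # a ratio above 1 means the behaviours co-occur more than chance predicts
print(f"Observed/expected ratio: {oe_ratio:.1f}")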
Notably, rates of at-risk and problem gambling were highest among gamblers who also engaged in skin gambling/betting (confirming hypothesis 4). This is to be expected: by definition, those engaging in both skin gambling/betting and other forms of gambling have higher levels of gambling involvement, because they both gambled on traditional forms of gambling and engaged in a similar practice within video games. However, it is also evident that this group was more involved in 'traditional' forms of gambling in their own right, with the average number of traditional gambling activities undertaken being higher among this group. Involvement theory postulates that the more someone engages in gambling, the more likely it is that they will experience harm from that engagement. This is often explored using the number of gambling activities someone undertakes as a measure of the breadth of their gambling engagement (LaPlante et al. 2014; Dixon et al. 2016). The results of the regression analysis showed that the relationship between skin gambling and at-risk gambling attenuated once involvement in a number of other forms of gambling was taken into account. This suggests that it is the combination of skin gambling with other forms that needs further consideration. Therefore, whilst children who gamble with skins as well as taking part in other forms of gambling should be considered a high-risk group for the attendant experience of harms, this is more likely related to their broader gambling repertoires than to their engagement in skin gambling or betting alone.
Notions of convergence underpin much academic thought about the seemingly mutually reinforcing practices of gaming and gambling. Scholarship has tended to approach this issue through analysis of the product, with gambling-like practices embedded within the video game eco-system heralded as examples of convergent practices and activities. However, there is notable conceptual ambiguity around the demarcation of gambling and play (Caillois 1958; Juul 2003), with some theorists querying where gaming stops and gambling begins. It is therefore important to assess the extent to which these are shared consumptive practices among individuals. This is especially so with children, who have been the subject of much concern around these developments. Whilst this paper provides some evidence that these behaviours co-occur for some, it does not explore how and why this occurs or, indeed, what type of practice children believe skin gambling to be. Previous research on young people's perceptions of gambling and gaming noted considerable ambiguity around how young people understand and define gambling activities (Korn 2005; Skinner et al. 2004). This ambiguity may arguably be heightened among children specifically because of the different values they attach to objects in lieu of access to monetary resources (Wardle 2018b). It is imperative, therefore, to understand how children themselves differentiate these consumptive practices and the meanings they attach to them.
Limitations
The analyses presented are based on self-reported behaviours from a survey of school-aged children and inherit the attendant issues of this methodology. This is a secondary analysis, meaning that it is limited to the questions designed and funded by the original survey commissioners (for example, only four questions were asked about video games and skin gambling/betting). Only a very limited number of socio-economic characteristics were included in the original survey, limiting the extent to which it is possible to explore how behaviours vary among different types of children. This also limits the range of covariates available to include in the multivariate models, and caution should be taken not to view these as models exploring the full range of factors associated with either skin gambling or at-risk gambling; they are presented to give greater descriptive insight into the relationships highlighted through the bivariate analyses. The definition of skin gambling/betting used is broad and is likely to include private betting/gambling among peers as well as the betting and gambling of skins on third-party websites. However, as the definition of gambling used in the survey also includes betting and gambling for money among peers, the two are comparable. There are no data about the sequence of activities, only that they were undertaken at broadly the same time. The measure used to represent past-month gambling is likely to slightly under-estimate gambling behaviour, as frequency of gambling was only collected for six main forms of gambling and excluded less prevalent forms (for example, betting in a bookmakers). Finally, the digital world is fast moving, and new products and practices emerge within a short space of time. Whilst these data were collected in 2017, meaning they are relatively recent, it is possible that the digital landscape has changed in the intervening period.
Conclusion
Convergence of digital practices is often examined via the lens of the product, where consideration is given to how seemingly similar practices are transferred from one medium to another. Whilst theories about the demarcation between gambling and gaming may be contested, the assimilation of gambling cues within gaming practices and ambiguity about where gaming ends and gambling begins cannot be denied. When considering these issues, it is vital to understand how such conceptual ambiguity manifests in everyday consumption and practice. This paper has shown that, among children, whilst gambling and gaming behaviours do cluster, and do so more for some groups than others, there is also a sizeable majority of children who engage in neither activity or who do one but not the other. This paper also provides some evidence of co-occurring practices among children, especially those conducted through the same medium, where there is a high level of concordance between skin gambling/betting and online gambling. Children who engage in both skin gambling/betting and other forms of gambling should be considered an at-risk group for the experience of harms because of their heightened engagement in gambling and gambling-like activities.
this position through government by the Gambling Commission (the regulator). In her previous employment (2002-2016), she worked on contracts funded by GambleAware, a national charity mandated by government to commission research into gambling in Great Britain. She is currently working on a small project for GambleAware looking at the relationship between gambling and suicide. Funds for GambleAware are raised by contributions from the industry, though decisions about what research to fund are made by the RGSB. Heather runs a research consultancy, Heather Wardle Research Ltd. She does not and has not provided consultancy services for the gambling industry.
The two-loop (Euler-Heisenberg-type) effective action for N = 2 supersymmetric QED is computed using the N = 1 superspace formulation. The effective action is expressed as a series in supersymmetric extensions of F^{2n}, where n=2,3,..., with F the field strength. The corresponding coefficients are given by triple proper-time integrals which are evaluated exactly. As a by-product, we demonstrate the appearance of a non-vanishing F^4 quantum correction at the two-loop order. The latter result is in conflict with the conclusion of hep-th/9710142 that no such quantum corrections are generated at two loops in generic N = 2 SYM theories on the Coulomb branch. We explain a subtle loophole in the relevant consideration of hep-th/9710142 and re-derive the F^4 term from harmonic supergraphs.
Introduction and outlook
In our recent paper [1], a manifestly covariant approach was developed for evaluating multi-loop quantum corrections to low-energy effective actions within the background field formulation. This approach is applicable to ordinary gauge theories and to supersymmetric Yang-Mills theories formulated in superspace. Its power is not restricted to computing just the counterterms -it is well suited for deriving finite quantum corrections in the framework of the derivative expansion. More specifically, in the case of supersymmetric Yang-Mills theories, it is free of some drawbacks still present in the classic works [2] (such as the splitting of background covariant derivatives into ordinary derivatives plus the background connection, in the process of evaluating the supergraphs).
As a simple application of the techniques developed in [1], in this note we derive the two-loop (Euler-Heisenberg-type [3,4,5]) effective action for N = 2 supersymmetric QED formulated in N = 1 superspace. This is a supersymmetric generalization of the two-loop QED calculation by Ritus [6] (see also follow-up publications [7,8,9,10,11]). It is curious that the two-loop QED effective action [6] was computed only a year after the work by Wess and Zumino [12] that stimulated widespread interest in supersymmetric quantum field theory. To the best of our knowledge, the Ritus results have never been extended before to the supersymmetric case 1 .
Our interest in N = 2 SQED, and not the 'more realistic' N = 1 SQED, is motivated by the fact that there exist numerous (AdS/CFT-correspondence inspired) conjectures about the multi-loop structure of (Coulomb-branch) low-energy actions in extended superconformal theories, especially the N = 4 SYM theory, see, e.g. [13] for a discussion and references. None of these conjectures are related to N = 2 SQED which is, of course, not a superconformal theory. We believe, nevertheless, that the experience gained and lessons learned through the study of N = 2 SQED should be an important stepping stone towards testing these conjectures.
An unexpected outcome of the consideration in this paper concerns one particular conclusion drawn in [14] on the basis of the background field formulation in N = 2 harmonic superspace [15]. According to [14], no F 4 quantum correction occurs at two loops in generic N = 2 super Yang-Mills theories on the Coulomb branch, in particular in N = 2 SQED. However, as it will be shown below, on the basis of the background field formulation in N = 1 superspace, there does occur a non-vanishing F 4 two-loop correction in N = 2 SQED. Unfortunately, the analysis in [14] turns out to contain a subtle loophole related to the intricate structure of harmonic supergraphs. A more careful treatment of two-loop harmonic supergraphs, which will be given in the present paper, leads to the same non-zero F 4 term in N = 2 SQED at two loops as that derived using the N = 1 superfield formalism.
Some time ago, Dine and Seiberg [16] argued that the F 4 quantum correction is oneloop exact on the Coulomb branch of N = 2, 4 superconformal theories. It was also shown [17,18] that there are no instanton F 4 corrections. The paper [14] provided perturbative two-loop support for the Dine-Seiberg conjecture. Since the two-loop F 4 conclusion of [14] is no longer valid, it would be extremely interesting to carry out an independent calculation of the two-loop F 4 quantum correction in N = 2 superconformal theories (it definitely vanishes in N = 4 SYM).
The present paper is organized as follows. In section 2 we review, following [1], the structure of exact superpropagators in a covariantly constant N = 1 vector multiplet background. Section 3 contains the N = 2 SQED setup required for the subsequent consideration. The one-loop effective action for N = 2 SQED is reviewed in section 4. The two-loop effective action for N = 2 SQED is derived in section 5 -the main original part of this work. In section 6 we re-derive the two-loop F 4 quantum correction using the harmonic superspace formulation for N = 2 SQED. The salient properties of the N = 1 parallel displacement propagator are collected in appendix.
Exact superpropagators
In this section we review, following [1], the structure of exact superpropagators in a covariantly constant N = 1 vector multiplet background. Our consideration is not restricted to the U(1) case and is in fact valid for an arbitrary gauge group. The results of this section can be used for loop calculations, in the framework of the background field approach, of special sectors of low-energy effective actions in generic N = 1 super Yang-Mills theories. They will be used in the next sections to derive the two-loop (Euler-Heisenberg-type) effective action for N = 2 SQED.
Green's functions in N = 1 super Yang-Mills theories are typically associated with covariant d'Alembertians constructed in terms of the relevant gauge covariant derivatives with D A the flat covariant derivatives 2 , and A A (z) the superfield connection taking its values in the Lie algebra of the gauge group. So we start by recalling the algebra of gauge covariant derivatives: Here the spinor field strengths W α andWα obey the Bianchi identities There are three major d'Alembertians which occur in covariant supergraphs [20]: (i) the vector d'Alembertian ✷ v ; (ii) the chiral d'Alembertian ✷ + ; and (iii) the antichiral d'Alembertian ✷ − . The vector d'Alembertian is defined by Among its important properties are the identities The covariantly chiral d'Alembertian is defined by As can be seen, the operator ✷ + acts on the space of covariantly chiral superfields. The antichiral d'Alembertian is defined similarly, The operators ✷ + and ✷ − are related to each other as follows: Additional relations occur for an on-shell background In what follows, the background vector multiplet is chosen to be covariantly constant and on-shell, It is worth noting that the first requirement here implies that the Yang-Mills superfield belongs to the Cartan subalgebra of the gauge group.
Associated with ✷ v is a Green's function G(z, z ′ ) which is subject to the Feynman boundary conditions and satisfies the equation (2.11) It possesses the proper-time representation The corresponding heat kernel 3 [1] is where the determinant is computed with respect to the Lorentz indices, and I(z, z ′ ) is the so-called parallel displacement propagator, see the Appendix for its definition and basic properties. The supersymmetric two-point function ζ A (z, z ′ ) = −ζ A (z ′ , z) = (ρ a , ζ α ,ζα) is defined as follows: Let us introduce proper-time dependent variables Ψ(s) ≡ U(s) Ψ U(−s). With the notation for the buiding blocks appearing in the right hand side of (2.13) we then get 3 This heat kernel was first derived in the Fock-Schwinger gauge in [21].
One also finds [1] U(s) I(z, z ′ ) = exp s 0 dt Ξ(ζ(t), W(t),W(t)) I(z, z ′ ) , (2.18) where Ξ(ζ(s), W(s),W(s)) = U(s) Ξ(ζ, W,W) U(−s) and In the case of a real representation of the gauge group, the Green's function G(z, z ′ ) should be realizable as the vacuum average of a time-ordered product, for a real quantum field Σ(z). Therefore the corresponding heat kernel should possess the property As is seen from (2.13), this property is only obvious for the sub-kernelK(z, z ′ |s) defined by However, using the properties of the parallel displacement propagator listed in the Appendix, one can show 22) and this in fact implies (2.20).
Associated with the chiral d'Alembertian ✷ + is a Green's function G + (z, z ′ |s) which is covariantly chiral in both arguments, is subject to the Feynman boundary conditions and satisfies the equation Under the restriction D α W α = 0, this Green's function is related to G(z, z ′ ) as follows: The corresponding chiral heat kernel 4 turns out to be It is an instructive exercise to check, using the properties of the parallel displacement propagator given in the Appendix, that K + (z, z ′ |s) is covariantly chiral in both arguments.
For completeness, we also present the antichiral-chiral kernel The parallel displacement propagator is the only building block for the supersymmetric heat kernels which involves the naked gauge connection. In covariant supergraphs, however, the parallel displacement propagators, that come from all possible internal lines, 'annihilate' each other through the mechanism sketched in [1].
A very special and extremely simple type of background field configuration, is suitable for computing exotic low-energy effective actions of the form which are of some interest in the context of the Veneziano-Yankielowicz action [23] and its recent generalizations destined to describe the low-energy dynamics of the glueball superfield S = tr W 2 . Under the constraint (2.30), the kernel (2.13) becomes while the chiral kernel (2.26) turns into 5 (2.33) Here the parallel displacement propagator is completely specified by the properties: The action of N = 2 SQED written in terms of N = 1 superfields is The dynamical variables Φ and V describe an N = 2 Abelian vector multiplet, while the superfields Q andQ constitute a massless Fayet-Sohnius hypermultiplet. The case of a massive hypermultiplet is obtained from (3.1) by the shift Φ → Φ + m, with m a complex parameter. 6 Introducing new chiral variables with σ = (σ 1 , σ 2 , σ 3 ) the Pauli matrices, the action takes the (real representation) form We are interested in a low-energy effective action Γ[W, Φ] which describes the dynamics of the N = 2 massless vector multiplet and which is generated by integrating out the 5 A simplified version of the chiral kernel (2.33) has recently been used in [24] to provide further support to the Dijkgraaf-Vafa conjecture [25]. 6 The action of N = 1 SQED is obtained from (3.1) by discarding Φ as a dynamical variable, and instead 'freezing' Φ to a constant value m.
µ is the renormalization scale and Ω some real analytic function. The first term on the right hand side of (3.4) is known to be one-loop exact in perturbation theory, while the second term receives quantum corrections at all loops.
To evaluate quantum loop corrections to the effective action (3.4), we use the N = 1 superfield background field method in its simplest realization, as we are dealing with an Abelian gauge theory. Let us split the dynamical variables as follows: where Φ, V and Q are background superfields, while ϕ, v and q are quantum ones. As is standard in the background field approach, (background covariant) gauge conditions are to be introduced for the quantum gauge freedom while keeping intact the background gauge invariance. Since we are only interested in the quantum corrections of the form (3.4), it is sufficient to consider simple background configurations Upon quantization in Feynman gauge, we end up with the following action to be used for loop calculations with ✷ = ∂ a ∂ a . It is understood here that the quantum superfields q and q † are background covariantly chiral and antichiral, respectively, From the quadratic part of (3.7) one reads off the Feynman propagators Here the Green's function G(z, z ′ ) transforms in the defining representation of SO(2) ∼ = U(1), and satisfies the equation (2.11) with m 2 =ΦΦ. It is given by the proper-time representation (2.12) with the heat kernel K(z, z ′ |s) specified in (2.13). It is understood that the field strengths W α ,Wα and their covariant derivatives (such as F ab ) are related to W α ,Wα as follows (3.10)
One-loop effective action
For the sake of completeness, we discuss here the structure of the one-loop effective action [26,21,27,28,29]. Its formal representation is (see [19] for more details) where ω is the regularization parameter (ω → 0 at the end of calculation), and µ the normalization point. The functional trace of the chiral kernel is defined by Using the explicit form of the chiral kernel (2.26), we obtain where we have introduced the notation For the background superfields under consideration, we have The latter objects turn out to appear as building blocks for the eigenvalues of F = (F a b ) which are equal to ±λ + and ±λ − , where This gives Now, the effective action takes the form where Introducing a new function ζ(x, y) related to Υ by [29] Υ(x, allows one to readily separate a UV divergent contribution and to represent the finite part of the effective action as an integral over the full superspace. Making use of eq. (4.5) and the standard identity for the renormalized one-loop effective action 7 one ends up with with Ψ andΨ defined in (3.5). 7 In deriving the effective action (4.11), we concentrated on the quantum corrections involving the N = 1 vector multiplet field strength and did not take into account the effective Kähler potential , as well as higher derivative quantum corrections with chiral superfields. A derivation of K(Φ, Φ) using the superfield proper-time technique was first given in [30,19], see also more recent calculations [31,32] based on conventional supergraph techniques. The leading higher derivative quantum correction with chiral superfields was computed in [30].
Two-loop effective action
We now turn to computing the two-loop quantum correction to the effective action. There are three supergraphs contributing at two loops 8 , and they are depicted in Figures 1-3. The contribution from the first two supergraphs is The third supergraph leads to the following contribution It turns out that the expression for Γ I+II can be considerably simplified using the properties of the superpropagators and their heat kernels, which were discussed in sect.
SinceD
The latter relation in conjunction with the symmetry property leads to the new representation for Γ I+II In accordance with (2.5), we can represent and this identity turns out to be very useful when computing the action of the commutators of covariant derivatives in (5.5) on the Green's functions. A direct evaluation gives where we have omitted all terms of at least third order in the Grassmann variables ζ α ,ζα and W α ,Wα as they do not contribute to (5.5). It is easy to derive withF the Hodge-dual of F . Here we have taken into account the fact that F = F σ 2 .
As the propagator v(z)v(z ′ ) contains the Grassmann delta-function δ 2 (ζ)δ 2 (ζ), the integral over θ ′ in (5.5) can be trivially done. Replacing the bosonic integration variables in (5.5) by the rule {x, x ′ } → {x, ρ}, as inspired by [6], we end up with The parallel displacement propagators that come from the two Green's functions in (5.5) annihilate each other, in accordance with (A.5).
Using the explicit structure of the chiral kernel (2.26), it is easy to calculate the contribution from the third supergraph Following the non-supersymmetric consideration of Ritus [6], it is useful to introduce the generating functional of Gaussian moments where A is defined in (5.11) and is such that ηA = (η ab A b c ) is symmetric, with η ab the Minkowski metric. From this we get two important special cases: These results allow us to do the Gaussian ρ-integrals in (5.10) and (5.12).
As a next step, we have to compute the determinant of A, with A defined in (5.11), as well as the expression tr F sinh(sF )
The proper-time u-integrals in (5.10) and (5.12) are identical to the ones considered by Ritus [6]. Two integrals occur, (5.20) and (5.21), and their direct evaluation can be carried out explicitly. However, the expressions obtained do not make manifest the fact that the two-loop effective action is free of any divergences, unlike the two-loop QED effective action [6]. This is why we would like to describe a different approach to computing the proper-time integrals, which is most efficient for evaluating effective actions in the framework of the derivative expansion.
The integrands in (5.20) and (5.21) involve two or three factors of (u −1 + a ± ) −1 , with a ± defined in (5.17). With the notation x = st/u, one can represent is regular at s = 0. Using these decompositions and replacing the integration variable u → x = st/u, one can easily do the integrals (5.20) and (5.21). Now, if one takes into account the explicit form of P ± , see eq. (5.18), as well as the structure of the effective action (5.24), it is easy to see that all the remaining proper-time s-and t-integrals are of the following generic form (after the Wick rotation s = −is and t = −it) with m, n and p non-negative integers such that p ≤ m + n + 1.
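The displayed equations for these generic integrals were lost in extraction. As a hedged reconstruction, not a quotation of the original, an integral of this type can be taken (in units where the relevant mass scale is set to one) to be of the form

\[
I_{m,n,p} \;=\; \int_0^{\infty}\!\!\int_0^{\infty} \mathrm{d}s\,\mathrm{d}t\;
\frac{s^{m}\,t^{n}}{(s+t)^{p}}\; e^{-(s+t)}
\;=\; \frac{m!\,n!}{(m+n+1)!}\;\Gamma(m+n+2-p) ,
\]

obtained by substituting s = λu, t = λ(1 − u): the u-integral gives the Beta function B(m+1, n+1) and the λ-integral gives Γ(m+n+2−p), which converges precisely when p ≤ m + n + 1, in line with the condition quoted above.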
Recently, Dunne and Schubert [33] obtained closed-form expressions for the two-loop scalar and spinor QED effective Lagrangians in the case of a slowly varying self-dual background. In the supersymmetric case, the effective action vanishes for a self-dual vector multiplet. Nevertheless, the results of [33] may be helpful in order to obtain a closed-form expression for a holomorphic part of the two-loop effective action (5.24) with Ψ 2 andΨ 2 defined in (3.5).
The effective action (5.24) contains supersymmetric extensions of the terms F 2n , where n = 2, 3, . . ., with F the electromagnetic field strength. Of special importance is the leading F 4 quantum correction, whose manifestly supersymmetric form is It can be singled out from (5.24) by considering the limit B,B → 0 in conjunction with Direct evaluation, with use of (5.27), gives This result turns out to be in conflict with a prediction made in [14] on the basis of the background field formulation in N = 2 harmonic superspace [15]. According to [14], no F 4 quantum correction occurs at two loops in generic N = 2 super Yang-Mills theories on the Coulomb branch.
Unfortunately, the consideration of [14] contains a subtle loophole. Its origin will be uncovered in the next section. It will also be shown that a careful evaluation of two-loop N = 2 harmonic supergraphs leads to the same result (5.30) we have just obtained from N = 1 superfields.
The two-loop F 4 quantum correction from harmonic supergraphs
In this section, we will re-derive the two-loop F 4 quantum correction using an off-shell formulation for N = 2 SQED in harmonic superspace [35].
The N = 2 harmonic superspace R 4|8 × S 2 extends conventional superspace, with coordinates z M = (x m , θ α i ,θ iα ), where i = 1, 2, by the two-sphere S 2 = SU(2)/U(1) parametrized by harmonics, i.e., group elements The main conceptual advantage of harmonic superspace is that both the N = 2 Yang-Mills vector multiplets and hypermultiplets can be described by unconstrained superfields over the analytic subspace of R 4|8 × S 2 parametrized by the variables , where the so-called analytic basis is defined by The N = 2 Abelian vector multiplet is described by a real analytic superfield V ++ (ζ). The charged hypermultiplet is described by an analytic superfield Q + (ζ) and its conjugatȇ Q + (ζ). The classical action for N = 2 SQED is Here W (z) is the N = 2 chiral superfield strength [36], dζ (−4) denotes the analytic subspace integration measure, and the harmonic (analyticity-preserving) covariant derivative is D ++ = D ++ ± i V ++ when acting on Q + andQ + , respectively. The vector multiplet kinetic term in (6.3) can be expressed as a gauge invariant functional of V ++ [37].
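The display defining the harmonics near the beginning of the preceding paragraph did not survive extraction. Assuming the standard harmonic-superspace conventions of the literature (an assumption here, not a quotation of this paper), the harmonics satisfy

\[
(u_i^{-},\,u_i^{+}) \in SU(2), \qquad u^{+i}u_i^{-} = 1, \qquad \overline{u^{+i}} = u_i^{-} ,
\]

with SU(2) indices raised and lowered by the antisymmetric tensors ε^{ij}, ε_{ij}, and with the harmonics carrying U(1) charges ±1.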
Upon quantization in the background field approach [15], the quantum theory is governed by the action (lower-case letters are used for the quantum superfields) which has to be used for loop calculations. The relevant Feynman propagators [35,15] are with δ (2,2) A (ζ 1 , ζ 2 ) the analytic delta-function [35], Here the two-point function ρ a is defined similarly to its N = 1 counterpart (2.15). The covariantly analytic d'Alembertian [15] is where W = ±W when acting on q + andq + , respectively. The algebra of N = 2 gauge covariant derivatives D A = (D a , D i α , Dα j ) = D A + i A A derived in [36] can be expressed in the form Let us recall the argument given in [14] that no non-holomorphic quantum corrections of the form occur at two loops. By definition, the two-loop effective action is and it is generated by a single supergraph depicted in Figure 4.
Figure 4: Two-loop harmonic supergraph
Following [14], the crucial step is to lift the analytic subspace integrals to those over the full superspace, by representing, say, q + (1)q + (2) in the form and then using the standard identity dζ (−4) (D + ) 4 L(z, u) = d 12 zdu L(z, u) . (6.12) Since we are only after the quantum correction (6.9), it now suffices to approximate, in the resulting two-loop expression the covariantly analytic d'Alembertian by a free massive one, Now, the part of the integrand in (6.13), which involves the Grassmann delta-functions and spinor covariant derivatives, becomes 15) and this expression is obviously zero. Therefore, one naturally concludes H(W,W ) = 0.
Unfortunately, there is a subtle loophole in the above consideration. The point is that upon removing the two factors of (D + ) 4 from the hypermultiplet propagator (6.11), in order to convert analytic integrals into full superspace integrals, we apparently end up with a more singular harmonic distribution, A (−3,−3) (1, 2), than the original propagator. As a result, the expression (6.13) contains the product of two harmonic distributions and such a product is ill-defined. To make the consideration sensible, we have to regularize the harmonic distributions δ (−2,2) (u 1 , u 2 ) and (u + 1 u + 2 ) −3 from the very beginning. However, the analytic delta-function (6.6) is known to be analytic in both arguments only if the right hand side involves the genuine harmonic delta-function, see [35] for more details. With a regularized harmonic delta-function, however, one has to use a modified (but equivalent) expression for the analytic delta-function [35] This expression is good in the sense that it allows for a regularized nonsingular harmonic delta-function. But it is more singular in space-time than (6.6) -an additional source for infrared problems in quantum theory, as will be demonstrated shortly.
Using the alternative representation (6.17) for the analytic delta-function, we would like to undertake a second attempt to evaluate H(W,W ). Let us start again with the expression (6.10) for Γ two−loop in which the gluon propagator now reads In contrast to the previous consideration, we now make use of the two factors of (D + ) 4 from v ++ (1) v ++ (2) in order to convert the analytic subspace integrals into ones over the full superspace, thus leaving the hypermultiplet propagators intact. Such a procedure will lead, up to an overall numerical factor, to This does not seem to be identically zero and, in fact, can easily be evaluated. The crucial step is to make use of the identity [34] ( If we are only after H(W,W ), the covariantly analytic d'Alembertian can again be approximated as in (6.14). Because of the Grassmann delta-function in the first line of (6.19), only the first term in the right-hand side of (6.20) may produce a non-vanishing contribution. With the harmonic identities it can be seen that H(W,W ) is determined by the momentum integral The bad news is that this integral is both UV and IR divergent. This is the price one has to pay for having made use of the IR-unsafe representation (6.17).
It is of course possible to regularize the integral (6.23) and, then, extract a finite part. Instead of practising black magic, however, we would like to present one more calculation that will lead to a manifestly finite and well-defined expression for H(W,W ). The idea is to take seriously the representation (6.10) and stay in the analytic subspace at all stages of the calculation, without artificial conversion of analytic integrals into those over the full superspace (and without use of the IR-unsafe representation (6.17)). Instead of computing the contribution (6.9) directly, in such a setup we should actually look for an equivalent higher-derivative quantum correction of the form We are going to work with an on-shell N = 2 vector multiplet background In the analytic basis, the delta-function (6.6) can be represented as [35] δ Let us use this expression for δ (2,2) A (ζ 1 , ζ 2 ) in the gluon propagator v ++ (ζ 1 ) v ++ (ζ 2 ) , as defined in eq. (6.5), which appears in the effective action (6.10). It is obvious that the operator (1/✷ 1 ) acts on δ 4 (x 1 − x 2 ) only. The Grassmann delta-function, θ + 1 − (u + 1 u − 2 )θ + 2 4 , can be used to do one of the Grassmann integrals in (6.10). Similarly, the harmonic delta-function, δ (−2,2) (u 1 , u 2 ), can be used to do one of the harmonic integrals in (6.10). As a result, the hypermultiplet propagators in (6.10) should be evaluated in the following coincidence limit: θ 1 = θ 2 and u 1 = u 2 . To implement this limit, it is again advantageous to make use of the identity (6.20). It is not difficult to see that only the second term on the right of (6.20) can contribute. Each term in the operator ∆ −− , (6.21), contains two spinor derivatives. Taken together with the overall factor (D + ) 4 in (6.20), we have a total of six spinor derivatives. But we need eight such derivatives to annihilate the spinor delta-function δ 8 (θ 1 − θ 2 ) entering each hypermultiplet propagator. Two missing derivatives come from the covariantly analytic d'Alembertian. Introducing the Fock-Schwinger proper-time representation After that, it only remains to apply the identity (D + ) 4 (D − ) 4 δ 8 (θ − θ ′ ) θ=θ ′ = 1 (6.29) in order to complete the D-algebra gymnastics. The remaining technical steps (i.e. the calculation of Gaussian space-time integrals and of triple proper-time integrals) are identical to those described before in the N = 1 case. Therefore, we simply give the final result for the quantum correction under consideration: Harmonic superspace still remains to be tamed for quantum practitioners, and the present situation is reminiscent of that with QED in the mid 1940's. It is worth hoping that, as with QED, it should take no longer than half a decade of development for this approach to become a safe and indispensable scheme for quantum calculations in N = 2 SYM theories. |
Purpose. Based on the study of philosophical anthropological concepts, to highlight the project of personality in different historical periods, to reveal the meaning of humanistic issues in the postmodern epoch, to identify the essential features of the image of human of the second half of the XX – the beginning of the XXI century. Theoretical basis. The methodological basis of the article is the principles of historicism, integrity, objectivity regarding the mastery of the issue of person’s image in postmodernism. The research applied comparative-historical, culturologi-cal, analytical, axiological approaches to reveal the problem of individuality in the second half of the XX – the beginning of the XXI century. The theoretical basis of the article consists of scientific works in the field of philosophical anthropology, history, cultural studies, and aesthetics. Originality. The author revealed the peculiarities of transformation of the personality model from antiquity to postmodernism, specified the image of man of the second half of the XX – the beginning of the XXI century. Conclusions. The analysis of anthropological ideas of Western philosophy of different ages shows the variety of views about understanding the nature of the person, its complexity and ambiguity. In the epoch of postmodernism humanistic issues are of particular relevance, which is connected with social and political uncertainty, domination of mass consciousness, loss of national and cultural identity. The image of a person of this period is deprived of a solid foundation, it is blurred and relative. The destruction of faith in the absolute in the context of the second half of the twentieth century contributed to the formation of confidence in the interdependence of all things (including certain historical periods), raised the problem of the personality image to a new ontological level. Orientation in the achievements of European civilization, perception of its anthropological experience, intercultural dialogue contribute to the productive use of the achievements of mankind in order to understand the modern person and to form its adequate image. In its essence, postmodernism does not set the goal to realize a retrospection of subject type. However, separating from the cultural memory the excerpts of ideas about a person, by certain styles and directions, it builds on their formations its own eclectic image of the individual.
Introduction
Anthropology in the twentieth century made a significant contribution to the awareness of human nature, however, there were revealed certain limits of its understanding, which cannot be extended from the standpoint of classical philosophy. In postmodernism, humanism has been subjected to sharp criticism. Feeling of frustration, confusion, the absurdity of being, a playful and mocking attitude to life and the high ideals affirmed (at least at the level of assertion) European civilization led to a revised interpretation of the image of the individual and its concept.
Although postmodernism has declared itself as an ideological system since the second half of the twentieth century, it has not yet established itself as a holistic paradigm with well-defined views on eternal problems. This direction has cast doubt on the leading spheres of activity of the person, proclaimed a certain absurdity of existence, at the same time declaring the desire to "reopen" the subjective world of the individual, to find his lost "Self". Logic and rationalism, which were the basis of the modern era, are recognized as destroying human freedom and manifestations of violence against him.
Anthropological scientific inquiries of postmodernism gave rise to doubt on the dominant principles of humanism. Theorist of this period U. Eco suggested that the hero of this era feels very uncomfortable, he is lonely because he lost his spiritual orientation. Accordingly, the individual is afraid of being himself, seeking certain images and roles that would make him relevant and understandable to others, the same lonely and unknowable. Therefore, the personality of this time is difficult to understand at the level of philosophical ontological knowledge.
The theme of the person's essence activates such issues as: being, consciousness, integrity, etc. For their solution, modern humanities involve in dialogue religion, history, ethics, politics, sociology, psychology, cultural studies, etc. Their purpose is to restore the unity of man, to get as much information about him as possible in order to find the true "Self". It is a kind of perfect utopian idea of a future "monad" person, which is more a project than a real goal.
It can be stated that today philanthropic issues are revealed in various aspects: rethinking of the content and nature of the individual, the importance of religion and traditions in his formation, studying of different types of worldview and their impact on the person, reflecting of philosophical and cultural foundations of anthropological sciences, etc.
At the same time, the idea of the "death of man", which was expressed by M. Foucault in the twentieth century, is particularly relevant. The loss of subjectivity, the dependence of consciousness on social, political and mental factors are real threats posed by postmodernism.
A critical view of the individual in the second half of the twentieth century developed under the influence of the philosophy of F. Nietzsche, who opposed the tenets of modernity. The thoughts of the thinker have caused a duality in the perception of the modern hero. On the one hand, his value is his own potential, which contributes to the achievement of the true purpose of existence, on the other -he is constantly subject to attacks of illusions, fear, struggle for life and happiness.
F. Nietzsche's beliefs laid the foundations for the leading strategies of person interpretation that changed the limits of his existence: one emphasizes the possibilities of artistic worldview (J. Bataille, J. Lacan, M. Foucault); the other shatters the foundations of metaphysical thinking (M. Heidegger, J. Derrida, Y. Kristeva). The directions presented confirmed a contrasting view of the individual: he is no longer a hero but appears as a creature that lost his orienting points; personality is social, biological and mental in nature, but intersubjective in life.
Despite the fact that man, his relationship with society, the search for his own "Self" became the subject of study by many scientists (J. Baudrillard, R. Barthes, D. Derrida, J. F. Liotard, M. Foucault, M. Heidegger, etc.), the current stage of the development of the humanities requires the specification and objectification of homo sapiens model contained in the postmodern program.
Purpose
Based on the study of philosophical anthropological concepts, to highlight the project of personality in different historical periods, to reveal the meaning of humanistic issues in the postmodern epoch, to identify the essential features of the image of human of the second half of the XX -the beginning of the XXI century.
Statement of basic materials
The study of peculiarities of the type of postmodern person implies the researcher's address to the historical and cultural reconstruction of its ideal in certain historical periods. Human life can be interpreted as a kind of worldview, dialogue and mutual enrichment of different civilizations. There is a deep connection between the epochs, despite the fact that often each successive epoch denies the achievement of the previous one. Therefore, comprehension of the image of the person of the second half of the XX -beginning of the XXI century requires understanding of the formation and development of the fundamental elements of anthropological science.
The subject's model question was first raised in European philosophical thought in ancient times. In the works of eminent thinkers (Protagoras, Sophocles, Plato, Aristotle, etc.), this problem is considered in the aspect of domination of physicality. The perception of the flesh as a "face", identical to thinking and symbolizing the principle of beautiful individuality, is a formative element of this civilization. The person explicates in harmony with space and nature. This harmony builds the idea of a free individual. According to S. Baranov (2009): "The image of man is a certain expression of dignity and freedom, resilience to the blows of fate, internal change of man" (p. 7). However, the hero of this period is not free enough, he must obey to the power of the absolute. Ancient personality is a means that promotes connection to higher values, it does not yet have a sense of self-importance. However, ethical categories such as charity, humanism, good, etc., are already being developed into the historical stage, which testifies to the emergence of a new axiological orientation that laid the foundation for medieval philanthropic views.
The eminent thinker M. Berdyaev (1933) expressed an opinion explaining the reasons for the development of a new type of middle-aged individual: "Christianity freed man from the power of cosmic infinity, into which he was immersed in the ancient world, from the spirits and demons of nature. It put him on his feet, strengthened him, made him dependent on God, not on nature" (p. 36). Theocentric views of the Middle Ages affirm the idea of the person as the image and likeness of the Lord. Personality moves to the forefront, though submissive to Him, but has the right to be responsible for himself. This idea is a source of understanding of the religious anthropocentric concept. Inner existence is the centre of knowledge for the fathers of the church. The soul is higher than space, it is associated with the Creator. Therefore, asceticism, renunciation of everything external and secular become the basis of life. Man is interpreted in two dimensionshe is free, immortal, God-like, the centre and purpose of the universe, but in consequence of the Fall, it is internally dissociated. That is, in the presented age, a type of believing subject appears, who lives righteously, directed to God -the source of creative activity.
The Renaissance proclaims new values, affirms a philanthropic system whose starting point was humanism -proclamation of the special purpose of the individual. It is based on three leading principles: liberation of man from rigid church dogmatism, awareness of his uniqueness, return to ancient ideals.
Pico della Mirandola (Italian Renaissance thinker) formulates a defining view of the individual: "… the human being is the intermediary between creatures … set midway between fixed eternity and fleeting time" (Bragina, 2001, p. 331). This postulate makes the subject a certain centre that connects the ideological world and the material (which, unlike the Middle Ages, is no longer interpreted as inferior). Accordingly, all needs, physical and sensual components become equal. Anthropological philosophical pursuits of the Renaissance lead to the following questions: who is a person, is he insignificant or mighty? The desire to find answers to them is in the treatises of Francesco Petrarca "Remedies for Fortune Fair and Foul", Facio "Of the Excellences and Outstanding Character of Man", Lorenzo Valla "On pleasure. Of the True and the False Good", etc. They affirm the image of the hero, considered the measure of all things. He harmoniously combines natural and social phenomena, nobility, divine basis, pure soul, beautiful body, morality, fight for good and justice. An individual is like God in that he is capable of creative selfdetermination, universal, not limited.
Baroque is a type of culture that originated in Europe in the seventeenth century, defining a new stage in the development of civilization. It develops a kind of doctrine of the world and the personality, which, though based on the modified ideas of previous epochs, has a unique style. The image of the person is transformed, indicated by the tension of feelings. In this period philanthropic quest is growing, becoming ontological. Theorists of this direction have found contradictions between the person and society, nature, laws of existence.
The sense of contradiction separates the integrity of the individual. However, the time presented laid the groundwork for the emergence of another type of creative subject -capable of subtle feeling, suffering, experiencing, thinking and fantasizing.
As A. V. Lipatov states: Baroque determined the problem of man in the philosophical, sociological, psychological aspect by scientists, ideologists, and artists. The aspirations of the era resulted in the psychological deepening of the conflict and the associated mainstreaming of the character, not as a type, but as a personality. The new concept of man became also reflected in the specificity of mass perception. The interest in the individual, the subjective, the unique in its singularity -all this became a sign of the times, which left the imprint on aesthetic tastes, literary teachings, the sphere of reception and the worldview in general. (Lipatov, 1977, p. 218) The image of the individual is a complex phenomenon that is constantly evolving and changing. The tension of time brings theatricality, pathos and dynamism to its understanding.
Classicism with rational thinking transforms views on the concept of a person -autonomous one, who can deeply and soberly evaluate phenomena, make predictions about their consequences. He influences social and historical events through the power of intelligence. The ideal became a subject with strong spirit. He sacrifices his interests and even his life for the sake of the public good. The person is regarded as one of the pillars of the universe, a translator of the social system, which through the mind asserts higher moral values.
The era of the ХІХ century is characterized by a special understanding of personality and its capabilities. P. Gurevych notes: Romantics have suggested that human existence is much more than its social dimension. The individual is cramped in the available historical space. He is easily transferred through imagination to other cultural worlds, many of which he creates himself. By denying reality, the romantic enters into unknown zones of his own being. Transforming reality, he embraces something unique, independent, which is inherent only to him as a living being. (Gurevych, 2001, p. 97) The philosopher justifies the view that the subject is interesting due to his spiritual qualities, creative attitude to life.
Nineteenth-century anthropology assumes that a person is a microcosm, in which all the harmony of the cosmos is potentially embedded. Emphasis is placed on its uniqueness, and the tradition of individualization is deepened. I. Kant, I. Fichte, F. Schelling (the Doctrine of the Absolute) laid the foundations for understanding the essence of man. The ontological status of the subject is loneliness. He is disappointed in the world that surrounds him, so he sinks into selfobservation and seeks unity with the Divine Principle. Herewith, romantics have developed a productive idea of personality, according to which it is able to create its own civilizations that do not exist in reality, but transform its inner being and environment.
In the modernist period we can trace a significant development of culture, science, technology, medicine. The huge creative potential of the hero is revealed. The humanistic problem is becoming dominant. The reason for this situation is a departure from the spiritual origins, the affirmation of atheistic consciousness. One of the principles of the era is the denial of the "old", which leads to the shattering of the value system.
An important postulate for the comprehension of this period is the "death of God" and the idea of a "superhuman" expressed by Nietzsche. In light of them, a new image of the personthe lonely one -begins to develop, moving away from society, with its own weighty inner world. The modernist conception of the individual suggests two approaches: objectivist and subjectivist ones. The first one comprehends the personality through the experimental study of its individual components (emotions, needs, ideals, etc.), the second one declares uniqueness, freedom, responsibility and the right to choose their own "Self".
Postmodernism was a testament to the crisis in the life of society and man in the second half of the XX -beginning of the XXI century. According to the researcher N. Amiri, it, unlike the modernism, Emphasizes the importance of the socio-historical context. In postmodern art, the temporality and historical existence of man is a major issue of study. The contemporary artist tries to organize "disorganization" without a clear form of the world … the postmodern artist knows that the result of his work is much more important and more valuable than his original intentions and desires. (Amiri, 2016(Amiri, , p. 1627 Modern humanities substantiate the view that an important feature of the era is game, irony, pluralism, multiculturalism, the parity of existence of different types of mentality. There is formed an idea of the person as one that simultaneously lives in all epochs, entertains with them, constructs his doctrine of being, based on typological figurative and historical collages of the individual. A retrospective analysis of the model of the individual until the second half of the ХХ century shows that each period offers its own vision, which in the second half of the twentieth century can be interpreted as one of the variants of the subject. P. Pavlidis points out that A person in postmodernism has great opportunities to engage himself in culture, thus developing as a personality in the context of world creativity. However, the modern hero develops in the aspect of the consumer of "cultural production", and therefore is not capable of conscious cultural activity that is based on eternal moral standards. Man is not fully realized, he remains infantile. (Pavlidis, 2005, p. 57) The leading task of the individual is to learn the experience of previous times, to create his own image thereon -eclectic, repetitive, variational, cited and so on. The subject of the second half of the ХХ century demonstrates readiness for dialogue -with civilizations, styles, directions, he tries to recognize the value of any point of view, appeals to texts of different historical periods, and at the same time, playfully and mockingly refers to life and high standards of society. The person of postmodernism seeks to combine in his own "Self" the traits of all eras, and at the same time, his essence remains fragmentary and discrete.
In the second half of the twentieth century, traditional approaches to the individual do not study him, but model in accordance with the values of a certain time. The image of the person refers us to the relevant socio-historical and artistic situation, which helps to understand the mechanisms of constructing the content of the individual. Thus, from antiquity the modern hero takes the ideal of a beautiful appearance, which in postmodernism is understood as a cult of the body. According to J. Baudrillard: Its (that is, the body) "new discovery" after the millennium era of Puritanism, which took place under the sign of physical and sexual liberation, is ubiquitous in advertising, fashion, mass culture (and especially the female body, it is necessary to understand why), the hygienic, dietary, therapeutic cult that surrounds it, the imposition of youth, elegance, masculinity or femininity, the care, regimes, sacrificial pursuits that are associated with it, the myth of Satisfaction -all today testifies that the body is the object of salvation. It literally replaced the soul in this moral and ideological function. (Baudrillard, 2006, p. 115) The philosopher clearly reveals the transformation of the ancient idea of beautiful appearance, the shift of semantic accents that lead to the re-reading of this thought, the leveling of its original meaning.
Creating one of the variants of the image of man in postmodernism is also due to the appeal to the postulates of the Middle Ages, considered in the typological dichotomous proximity. Problems raised by the second half of the twentieth century: "death of God", "death of man", "end of history" can be solved through the return of the individual to the traditional religious truths of the Middle Ages: eternity of God, immortality of soul, infinity of history. However, this era includes various ideal constructions -mythologemes, ideologemes, theories that emerge as the subjective mode of thinking and feelings of the modern individual. Thus, the principles of medium aevum demonstrate their vitality in solving the questions of finding the sense of life, holistic and harmonious existence, giving certain completeness and unity to a destructive type of postmodern person.
The time of ancient Greece gave origin to philanthropic thought, which became dominant in the Renaissance. The anthropological crisis of the turn of the XX -XXI centuries was manifested through the conflict of humanism and anti-humanism. It is a testimony to the basic ideological oppositions of human existence: person -God, reason -faith, rationalism -irrationalism, traditions -innovations, etc. Proponents of each direction bring to the level of social consciousness the hopes and fears of the society, influence the development of the outlook and axiological ideas of the individual. The consequence of this confrontation is the communication between the philosophical, religious and secular traditions of different eras, which in postmodernism facilitates a person's choice of his own image and concept of being.
Aesthetics of the baroque in the characterization of the subject received certain formative features. Theatricality, which was one of the dominant qualities of the ХVІІ -ХVІІІ centuries, is understood in the second half of the twentieth century as an aesthetic category. It is connected with the understanding of social and internal life of the individual in the context of the playing space, where he can simultaneously feel in different roles and model situations under the laws of a certain action that is aimed at the viewer (Bazaluk, 2017). Entertainment during this period creates a mosaic picture of the environment, combines incompatible phenomena. Anthropology, based on baroque allusions, irony, game, perception of the world as chaos, seeks landmarks that would construct the image of a hero who can live fully, free from the absurd wanderings in an endless space of variants of his own "Self".
One of the factors behind the transformation of postmodern philosophy is the critique of reason, which gives impetus to the development of the main irrational foundations of the second half of the twentieth century. The lack of a reasonable approach to being, the emphasis on the feelings, is a leading feature of a person of this period. Neglected structuring, substitution of science by scientific similarity, rejection of analyticity, normativity and orderliness are decisive for him. That is, the modern man affirms the "Self"-concept by contrasting himself with the ideas of classical philosophy. Although daily existence repeatedly shows us a subject that proclaims the dominance of rational thinking. This person is one that keeps everything under his control, he is the master of his life and calculates his every action. It shows a peculiar image of the individual, which completely curbed his own emotions, goes firmly to the goalmoney, success and glory. Thus, in postmodernism, we observe both antagonistic attitudes toward philanthropic ideas of the eighteenth century and their transformed implementation into present-day realities.
The main categories of Romanticism (detachment from the outside world, loneliness, illusions and fantasies) receive a new interpretation in the second half of the XX century. The hero of the time is irrational, spontaneous and egotistical, which makes him similar to the subject of the XIX century. The difference, however, is that he constantly lives in a situation of possible loss of his own identity, of the blurring of the individual. The problem of relations between the individual and society becomes a leading theme in the second half of the XX - beginning of the XXI century, and it is often resolved through fantasy, play with dreams, and departure into invented worlds.
Postmodernism destroys the anthropological model of the early twentieth century. F. Nietzsche's transcendent idea of the will-driven overman is replaced by the reconstruction of the real person. If the notion of "personality" was dominant in modernity, then in the second half of the XX century sociality prevailed. S. Kostyuchkov (2018) noted that "postmodern man is open to all, perceives the world as a symbolic space, not wanting to get into the content of things, the essence of phenomena, interpretation of images (representations) and meaning of symbols, choosing the symbolic, "sliding" (that is, light, not deep) being" (p. 104).
Modern homo sapiens becomes fragmentary, discrete, devoid of integrity. His image is shaped in contrast to the key principles of the modernist worldview. As O. Chistyakova (2016) rightly points out: "The man of modernism is an immanent "product" of his time with emerging conceptual justifications and narratives, and the person of postmodernism has an imprint of his epochal history and the radically changed state of society" (p. 996).
Thus, the study of the problem of the human image in the second half of the XX - beginning of the XXI century has revealed two trends in present-day reflection. One compares interpretations of the individual in different historical periods with his understanding in postmodernism, arguing that the latter forms a unique type of person: far more complex and original, testifying to a qualitatively new stage in the development of civilization. The other treats the personality as a certain social and biological model constructed from the cultural experience of previous times. This collage subject independently chooses the era in which he lives, the values he professes, and his own image. Moreover, in certain life situations, depending on the particular conditions, the doctrine of his own "Self" can be transformed.
Many concepts of the person declare the search for an adequate and objective comprehension of the person, which in postmodernism is impossible in principle. None of the programs of humanitarian knowledge examining this issue offers a definitive scientific answer. This situation is an indicator of subjectivity: any researcher dealing with anthropological issues creates the image of the individual himself. On the one hand, different epochs and their aesthetic and moral ideals exist simultaneously; on the other, the postulates of each historical period are re-read on the principle of mirror reflection, at the level of opposing pairs: life - death, rational - irrational, spiritual - corporeal, eternal - temporary.
Originality
The paper has revealed the peculiarities of the transformation of the personality model from antiquity to postmodernism and specified the image of man in the second half of the XX - beginning of the XXI century.
Conclusions
The analysis of the anthropological ideas of Western philosophy of different ages shows the variety of views on understanding the nature of the person, its complexity and ambiguity. In antiquity, the person was generally understood as free, yet limited by certain generic and biological factors and subject to the rule of the Absolute. In the Middle Ages, the central tenet of the cognition of personality is God-likeness. Renaissance humanism is manifested in the harmonious synthesis of spiritual and physical nature, material and spiritual values, temporal and eternal ideas. The Baroque type of subject demonstrates the complexity, tension and drama of his inner life. Classicism offers a rational type of individual who, by the power of his own mind, can influence social processes. Romanticism focuses on the private world of the hero and his creative beginnings. In modernism, man is treated as a free and autonomous substance, although placed in society and to some extent subject to its laws.
In postmodernism, philanthropic topics acquire particular relevance due to socio-political uncertainty, the domination of mass consciousness, and the loss of national and cultural identity. A person independent and free from any norms and dogmas is the model of this time. One of the options for his formation is the reconstruction of the anthropological postulates of different historical periods and their transformed implementation in general practice. They become the basis in the pursuit of the integrity and freedom of the individual. At the same time, criticism and rejection of the worldview models of previous epochs open alternative ways of understanding the meaning of the person, constructing his qualitatively new type in the second half of the XX - beginning of the XXI century.
The image of the postmodern person is deprived of a solid foundation; it is blurred and relative. It combines what was previously considered incompatible. This is due to the absence of a single philosophical and ideological core. The destruction of faith in the absolute in the context of this time contributed to the formation of confidence in the interdependence of all things (including different civilizations), raised the problem of the personality image to a new ontological level, and stimulated the search for his unity. Orientation toward the achievements of European civilizations, the perception of their anthropological experience, and intercultural dialogue contribute to the productive use of the achievements of homo sapiens in order to understand the modern person and to form an adequate image of him. In essence, postmodernism does not aim at a retrospective of the type of subject. However, by separating from cultural memory fragments of ideas about the person, according to certain styles and directions, it builds its own eclectic individual on these formations.
Finally, it should be noted that this publication is not a full-scale study of the whole range of issues related to understanding the image of the human being in the postmodern era. A future detailed study of the personality model in different historical periods seems necessary to us in the context of its interaction with the ideas of the second half of the XX - beginning of the XXI century. Contemporary philanthropic problems require thorough research in terms of their impact on the development of a new socio-cultural situation.