Learning astronomy through Augmented Reality: EduINAF resources to enhance students' motivation and understanding <p>In this presentation, we will illustrate Augmented Reality (AR) resources developed by INAF (the Italian National Institute of Astrophysics) for communicating astronomy, distributed to schools and the general public by EduINAF, the online magazine devoted to education and outreach (https://edu.inaf.it/). AR and other innovative technologies have a very high potential in astronomy communication, outreach and education. By adding text, images, overlays, sounds and other effects, AR enhances users' experience, allowing personal and interactive choices and offering unique educational opportunities. Because it provides an engaging and immersive learning space, AR has been recognized as a powerful instrument for educators and students. Among our first attempts and experiments with AR, in 2019 we created an augmented reality app, in both Italian and English, dedicated to the Museum of the Specola inside the Astronomical Observatory of Palermo, in order to promote the cultural heritage of the institute. Using a simple tool like the app <em>Zapworks Studio Widgets</em> and a smartphone, the public could interact with the history and the instruments held in the museum, choosing between seven different levels of information. In 2020, on the occasion of "Esperienza InSegna 2020", a science fair for schools that counts about 15,000 participants every year, INAF created an interactive game called "Terra Game" using Metaverse Studio. By discovering the "ingredients for life" and the composition, temperature and atmosphere of different planets, students were able to understand how special the Earth is in comparison to the other planets of the Solar System and to exoplanets orbiting other stars. In 2021, to catch teenage students' attention, we integrated new technologies into the learning path dedicated by EduINAF to Mars on the occasion of the landing of NASA's rover Perseverance. We developed the augmented reality experience "MARS2020 Perseverance" with <em>Zapworks Studio Design</em>, showing the objectives of the mission, other rovers that have landed on Mars and the sophisticated instruments onboard. Using this app, people can discover the instruments used by the rover to acquire information about Martian geology, atmosphere, environmental conditions and potential biosignatures. The app also gives the opportunity to visit NASA resources, take a selfie with Perseverance and the drone Ingenuity, and share the pictures with friends through social media. To mark the Supermoon of 26 May 2021, EduINAF also published educational resources dedicated to the Moon, among them the augmented reality experience "Maree Lunatiche", developed with Zapworks Studio Design, which explains the phenomenon of tides. From the menu, there is also the opportunity to interact with a 3D model of the Moon and to take a selfie with the full Moon.
The impact of these and other AR initiatives in EduINAF, as well as their future perspectives, will also be presented in this talk.</p>
Calibration of the NOMAD-LNO channel onboard ExoMars 2016 Trace Gas Orbiter using solar spectra <p><strong>Calibration of the NOMAD-LNO channel onboard ExoMars 2016 Trace Gas Orbiter using solar spectra</strong></p> <p><strong>Cruz Mermy</strong> (1), F. Schmidt (1), I. R. Thomas (2), F. Daerden (2), B. Ristic (2), M. R. Patel (3,4), J.-J. Lopez-Moreno (5), G. Bellucci (6), A. C. Vandaele (2) and the NOMAD Team</p> <p><strong>Introduction</strong></p> <p>The LNO channel is a compact high-resolution echelle grating spectrometer with an acousto-optic tunable filter (AOTF) working in the infrared domain from 2.3 μm to 3.8 μm (4250–2630 cm⁻¹) with a resolving power (λ/Δλ) of around 10,000, specially designed for nadir observation. With such a high resolving power, combined with the near-circular orbit of TGO that allows completion of 12 orbits in one sol and hence global coverage of the planet, the NOMAD-LNO instrument fits the science objectives of the ExoMars program [Vandaele2015, Vandaele2018] and is well suited to study the Martian surface and atmosphere. The main objective here is to propose an original calibration procedure, adaptable to the full NOMAD-LNO dataset. This calibration is complementary to the one proposed by [I.R.Thomas2021] in the sense that we do not assume the temporal stability of the instrument; our approach is thus able to test the temporal stability of the instrument against degradation by energetic particles. The approach is based on an empirical continuum removal that takes into account the departure of the actual blaze function from the theoretical one.</p> <p><strong>Dataset</strong></p> <p>The NOMAD-LNO fullscans are solar observations made for calibration purposes: the instrument, normally pointing at nadir, is pointed toward the Sun. The choice of using solar fullscans was made for two reasons. First, there are not enough miniscans to cover all diffraction orders with a significant amount of data, while fullscans always cover the whole spectral range, which allows testing the time dependence of the calibration. Second, it is important to estimate the instrumental sensitivity over the whole diffraction-order range. As of June 2020, six solar calibrations have been performed. A typical fullscan observation is shown in Figure 1: the x-axis is the spectel number, the y-axis is the diffraction order (i.e.
the AOTF frequency) and the z-axis shows the sensitivity of the detector (in ADU, Analog-to-Digital Units).</p> <p>[Figure 1: typical NOMAD-LNO solar fullscan observation]</p>
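<p>As an illustration of the empirical continuum removal mentioned in the Introduction, the sketch below (a minimal Python sketch, not the NOMAD-LNO pipeline; the array shapes, polynomial order and synthetic blaze shape are assumptions) iteratively fits a low-order baseline to one fullscan spectrum and divides it out, so that departures of the actual blaze function from the theoretical one can be absorbed empirically:</p>

```python
import numpy as np

def empirical_continuum(spectrum, order=4, n_iter=10, clip=1.0):
    """Estimate a smooth continuum for one spectrum (one diffraction order).

    A low-order polynomial is fitted iteratively while points lying well below
    the fit are rejected, so absorption features do not drag the continuum down.
    `spectrum` is a 1-D array of detector counts (ADU), one value per spectel.
    """
    pixels = np.arange(spectrum.size)
    keep = np.ones(spectrum.size, dtype=bool)
    for _ in range(n_iter):
        coeffs = np.polyfit(pixels[keep], spectrum[keep], order)
        model = np.polyval(coeffs, pixels)
        residual = spectrum - model
        sigma = np.std(residual[keep])
        keep = residual > -clip * sigma   # drop points sitting inside absorption bands
    return model

def continuum_removed(spectrum, **kwargs):
    """Spectrum divided by its empirical continuum (blaze shape removed)."""
    return spectrum / empirical_continuum(spectrum, **kwargs)

# Synthetic example shaped like a blaze envelope with one absorption band:
if __name__ == "__main__":
    x = np.linspace(-1.0, 1.0, 320)                   # e.g. 320 spectels per order
    blaze = 4000.0 * np.sinc(1.5 * x) ** 2 + 200.0    # blaze-like envelope (ADU)
    band = 1.0 - 0.15 * np.exp(-((x - 0.2) / 0.03) ** 2)
    observed = blaze * band + np.random.normal(0.0, 5.0, x.size)
    flat = continuum_removed(observed)
    print("continuum-removed band depth:", 1.0 - flat.min())
```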
Newly Recognized QSO/Galaxy Pairs at Small Impact Parameters for Low Redshift Galaxies A search for emission lines from foreground galaxies in QSO spectra (zgal < zQSO), sensitive to line fluxes of roughly 5 × 10⁻¹⁷ erg cm⁻² s⁻¹. • Confirmed additional, expected galactic emission lines at the same redshift as Hα to weed out false positives: Hα, Hβ, [O III], [O II], [N II], [S II]. • The emission-line search produced 21 examples of QSOs overlapped by foreground low-redshift galaxies (QSO/Galaxy pairs). Figure 2: Composite SDSS image of QJ1042+0748 (the blue object just above center). The QSO is at z=2.665. Emission lines (Figure 1) from the spectrum of the overlying galaxy are at z=0.03321. Figure 1: The SDSS spectrum of QJ1042+0748, a z=2.665 QSO with a superimposed spectrum of a (narrow-line) galaxy from an object that falls in the SDSS fiber.
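One way to picture the false-positive screening described above is to check whether the other strong optical lines fall at the wavelengths predicted by the candidate redshift. The following is a hypothetical Python sketch, not the authors' actual pipeline; the flux array is assumed to be continuum-subtracted and the ±5 Å search window is an arbitrary choice:

```python
import numpy as np

# Rest-frame wavelengths (air, Angstroms) of the lines used for confirmation.
REST_LINES = {
    "Halpha": 6562.8, "Hbeta": 4861.3,
    "[OIII]5007": 5006.8, "[OII]3727": 3727.4,
    "[NII]6583": 6583.4, "[SII]6716": 6716.4,
}

def confirm_redshift(wave, flux, err, halpha_obs, min_lines=2, snr_min=3.0):
    """Count how many other lines show a significant excess at the redshift
    implied by a candidate Halpha detection (flux is continuum-subtracted)."""
    z = halpha_obs / REST_LINES["Halpha"] - 1.0
    confirmed = []
    for name, rest in REST_LINES.items():
        if name == "Halpha":
            continue
        predicted = rest * (1.0 + z)
        window = np.abs(wave - predicted) < 5.0    # +/- 5 Angstrom search window
        if not window.any():
            continue                               # line falls outside the spectrum
        snr = flux[window].sum() / np.sqrt((err[window] ** 2).sum())
        if snr > snr_min:
            confirmed.append(name)
    return z, confirmed, len(confirmed) >= min_lines
```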
PHOTOMETRY • IDL and IDP3 software were used to de-blend the overlapped QSO/Galaxy pairs. • Color magnitudes for each galaxy were obtained by subtracting an adjusted PSF to remove the paired QSO. • Color magnitudes for each QSO were obtained by measuring the magnitudes of the fitted PSF. MEASURED PROPERTIES • QSO g- and i-band magnitudes • Galaxy u- and r-band magnitudes • QSO Δ(g−i) [observer-frame color excess] • QSO E(B−V)g−i [absorber-frame color excess; extinction measure] • Star formation rate • Hα/Hβ flux ratio • QSO/Galaxy centroid offset [impact parameter] • Galactic length, width, and orientation. QSOALS Expectations • Each QSO/Galaxy pair spectrum was searched for expected absorption features due to the foreground galaxy. • Ca II and Na I lines were identified and measured for equivalent widths and errors using IRAF. • Table 3 shows the relevant absorption features for the best pairs.
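The absorber-frame colour excess and the Hα/Hβ flux ratio listed above are commonly linked through the Balmer decrement. As a hedged illustration (not necessarily the calibration used in this work), the sketch below converts an observed Hα/Hβ ratio into E(B−V) using Cardelli-type extinction coefficients and an intrinsic Case B ratio of 2.86:

```python
import math

K_HALPHA = 2.53         # extinction-curve value k(lambda) at Halpha (Cardelli-like)
K_HBETA = 3.61          # k(lambda) at Hbeta
INTRINSIC_RATIO = 2.86  # Case B recombination value of Halpha/Hbeta

def ebv_from_balmer(ratio_obs):
    """Colour excess E(B-V) implied by an observed Halpha/Hbeta flux ratio."""
    if ratio_obs <= 0:
        raise ValueError("flux ratio must be positive")
    ebv = 2.5 / (K_HBETA - K_HALPHA) * math.log10(ratio_obs / INTRINSIC_RATIO)
    return max(ebv, 0.0)  # treat ratios below 2.86 as zero reddening

# Example: an observed Halpha/Hbeta ratio of 4.0 implies E(B-V) of about 0.34 mag.
print(round(ebv_from_balmer(4.0), 2))
```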
Stratigraphic studies of Ganymede's tectonic activity in the bright terrain: results from the Byblus Sulcus and Harpagia Sulcus regions <p>The Jovian satellite Ganymede experienced a pronounced period of tectonic resurfacing forming the extended bright (light) terrain, the so-called <em>Sulci</em>, which covers about two thirds of Ganymede's surface (Pappalardo et al., 2004 and references therein). It crosscuts the older dark terrain of the so-called Regiones in long swaths of subparallel grooves, or separates the different Regiones by a complex network of polygons/cells tens of kilometers wide (Patterson et al., 2010; Collins et al., 2013), with each cell being characterized by a different density and orientation of the structural grooves. Previous studies indicated that different tectonic styles are apparent, ranging from roughly evenly spaced grooves oriented in a single dominant direction to smooth surface areas with only faint or undetectable grooves at decameter-to-kilometer resolution (Collins et al., 1998; Patterson et al., 2010), with cryovolcanic resurfacing possibly playing a role in the formation of the latter.</p> <p>In order to better understand the formation of the bright terrain and its possible interaction with a subsurface ocean, its investigation has been made one of the top goals of the upcoming JUICE mission (Grasset et al., 2013). To prepare for the JUICE mission and maximize its science return, we re-investigate the available Voyager ISS and Galileo SSI data sets. Our focus lies on the characterization of the bright terrain, its contact with the dark terrain, the definition and characterization of the tectonic subunits/cells, and their stratigraphic relationship to each other. The work is supported by the estimation of surface ages from crater frequency distributions and by compositional information derived from Galileo NIMS data. The goal is to study the local formation process and to identify any changes in tectonic style through time, but also to compare the formation process of the bright terrain across Ganymede's entire surface in order to reveal any global or regional differences in the past tectonic activity, and to characterize possible differences and similarities of the bright terrain at different locations on the body.</p> <p>Among the studied areas are a portion of <em>Byblus Sulcus</em> (~40°N/160°E) and of <em>Harpagia Sulcus</em> (~16°S/50°E), located in the northern portion of the anti-Jovian hemisphere and in the southern portion of the sub-Jovian hemisphere, respectively. <em>Byblus Sulcus</em> is a 30-km-broad, NNW-SSE oriented band reaching from <em>Philus Sulcus</em> into the dark terrain of <em>Marius Regio</em>, where it meets the east-west trending grooved lane named <em>Akitu Sulcus</em>, which is truncated by the grooved and smooth terrain of <em>Byblus Sulcus</em> and thus is relatively older. Whereas the ancient neighboring dark terrain is already highly furrowed, <em>Byblus Sulcus</em> is mainly characterized by grooved terrain (<em>lgf</em> and <em>lgc</em>) and smaller areas of smooth terrain (<em>ls</em>) on its northern border to the dark terrain of <em>Marius Regio</em>. <em>Byblus Sulcus</em> is superimposed by two small dark-floor craters with a dark inner and a bright outer halo of ejecta material, representing the youngest features in this region.</p> <p>The imaged portion of <em>Harpagia Sulcus</em> covers a 20-km-broad region of bright (light) material units including the light grooved terrain (<em>lg</em>), a smooth-subdued terrain in the middle (<em>ls</em>), the lineated terrains (<em>ll</em>) on either side of the smooth-subdued terrain, and a region of undivided terrain (<em>undiv</em>) on the southern side of the Sulcus. The smooth-subdued terrain hosts a comparably large number of craters compared with the other terrains and is crosscut by the lineated terrains on either side and by the light grooved terrain on the eastern and southern sides, as evidenced by the sharp boundaries separating them. The smooth-subdued terrain is therefore interpreted to be older than the other terrains. The light grooved terrain is interpreted to be much younger, since it crosscuts the lineated terrain and hosts a smaller number of craters. The lineated terrains appear to be intermediate in age between the light grooved and the smooth-subdued terrain. Nevertheless, the crater frequency distributions indicate a narrow timescale.</p> <p>While the crater frequencies measured on the dark terrain units near <em>Byblus Sulcus</em> are clearly higher than those from the bright terrain, implying a higher age, the bright units in both <em>Byblus</em> and <em>Harpagia Sulcus</em> show crater frequencies that are more or less identical within measurement uncertainties. Whether this is a general feature of Ganymede's bright terrain, or specific to these two selected regions, will be a major issue in our ongoing studies. Also, bright grooved and smooth units cannot be separated very well in terms of crater frequencies in both regions. Since the existing cratering chronology models by Neukum et al. (1998) and Zahnle et al.
(2003) are highly uncertain, it is difficult to infer whether Ganymede was tectonically active only over a short period of time or over a much longer period, as implied by the low impact rates in the Jovian system (e.g., Zahnle et al., 2003).</p>
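<p>As background to the crater-frequency comparison above, the minimal sketch below (an illustration only, not the authors' procedure; the crater diameters, counting areas and 1-km cutoff are invented) computes cumulative crater densities N(≥1 km) for two counting areas and tests whether they agree within their combined Poisson uncertainties, which is the sense in which the bright units are described as identical within measurement uncertainties:</p>

```python
import math

def cumulative_density(diameters_km, area_km2, d_min=1.0):
    """Cumulative crater density N(>= d_min) and its Poisson uncertainty."""
    n = sum(1 for d in diameters_km if d >= d_min)
    density = n / area_km2
    sigma = math.sqrt(n) / area_km2 if n else 1.0 / area_km2  # crude upper limit if empty
    return density, sigma

def consistent(d1, s1, d2, s2, n_sigma=1.0):
    """True if two densities agree within n_sigma of their combined uncertainty."""
    return abs(d1 - d2) <= n_sigma * math.hypot(s1, s2)

# Invented counts for two bright-terrain counting areas (illustration only):
byblus = cumulative_density([1.2, 1.5, 2.3, 1.1, 3.0], area_km2=4.0e3)
harpagia = cumulative_density([1.3, 1.8, 2.1, 1.05], area_km2=3.2e3)
print(byblus, harpagia, consistent(*byblus, *harpagia))
```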
SSHADE-BandList, the new database of spectroscopy band lists of solids <p><strong>Introduction</strong></p> <p>The SSHADE database infrastructure (1) (http://www.sshade.eu) hosts the databases of about 30 experimental research groups in spectroscopy of solids from 15 countries. It currently provides to all researchers over 4000 spectra of many different types of solids (ices, minerals, carbonaceous matter, meteorites…) over a very wide range of wavelengths (mostly X-ray and VUV to sub-mm).</p> <p>However, although these data are invaluable for the community, one type of information is still critically missing for the easy interpretation of observations: the list of the characteristics of all the absorption bands of a given solid, called its 'band list'.</p> <p>This type of database is well developed for gases (see e.g. the VAMDC portal (2)), where it is even frequently the only type of spectral data available. But for solids (and liquids) there is currently almost no database providing such information (only in some restricted fields, such as Raman spectroscopy of minerals, e.g. the WURM database (3)).</p> <p>This critical gap triggered us to develop a band list database containing the characteristics of the electronic, vibration and phonon bands of various solids (ices, simple organics, minerals) of astrophysical interest, to help:</p> <ul> <li>identify absorption or emission bands from solids observed in various astrophysical environments or in laboratory simulations</li> <li>determine the environment of the molecule or mineral (composition, isotopes, mixing, phase, T, P, …)</li> <li>select the best spectra in SSHADE to compare with observations, or to use in models</li> </ul> <p><strong>What is a band list of a solid?</strong></p> <p>A 'band list' is a list of band parameters and vibration modes of a molecule in a simple molecular constituent (3 species maximum), or of a mineral or an ionic/covalent solid, with a well-defined phase and composition (fixed or within a small range). It includes the bands of all isotopes (sub-bandlists) and can be provided for different environments (T, P, …).</p> <p>The SSHADE 'band list' database provides the band parameters (position, width, peak and/or integrated intensity and their accuracy, isotopic species involved, mode assignment, ...) of a progressively increasing number of solids and simple compounds (with different compositions) of astrophysical and planetary interest, in various phases (crystalline, amorphous, ...) and at different temperatures or pressures.</p> <p>We are feeding this database through exhaustive compilations and critical reviews of all data published in various journals for pure ices and molecular solids and their simple compounds (solid solutions, hydrates, clathrates, ...), including the works of the SSHADE consortium partners themselves. We will continue in a second step with band lists of minerals. However, this is a tremendous scientific effort, expected to last many years… For example, the infrared spectrum of pure solid CO in its cubic α phase has been the subject of more than 35 papers scattered over 25 different journals over the period 1961–2020.</p> <p><strong>SSHADE band list database and interface</strong></p> <p>A specific data model, SSDM-BL (Solid Spectroscopy Data Model – Band List), was first developed in order to accurately describe and link all the parameters necessary to describe both the solid constituent and the band list itself.</p>
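<p>To make the notion of a band list concrete, here is a minimal, hypothetical Python sketch (it is not the SSDM-BL schema or any SSHADE code; the field names and the Gaussian band shape are assumptions): each band carries a position, width, intensity and assignment, and a synthetic spectrum is generated as a sum of Gaussian profiles, in the spirit of the band list spectrum simulator described below.</p>

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Band:
    """One absorption band of a solid (wavenumbers in cm^-1)."""
    position: float        # band centre
    width: float           # full width at half maximum (FWHM)
    intensity: float       # peak intensity (arbitrary units)
    assignment: str = ""   # e.g. vibration mode and isotopic species

def bandlist_spectrum(bands, wn_min=2000.0, wn_max=4500.0, step=0.5):
    """Render a band list as a synthetic spectrum (sum of Gaussian profiles)."""
    wn = np.arange(wn_min, wn_max, step)
    spectrum = np.zeros_like(wn)
    for b in bands:
        sigma = b.width / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> Gaussian sigma
        spectrum += b.intensity * np.exp(-0.5 * ((wn - b.position) / sigma) ** 2)
    return wn, spectrum

# Illustrative values loosely based on the fundamental bands of solid CO:
bands = [Band(2139.0, 2.5, 1.0, "12C16O stretch"),
         Band(2092.0, 2.0, 0.011, "13C16O stretch")]
wn, spec = bandlist_spectrum(bands)
print("strongest band near", wn[int(np.argmax(spec))], "cm^-1")
```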
<p>A structured database storing all these data and metadata has then been set up based on this data model. A data review tool (Excel file), a data converter to an XML import file, and a data import tool have been developed to feed the database.</p> <p>An efficient search tool then allows the user to find either a band list or a specific band through a combination of a 'search bar' and a set of filters on various parameters, such as band position, width and intensity, expected molecular or atomic composition, type of vibration, temperature and pressure. The search results are provided as a table with the band list titles or the main band parameters, allowing users to select the most relevant ones. The selected band list can then be displayed graphically, thanks to a 'band list spectrum' simulator with various unit and display options. The data can be exported as a table containing the main parameters of all the bands of the band list, as well as detailed metadata of the band list and all its bands. A data reference and a DOI will be associated with each band list.</p>
Social change innovations, citizen science, miniSASS and the SDGs The United Nations Sustainable Development Goals (SDGs) describe a course of action to address poverty, protect the planet and ensure prosperity for all (https://sdgs.un.org/goals). More specifically, SDG 6 clarifies how water quality, quantity and access are crucial to human well-being, and yet human activities are compromising water resources through over-exploitation and pollution, as well as contributing to the spread of disease. Globally, aquatic ecosystems are highly threatened, and concerted efforts by governments and civil society to 'turn the situation around' are simply not working. Human-created problems require human-centred solutions, and these require different ways of thinking and acting from those behaviour patterns that are contributing to the challenges. In this paper, we first consider causal approaches to attitude change and behaviour modification that are simply not working as intended. We then explore enabling responses such as citizen science and co-engaged action learning as more tenable alternatives. SDG 6 has a focus on clean water and sanitation for all. The SDGs further clarify how the extent to which this goal can be realized depends, to a large extent, on stakeholder engagements and education. Through stakeholder engagements and educational processes, people can contribute towards SDG 6 and the specific indicator and target in SDG 6.b – Stakeholder participation. Following a three-year research process that investigated a wide range of participatory tools, this paper explores how the Stream Assessment Scoring System (miniSASS; www.minisass.org) can enable members of the public to engage in water quality monitoring at a local level. The paper goes on to demonstrate how miniSASS can contribute to the monitoring of progress towards Sustainable Development Goal Target 6.3, by providing a mechanism for data collection for indicator 6.3.2. miniSASS is proving popular in southern Africa as a methodology for engaging stakeholder participation in water quality monitoring and management. The technique costs very little to implement and can be applied by children and scientists alike. As a biomonitoring approach, it is based on families of macroinvertebrates that are present in most perennial rivers of the world. The paper concludes by describing how useful the miniSASS technique can be for addressing data gaps in SDG 6.3.2 reporting, and how it can be applied in most regions of the world. • This paper demonstrates how citizen-derived data from the Stream Assessment Scoring System (miniSASS) can be used for SDG 6 reporting on water quality. • It explores the expansion of miniSASS to collect water quality data globally. • miniSASS supports policy by mobilizing society to engage with water issues. INTRODUCTION Conventional wisdom continues to fail us Conventional-wisdom approaches, which often emphasize attitude change with the assumption that behavioural practices will follow, have not proved tenable (Beck, 1992, 1995, 1997; Kemmis & Mutton, 2012). Fien (2003) emphasizes that 'among the most successful [environmental education] programmes are those that avoid the belief that awareness leads to understanding, understanding leads to concern, and concern motivates the development of skills and action (our italics)'.
Causal, linear, top-down or centre-to-periphery approaches that assume behaviour change following awareness raising often facilitate a power gradient from those who feel they know to those whom they feel ought to know. This rational logic continues to assume that, once informed, the others, often described as a target group 1, will change accordingly! As reported above, this rational change process fails to meet expectations and, at times, may even alienate the very people it is seeking to change (Taylor, 2010). The Sustainable Development Goals and SDG 6 SDG 6 has a focus on clean water and sanitation for all. The SDGs further clarify how the extent to which this goal can be realized depends, to a large extent, on stakeholder engagements and education. Through stakeholder engagements and educational processes, people can contribute towards SDG 6 and the specific indicator and target in SDG 6.b – Stakeholder participation, namely: 'Support and strengthen the participation of local communities in improving water and sanitation management'. The United Nations (UN) is explicit as to why stakeholder participation is important, but a key research question is how this participation can be meaningful and engaging, particularly with respect to improving water management: 'Target 6.b aims for the participation of local communities in water and sanitation planning and management, which is essential for ensuring that the needs of all people are being met. The involvement of relevant stakeholders is further necessary to ensure: that the technical and administrative solutions decided upon are suitable for specific socioeconomic contexts, the full understanding of the impacts of a certain development decision and the encouragement of local ownership of the solutions when implemented (to ensure sustainability over time). Target 6.b supports the implementation of all SDG 6 targets (targets 6.1-6.6 and 6.a) by promoting the meaningful involvement of local communities, which is also a central component of IWRM' 2. 1 The use of the term target group, a metaphor from military operations, emphasizes the causal tradition of 'getting the message across.' It is therefore difficult to apply in a more inclusive, dialogue-centred manner. By classifying people as 'the other' it militates against an opportunity for relationship building, mutual learning, deliberation and a commitment to building understanding and human dignity. 2 https://www.sdg6monitoring.org/indicators/target-6b/ A detailed three-year research process, supported by the Water Research Commission, investigated a wide range of public participation approaches to water quality issues and clarified how citizen science can play a meaningful role in water-related issues (Graham & Taylor, 2018). Of all the citizen science tools that were reviewed, miniSASS stood out as a relatively easy technique to use that can be applied at virtually no cost; the results are immediately available and no laboratory is necessary to develop or interpret the findings. Stakeholder participation through citizen science Citizen science and co-engaged action learning (Bonney et al., 2009; Pocock et al., 2014; O'Donoghue et al., 2018) are, however, showing the way to more inclusive, enabling and effective social change processes. This work deepens the understanding of water issues in a practical and applied manner and enables actions for more sustainable practices (Graham & Taylor, 2018).
Here the democratization of science engages people who often become proud and eager participants in building understanding and working for more sustainable practices rather than being the passive recipients of knowledge from others. For challenges as complex as water management and ecological infrastructure 3 , the importance of context and the involvement of those participating is crucial. Many of these issues and problems require the integration of knowledge from the natural and social sciences, as well as economics, and there is rarely a single 'silver bullet' solution. As Bhaskar et al. (2010) states, 'exemplifying the triangular relationship of critical realism, interdisciplinarity and complex (open-systemic) phenomena' is needed for the investigation of such wicked problems as water resources management. As stated above 'The democratisation of science, through citizen science processes, supported by practical and accessible 'tools of science,' is one area of work that is showing encouraging results' (Graham & Taylor, 2018, page V). Building collective capacity in the context of resource-based risks and uncertainty points towards the broad field of social learning (Wals, 2007) as a useful component of effective social change. These ideas and concepts form the basis of this paper which advocates an 'enabling' orientationrather than a traditional 'causal' approach (i.e. seeking to cause a change in others through top-down methodologies) (Taylor, 2014). miniSASS, biomonitoring and Google Earth We now explore miniSASS as a key enabling tool that may have global relevance in both mobilizing people, popularizing the Sustainable Development Goals (SDGs) and helping provide accessible biomonitoring data, at virtually no cost to the user, towards SDG indicator 6.3.2. This perspective on citizen science is mirrored in a seminal paper by Fritz et al. (2019) on citizen science and the United Nations SDGs. In their paper, Fritz et al. develop a road-map on how citizen science can support the SDGs. In keeping with this theme, we are exploring miniSASS as a complementary, citizen science orientated, research tool. What is the stream assessment scoring system (miniSASS)? miniSASS is a simple tool which can be used by anyone to monitor the health of a river. One simply collects a sample of macroinvertebrates (small organisms large enough to be seen with the naked eye) from a natural river or stream, and depending on which groups are present, one can calculate a River Health Index for the river. This score helps classify the health, or ecological condition of the river, ranging across five categories from natural (blue) to very poor (purple). The results can then be recorded on the miniSASS website Google Earth layer (www.minisass.org). This database is effectively a 'Living data' system where further data can continuously be added. Through miniSASS, one can learn about rivers, monitor their water quality, explore the drivers of water quality deterioration, and, of course, take action to improve the quality of the streams and rivers. Challenges to be considered when applying miniSASS As with any scientific enquiry processes, a number of challenges must be addressed when applying miniSASS, these include: โ€ข Participants must locate a local stream or river and be prepared to go into the water to locate the organisms. โ€ข The organisms are not always easy to catch for identification purposes. A small net, such as one used for a goldfish pond, can be helpful here. 
• Once collected, the organisms need to be identified. This is not always easy. The miniSASS website, at www.minisass.org, does, however, provide a simple dichotomous key through which users can key out the organism they are seeking to identify.
• In any citizen science endeavour, issues of scientific accuracy are a concern. Offering people training courses in miniSASS alleviates this issue to some extent. Training can be undertaken online using a simple instructional video from the miniSASS website referred to above. The validity of the data will be taken up further in the Results and Discussion section.

Where did miniSASS come from?
miniSASS (Graham et al., 2004) was derived from the more rigorous South African Scoring System (SASS), a biomonitoring method for evaluating river health (Dickens & Graham, 2002; Dallas & Day, 2004). The SASS method requires the ability to identify up to 90 different aquatic invertebrate families, and thus a high level of training. There was, therefore, a need for a simpler, more user-friendly biomonitoring tool that would still yield reliable water quality/river health data. This need gave rise to the development of miniSASS. The development process involved reducing the taxonomic complexity of SASS by creating broad groupings of invertebrates that could act as surrogates for the complete suite of SASS taxa. A rigorous statistical evaluation of a large volume of SASS data was conducted to determine whether the miniSASS method would yield viable results similar to those derived from SASS. This evaluation assessed data collected over a wide geographic and water quality range and indicated that miniSASS is suitable for predicting SASS scores and is thus an appropriate indicator of biological water quality (Graham et al., 2004). The strength of miniSASS has been evident for several years. It is widely applied in South Africa, where it was initially developed, and has been successfully used in many other southern African countries. It has also been effectively applied in India (in the Himalaya mountains at over 18,000 feet in altitude), Vietnam, Canada (where the ambient air temperature was −20 °C), Germany and Brazil. GroundTruth, an environmental engineering company, verifies the incoming data and has worked with the Water Research Commission to support this development through the www.minisass.org website. A topical feature of this website is the number of recent additions of data, shown on the right-hand side of the screen. miniSASS is also found on the Water Research Commission's website, www.wrc.org.za.

RESULTS AND DISCUSSION
The existing and future potential of miniSASS as a global citizen science monitoring tool suitable for SDG 6.3.2
Substantial development of miniSASS has taken place since its early inception. This work includes the development of the online portal, or website, described above, which allows participants, ranging from school children to NGOs and other organizations, to upload their self-collected data. Although there may be concern about the validity of these data, largely due to a lack of training of those collecting them, this concern is offset by the fact that the data are 'crowd-sourced', allowing the sheer bulk of evidence to tell a valid story, perhaps even more reliably than the more sparsely collected 'official' data. Holt et al. (2013) describe and demonstrate the potential of citizen science data.
They show that, despite biases, it is possible to statistically clean citizen science data so that they match the quality of professionally sampled data. Citizen-derived data can also cover huge geographical regions, whereas professionals can only cover small areas (due to the cost).

How universal is miniSASS?
Because many macroinvertebrates have part of their life cycle in the water and part in the air, they have been able to move vast distances, carried by water, wind, ducks and other vectors, and are thus found spread across the world [4]. Others, however, such as some crustaceans, have not moved so far and tend to be more confined. For this reason, macroinvertebrate samples collected across the world have many similarities, especially at the taxonomic level of order or family, while at a local level there is plenty of division into local genera and species. Consequently, indices based on the higher taxonomic levels of order and family are more globally applicable. This was validated in South Africa and was a large part of the success of the SASS index (Dickens & Graham, 2002), which is based largely on family identification, making the index inexpensive and thus affordable for routine and large-scale monitoring. An index that is to be applied at a global level could be improved with local adaptations, not only to add or subtract orders or families but also to verify the sensitivity of the groups. Such adaptation could be done at an 'eco-region' level; for example, all northern cold-water European countries are likely to have similar invertebrates, which will need to be separated from the African ones, which will in turn be separate from the Asian ones. We are finding that the differences are relatively minor, however, and a single index that includes the orders and families most important for ALL regions of the world is viable as a citizen science instrument. A generalized and indicative global index, using the miniSASS methodology, is thus possible for citizen scientists. miniSASS has many strengths, as well as a number of challenges. These are tabulated in Table 1.

A summary of SDG indicator 6.3.2 methodology
SDG indicator 6.3.2 is reported by countries as the proportion of bodies of water with good ambient water quality. It is one of two indicators of Target 6.3, which aims to improve the quality of water in rivers, lakes and groundwater by reducing pollution. Level one monitoring of the official indicator methodology (UN Water, 2018) relies on water quality data from in situ measurements and the analysis of samples collected from rivers, lakes and aquifers. Water quality is assessed by measuring physical and chemical parameters that reflect natural water quality, together with major human impacts on water quality. Level two reporting can include any type of water quality data that can be used to classify a body of water; examples include data from citizen initiatives, Earth observation products, biological approaches or additional water quality parameters that are not included in the core Level one list. The indicator methodology stipulates that countries are divided into river basin-based reporting districts, which are further divided into smaller water body units. These small, hydrologically defined units are classified based on the results of measurements of five core parameter groups: oxygen, salinity, nitrogen, phosphorus and acidification.
These data are compared to numerical target values and, if a compliance rate of 80% or more is achieved, a water body is classified as having 'good' ambient water quality.

Case study: using miniSASS data to generate an SDG indicator 6.3.2 score
In many parts of the world there are significant data gaps, both spatially and temporally, in the water quality record which cannot be filled using 'conventional monitoring programmes'. Citizen science projects are one of several approaches currently being explored to see whether they can play a significant role in filling these data gaps. The miniSASS data generated by citizens do not fit the requirements of Level one reporting for SDG indicator 6.3.2, as they are based on a biological assessment, rather than physico-chemical data collection, and are not sampled at fixed monitoring locations. In this study, an indicator 6.3.2 score was generated that could be described as a 'Level two report' based on the current methodology (UN Water, 2018). In order to calculate an indicator score, a data-rich area was selected, as shown in Figure 1. The Department of Water and Sanitation of South Africa defines drainage basins for water resource management purposes, and the case study area aligns with Drainage Basin U. River water bodies were defined using the HydroATLAS Level 10 units (Lehner & Grill, 2013), and a total of 129 river water body units were defined for the drainage basin. Each miniSASS data record was allocated to a water body unit based on spatial location, and the scores were compared to the Fair/Good boundary of the ecological categories for the two river types (≥5.9 for sandy-type rivers and ≥6.2 for rocky-type rivers). A water body was classified as 'good' if 80% of samples met this target. To calculate the SDG indicator score, the proportion of water bodies within the basin with good water quality was calculated. Of the 129 river water bodies defined, 40 had miniSASS monitoring data and 6 of these were classified as 'good', yielding an SDG indicator score of 15 for this basin (6/40 × 100 = 15).
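To make this calculation concrete, the sketch below first scores an individual sample as the average sensitivity of the invertebrate groups found, and then aggregates water-body results into a Level two-style indicator score. The group names and sensitivity values are illustrative placeholders (the official table is on www.minisass.org), and the record identifiers and example data are hypothetical; only the Fair/Good boundaries (5.9 for sandy-type and 6.2 for rocky-type rivers) and the 80% compliance rule come from the text.

# Illustrative Python sketch, not an official implementation.
from collections import defaultdict

SENSITIVITY = {  # placeholder values; see www.minisass.org for the real table
    "worms": 2, "leeches": 2, "crabs_shrimps": 6, "minnow_mayflies": 5,
    "other_mayflies": 11, "caddisflies": 9, "snails": 4, "stoneflies": 17,
}
BOUNDARY = {"sandy": 5.9, "rocky": 6.2}  # Fair/Good boundaries quoted in the text

def minisass_score(groups_found):
    """River Health Index of one sample: average sensitivity of the groups present."""
    scores = [SENSITIVITY[g] for g in groups_found]
    return sum(scores) / len(scores) if scores else None

def indicator_632(records):
    """% of monitored water bodies classified as 'good'.
    records: iterable of (water_body_id, river_type, groups_found)."""
    passes_by_body = defaultdict(list)
    for body, river_type, groups in records:
        passes_by_body[body].append(minisass_score(groups) >= BOUNDARY[river_type])
    good = sum(1 for p in passes_by_body.values() if sum(p) / len(p) >= 0.8)
    return 100.0 * good / len(passes_by_body)

records = [
    ("U10-001", "rocky", ["other_mayflies", "caddisflies", "stoneflies"]),
    ("U10-001", "rocky", ["caddisflies", "minnow_mayflies", "snails"]),
    ("U10-002", "sandy", ["worms", "leeches", "snails"]),
]
print(indicator_632(records))  # 0.0 for this toy data set

Applied to the 40 monitored water bodies of Drainage Basin U, the same rule reproduces the reported score of 6/40 × 100 = 15.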
Table 1. Strengths and challenges of miniSASS.
Strengths:
• It supports the global trend to engage citizen science. This is especially relevant for realizing SDG 6.b.
• Crowd-sourced data have their own unique validity; while each sample may not have a high level of confidence, the sheer number of data records strengthens the research rigour.
• The approach engages citizens with the SDG agenda, enabling them to contribute to a global effort. This could become a VERY big promotion for the SDGs.
• miniSASS data have been used to demonstrate citizen science participation in monitoring compliance with water quality objectives.
• The miniSASS data management system allows for the clustering of the data into a time series, so progress over time can be measured and compared at the same site.
• miniSASS costs very little to use. Simple apparatus, such as a net, which can be home-made, and a reference sheet available as a free download on the website, strengthens the miniSASS study.
• Since the macroinvertebrates are visible to the naked eye, shape and form are of most importance. This means that advanced competence in languages such as English is not essential; indeed, nine-year-old isiZulu-speaking children are able to master the technique. miniSASS materials are also available in other languages such as French, Afrikaans and isiZulu.
Challenges:
• Participants need to learn how to do miniSASS. Although there are simple instructions and tutorials on the website, participants learn best in the field with an experienced person.
• Citizen science by nature requires a level of coordination. At present, the coordination is provided by Ayanda Lepheana and GroundTruth, with website support from SAIAB and SAEON, both South African government-supported institutions. If the miniSASS data are to be used for SDG reporting globally, additional support will be required.
• Approval of data by the government may be challenging if the government prefers to 'be in charge' and is not receptive to citizen science input and participation. To what extent will governments embrace and support the democratization of science? Each country will have the ability either to embrace citizen science to its fullest, or for government officials to use the method themselves as a low-tech monitoring method. While the latter is not invalid, it is unlikely to result in the high number of data records that would be collected by a strong citizen science programme.

One key finding of this analysis was that the monitoring effort differed considerably between water bodies. The largest number of data records for any single water body was 183, whereas several had only a single record. This disparity in monitoring effort results in some water bodies being classified with a much higher degree of confidence than others. A further development of this method could include specifying a minimum data requirement to ensure that water bodies are classified equally and reliably. In this example, setting a minimum data requirement of five monitoring records per water body resulted in 17 water bodies being monitored, none of which was classified as 'good'. In addition to defining minimum data requirements, future testing could include optimizing monitoring network design to improve data collection coverage.

miniSASS as a proxy for formal water quality monitoring programmes
Much of the strength of miniSASS lies in its ability to generate large amounts of data through citizen science and crowdsourcing. Although these data may not always be entirely valid, because many of the people who collect them may not have received formal training, they have their own unique validity, as the sheer number of data records strengthens rigour (Holt et al., 2013). In order to assess the ability of miniSASS to act as a proxy for formal water quality monitoring programmes, miniSASS data were compared against data collected by the River Ecostatus Monitoring Programme (REMP). The REMP assesses the ecological condition of South Africa's rivers based on a rapid assessment of aquatic macroinvertebrates, using the Macroinvertebrate Response Assessment Index (MIRAI). The 2017/2018 REMP data were compared against the miniSASS data collected for the same time period. In 2017 and 2018, a total of 522 miniSASS data entries (234 and 288 in each year, respectively) were recorded across South Africa. Of these, 13 were recorded within a 500 m radius of a REMP monitoring point and 98 within a 5 km radius (Figure 2). However, only 28 REMP sites within 5 km of miniSASS sampling points, and 11 within 500 m, had REMP data recorded during this time.
Of the 11 miniSASS entries recorded within 500 m of REMP monitoring sites, four yielded the same ecological category as that reported at the REMP sites, three yielded an ecological category one category below, and the remaining four yielded ecological categories two categories below those reported at the REMP sites. The frequency of samples recorded in each ecological category by the REMP and miniSASS data collection is presented in Table 2 (only samples collected at miniSASS and REMP sampling points within 500 m of each other are included). Although the results assessed here do not provide conclusive evidence of the relationship between miniSASS and REMP data, they give some indication that miniSASS is able to yield results similar to those derived from the REMP, particularly for rivers in a lower ecological condition (C, D and E). This, in combination with the fact that the method has been extensively assessed against SASS data and is strongly rooted in rigorous statistical evaluation, suggests that, where rigorous data collection may not be feasible due to resource constraints, miniSASS can provide defensible water quality results. Considering that many of South Africa's rivers are in a degraded state, a citizen science tool that can provide evidence of a river's degraded state and draw attention to the plight of freshwater resources is of immense value.
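The paper does not describe the GIS workflow used for this spatial comparison, so the sketch below is only an assumed illustration of how each miniSASS record could be paired with REMP sites within a chosen radius and the ecological categories compared. The coordinates and categories are invented, the 500 m radius comes from the text, and the A-F category labels follow the usual South African ecological category convention (an assumption here, not stated in the extract).

# Assumed workflow sketch, not the authors' GIS procedure.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

CATEGORIES = "ABCDEF"  # A (natural) ... F (critically modified)

def compare(minisass_pts, remp_pts, radius_m=500):
    """Yield the category difference (in classes) for every pair within the radius."""
    for m_lat, m_lon, m_cat in minisass_pts:
        for r_lat, r_lon, r_cat in remp_pts:
            if haversine_m(m_lat, m_lon, r_lat, r_lon) <= radius_m:
                yield CATEGORIES.index(m_cat) - CATEGORIES.index(r_cat)

minisass = [(-29.601, 30.379, "C")]   # made-up point and category
remp = [(-29.600, 30.380, "B")]       # made-up REMP site
print(list(compare(minisass, remp)))  # [1]: miniSASS one category below the REMP site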
Quantitation of Acetyl Hexapeptide-8 in Cosmetics by Hydrophilic Interaction Liquid Chromatography Coupled to Photodiode Array Detection

Bioactive peptides are gaining more and more popularity in the research and development of cosmetic products with anti-aging effects. Acetyl hexapeptide-8 is a hydrophilic peptide incorporated in cosmetics to reduce under-eye wrinkles and forehead furrows. Hydrophilic interaction liquid chromatography (HILIC) is the separation technique of choice for analyzing peptides. In this work, a rapid HILIC method coupled to photodiode array detection operated at 214 nm was developed, validated and used to determine acetyl hexapeptide-8 in cosmetics. Chromatography was performed on an XBridge® HILIC BEH analytical column using as mobile phase a 40 mM ammonium formate aqueous solution (pH 6.5)-acetonitrile mixture (30:70, v/v) at a flow rate of 0.25 mL min−1. The assay was linear over the concentration range 20 to 30 µg mL−1 for the cosmetic formulations and 0.004 to 0.007% (w/w) for the cosmetic cream. The limits of quantitation for acetyl hexapeptide-8 were 1.5 µg mL−1 and 0.002% (w/w) for the assay of cosmetic formulations and cosmetic creams, respectively. The method was applied to the analysis of cosmetic formulations and anti-wrinkle cosmetic creams.

Introduction
Skin aging is a biological process influenced by various genetic, environmental (pollution, UV radiation), hormonal and metabolic factors. For more than 20 years, bioactive ingredients have been incorporated in cosmetics to smooth out deep wrinkles, improve skin elasticity and reduce the effects of skin aging [1]. Given that most natural processes within the body are stimulated through the interaction of peptides and proteins with their target partners, bioactive peptides incorporated in cosmetics are one of the most popular ways to reduce wrinkles and fine lines, improve skin appearance and texture, and treat discoloured skin [2]. The role of peptides is crucial in several natural processes related to skin care, such as the modulation of cell proliferation, inflammation, melanogenesis, cell migration, angiogenesis and the synthesis and regulation of proteins. Bioactive peptides are therefore increasingly popular in the research and development of cosmetic products with anti-aging effects [3]. Acetyl hexapeptide-8, also known as acetyl hexapeptide-3, is a neurotransmitter-inhibitor peptide designed from the N-terminal end of the synaptosomal-associated protein SNAP25 [4,5]. It competitively inhibits the SNAP25 component of the vesicle docking and fusion protein complex SNARE [6], which triggers the calcium-dependent neurotransmitter release into the synapse, a process necessary for muscle contraction [7-10]. Acetyl hexapeptide-8 is marketed as Argireline® [11], and it has been used efficiently in cosmetics for smoothing under-eye wrinkles and forehead furrows [12-14]. After topical application at specific areas of the face, it inhibits the reactions that cause muscles to move or contract, for example when forming facial expressions such as smiling or frowning [15,16]. A clinical trial of daily topical application of acetyl hexapeptide-8 in 24 patients with blepharospasm concluded that topical application of this peptide is safe and promising for prolonging the action of injectable botulinum neurotoxin therapy [17].
The quality control of cosmetic products containing bioactive peptides should be addressed through more systematic investigation, and the concentrations of these peptides in cosmetics should be supported by product-specific studies. There is therefore a real need to set up analytical methods to quantitate the bioactive compounds in cosmetic products. In the last decade, interest in the separation of peptides has gained momentum due to the emphasis on the chromatographic separation of various proteins in the proteome. High-performance liquid chromatography (HPLC) has been widely used in the analysis of peptides in various fields of research and development, using different modes of separation. Nowadays, hydrophilic interaction liquid chromatography is the separation technique of choice for the analysis of peptides [18-22]. Usually, bioactive peptides incorporated in cosmetic products are hydrophilic compounds that show little or no retention on conventional RP-HPLC analytical columns. The stationary phases in HILIC are mainly polar, such as silica gel, diol-, amino- or cyano-bonded and other zwitterionic packing materials, and the typical mobile phases consist of mixtures of a highly polar solvent (typically water) with an organic modifier (typically acetonitrile) [23,24]. Analyte retention is increased by increasing the proportion of the organic modifier in the mobile phase [25,26]. The polar functional groups on the HILIC stationary phases absorb water (0.5%-1.0%), and in this way a water-enriched layer is immobilized between the mobile and the stationary phase, especially when the water content of the mobile phase is less than 40% [27]. In HILIC, the more hydrophilic analytes are eluted later than the less polar compounds. Several works have been published on peptide separation using HILIC columns [28-30], but only a few studies have been dedicated to peptide quantitation in cosmetics [31-33]. In one of these publications, ultra-high-performance liquid chromatography coupled to tandem mass spectrometry (UPLC-MS/MS) using a TSK-gel Amide-80® HILIC analytical column was used to quantitate acetyl hexapeptide-8 in cosmetics after a solid-phase extraction procedure [34]. The TSK-gel Amide-80® HILIC analytical column was also used in LC-MS/MS methods developed to evaluate the transdermal absorption of acetyl hexapeptide-8 [35,36]. Even though the abovementioned LC-MS/MS approaches are selective and sensitive for the quantitation of acetyl hexapeptide-8 in various matrices, there is a real need for a reliable analytical method that does not require specialized equipment and can be used in routine analyses. In this work, a rapid and sensitive hydrophilic interaction liquid chromatographic method with photodiode array (PDA) detection was developed and validated for the quantitation of acetyl hexapeptide-8 in cosmetics. The appropriate stationary phase, pH, buffer concentration and mobile phase composition were thoroughly investigated prior to method validation. The combination of HILIC with PDA detection provides an accurate, repeatable and robust method for the quantitative analysis of cosmetic products. To the best of our knowledge, this is the first report of a HILIC-PDA method for the quantitation of acetyl hexapeptide-8 in cosmetics. In this work, an XBridge® HILIC BEH analytical column was used, combined with a rapid and simple sample pretreatment.
All the above, in combination with a short run time of less than 10 min, make the proposed HILIC-PDA method suitable for the routine quality control of cosmetics.

Chemicals and Reagents
HPLC-grade solvents were bought from Sigma-Aldrich (St. Louis, MO, USA). Analytical-reagent-grade ammonium formate was acquired from Alfa Aesar (Haverhill, MA, USA). HPLC-grade water was produced by means of a Synergy® UV water purification system (Merck Millipore, MA, USA). Whatman nylon membrane filters with pore size 0.45 µm and diameter 47 mm were purchased from Gelman Sciences (Northampton, UK). Hydrophobic polytetrafluoroethylene (PTFE, 13 mm, pore size 0.22 µm) syringe filters were acquired from Novalab SA (Athens, Greece), representative of RephiLe Bioscience Ltd., Europe. The anti-wrinkle cosmetic cream containing 0.005% w/w acetyl hexapeptide-8 was produced in the Laboratory of Chemistry-Biochemistry-Cosmetic Science, Department of Biomedical Sciences, University of West Attica, Athens, Greece, using the Argireline peptide solution C®. The excipients present in the anti-wrinkle cream are aqua, xalifin-15, propylene glycol, sabowax FX-65, squalene, butylated hydroxytoluene (BHT) and germall 115. A placebo cream containing only the excipients, without acetyl hexapeptide-8, was also prepared for validation purposes.

Stock and Calibration Standard Solutions
An acetyl hexapeptide-8 stock standard solution at 500 µg mL−1 was prepared in acetonitrile-water mixture (60:40, v/v). The stock standard solution was further diluted in acetonitrile-water mixture (60:40, v/v) to prepare mixed working standard solutions. The solutions were stored in amber bottles at 4 °C and remained stable for two months. For the quantitation of acetyl hexapeptide-8 in the anti-wrinkle cosmetic cream, calibration spiked cream samples at concentration levels 0.004, 0.0045, 0.005, 0.006 and 0.007% w/w were prepared by spiking placebo cream with appropriate dilutions of the acetyl hexapeptide-8 stock standard solution. Quality control (QC) samples were also prepared in a similar manner at concentration levels 0.004, 0.005 and 0.007% w/w.

Cosmetic Formulation
An accurately weighed amount (0.5 g) of Argireline peptide solution C® is transferred to a 10 mL volumetric flask and diluted to volume with water/acetonitrile (30:70, v/v). A portion of this solution is then analyzed by the proposed HILIC-PDA method for the quantitation of acetyl hexapeptide-8.

Cosmetic Cream
An accurately weighed amount (0.1 g) of cosmetic cream is mixed with 200 µL of acetonitrile-water mixture (60:40, v/v) and 800 µL of 30% 40 mM ammonium formate aqueous solution in acetonitrile. The mixture is shaken for 2 min and then centrifuged at 18,000× g for 30 min at ambient temperature. The upper layer is then filtered through a hydrophobic PTFE syringe filter prior to HILIC-PDA analysis.

HILIC-PDA
The HPLC-PDA analytical instrument used in this work consists of a Waters 717 plus autosampler, a column temperature oven, a Waters 1515 isocratic pump and a Waters 996 photodiode array detector (Milford, MA, USA). Empower software (Milford, MA, USA) was used for data acquisition. The chromatographic eluent was monitored over the wavelength range 200 to 400 nm, and chromatograms extracted at λ 214 nm were used for data analysis. An XBridge® HILIC BEH guard column (20 × 2.1 mm, 3.5 µm) in line with an XBridge® HILIC BEH analytical column (2.1 × 150 mm, 3.5 µm) was used for the chromatography.
The mobile phase was composed of 30% 40 mM ammonium formate aqueous solution in acetonitrile and pumped at a flow rate of 0.25 mL min−1. Prior to the chromatography, the mobile phase was filtered through a 0.22 µm nylon membrane filter (Membrane Solutions, Kent, WA, USA) and degassed under vacuum. Samples were injected via a 10 µL injection loop, and acetyl hexapeptide-8 was quantitated in cosmetic products with a chromatographic run time of 10 min.

Method Validation and Application to the Analysis of Real Samples
The HILIC-PDA method was validated in terms of linearity, limit of detection, limit of quantitation, intra-day and inter-day precision and overall accuracy [37]. The method was applied to the analysis of various batches of a cosmetic formulation, namely Argireline peptide solution C®, and to the analysis of various batches of an anti-wrinkle cosmetic cream. To evaluate linearity, linear regression was used to construct the calibration graphs after the analysis of calibration standards and spiked cream samples at five different concentration levels. Peak area measurements were used for the quantitation of acetyl hexapeptide-8. The % coefficients of variation (%CVs) and the % relative recoveries were calculated to evaluate precision (intra- and inter-day) and overall accuracy, respectively.

HILIC-PDA Method Development
Chromatography
Acetyl hexapeptide-8 consists of a six-amino-acid chain acetylated at the N-terminal residue, N-acetyl-Glu-Glu-Met-Gln-Arg-Arg-NH2, as shown in Figure 1a.
LogD values of acetyl hexapeptide-8 versus pH are less than −9, indicating that this compound is highly hydrophilic (Figure 1b, bottom). HILIC is the analytical technique of choice for the chromatographic analysis of polar compounds, and it was therefore used in the present work. The retention mechanism in HILIC is based on several types of interactions, such as adsorption, partition, electrostatic, hydrogen bonding and reversed-phase interactions [38-40]. Greater retention is achieved when more than 70% of organic modifier (e.g., acetonitrile) is used in the mobile phase. The chromatographic conditions were optimized to achieve adequate retention and optimum peak shape of acetyl hexapeptide-8. To find the optimal mobile phase composition, we examined various combinations of aqueous buffer solutions and acetonitrile with varied content of each component. The detection wavelength was set to 214 nm, because at this wavelength acetyl hexapeptide-8 shows satisfactory absorption. The flow rate was set to 0.25 mL min−1 and the experiments were conducted at ambient temperature. The XBridge® HILIC BEH analytical column used in this work consists of BEH particles. Some accessible free silanol groups on the surface of these BEH particles are responsible for electrostatic interactions. The addition of aqueous salt solutions to HILIC mobile phase eluents reduces the electrostatic interactions between the stationary phase and the analyte [41]. A plot of the logk values of acetyl hexapeptide-8 as a function of the concentration of ammonium formate is presented in Figure 2a. The ammonium formate concentration was varied from 2.5 to 100 mM, with the aqueous component of the mobile phase eluent constant at 30%, v/v and the pH constant at 6.5. Under these conditions the free silanol groups on the surface of the BEH particles are negatively charged and acetyl hexapeptide-8 is in zwitterionic form. The results showed that, with increasing ammonium formate concentration, the retention of acetyl hexapeptide-8 slightly decreased up to 40 mM and then increased up to 100 mM. These findings indicate that the retention mechanism of acetyl hexapeptide-8 on the XBridge® HILIC BEH analytical column is complex and comprises hydrophilic partition with secondary electrostatic interactions. From these experiments, we concluded that by using a 40 mM ammonium formate concentration in the mobile phase, peak symmetry and plate numbers are improved and the analyte is adequately retained and well separated from the solvent front. The chromatography of acetyl hexapeptide-8 was also explored using various mobile phases in which the concentration of ammonium formate in the whole mobile phase was kept constant at 12 mM, while the percentage of water, Φwater, varied from 25% to 35%. As shown in Figure 2b, the logk values of the peptide decrease exponentially with increasing Φwater, implying a complex retention mechanism for this analyte on the selected HILIC column. The optimum mobile phase composition consists of 30% 40 mM ammonium formate aqueous solution (pH 6.5) in acetonitrile. As shown in Figure 3, acetyl hexapeptide-8 is eluted at 8.15 min, and the proposed HILIC-PDA method allows the isocratic elution of acetyl hexapeptide-8 within 10 min.
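The curved dependence of logk on the water fraction reported above can be summarized with an empirical mixed-mode retention model that is often used in the HILIC literature, log k = a + b·log(Φwater) + c·Φwater; this model and the data points in the sketch below are illustrative assumptions and are not taken from this paper.

# Hedged sketch: fit log k = a + b*log10(phi_w) + c*phi_w to made-up data.
import numpy as np

phi_w = np.array([0.25, 0.275, 0.30, 0.325, 0.35])   # water fraction (v/v)
log_k = np.array([1.05, 0.82, 0.64, 0.50, 0.39])     # invented retention data

X = np.column_stack([np.ones_like(phi_w), np.log10(phi_w), phi_w])
coef, *_ = np.linalg.lstsq(X, log_k, rcond=None)
a, b, c = coef
print(f"a={a:.2f}, b={b:.2f}, c={c:.2f}")
# Non-zero b (adsorption-like term) and c (partition-like term) together are
# consistent with the mixed retention mechanism inferred in the text.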
Selectivity
The selectivity of the HILIC-PDA method for the analysis of Argireline peptide solution C® (cosmetic formulation) is demonstrated in Figure 4, where a chromatogram obtained from the analysis of the cosmetic formulation (red spiked line) is superimposed on a chromatogram of a quality control sample of acetyl hexapeptide-8 prepared in water/acetonitrile (30:70, v/v) (grey line); both samples contain the analyte at 25 µg mL−1. Moreover, the selectivity of the HILIC-PDA method for the analysis of cosmetic creams incorporating acetyl hexapeptide-8 is demonstrated in Figure 5, where a chromatogram obtained from the analysis of a placebo cream sample (black line) is superimposed on a chromatogram of a cream sample containing acetyl hexapeptide-8 at 0.005% w/w, obtained after the sample preparation described in Section 2.3.2 (blue line).

Linearity Data
For the quantitation of acetyl hexapeptide-8 in the cosmetic formulation (Argireline peptide solution C®), calibration curves were constructed over the concentration range 20 to 30 µg mL−1. The peak area signal of the peptide, S, versus the corresponding concentrations, C, exhibited a linear relationship, and the results of a typical calibration curve are shown in Table 1. A Student's t-test was also performed to evaluate whether the intercept of the regression equation was significantly different from the theoretical zero value. The test was based on the estimation of the experimental t-value, t_experimental = a/S_a, where a is the intercept and S_a is the standard deviation of the intercept of the regression equation, and on the comparison of t_experimental with the theoretical t-value, t_p. The results presented in Table 1 indicate that the intercept of the regression equation does not differ from the theoretical zero value.
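As a concrete illustration of the linearity check and intercept test described above, the sketch below fits a straight line to five calibration points and computes t_experimental = a/S_a. The peak areas are invented placeholder values (only the nominal 20-30 µg mL−1 range comes from the text), so this is a worked example of the statistics rather than a reproduction of Table 1.

# Worked example with placeholder data; not the paper's calibration results.
import numpy as np

conc = np.array([20.0, 22.5, 25.0, 27.5, 30.0])       # ug/mL (nominal range from the text)
area = np.array([40100, 45200, 50050, 55300, 60150])  # invented peak areas

n = len(conc)
b, a = np.polyfit(conc, area, 1)             # slope b, intercept a
resid = area - (a + b * conc)
s_y = np.sqrt(np.sum(resid**2) / (n - 2))    # residual standard deviation
Sxx = np.sum((conc - conc.mean())**2)
S_a = s_y * np.sqrt(np.sum(conc**2) / (n * Sxx))  # standard error of the intercept
t_exp = abs(a) / S_a                         # t_experimental = a / S_a
t_crit = 3.182                               # two-tailed t, 95%, n - 2 = 3 degrees of freedom
print(f"slope={b:.1f}, intercept={a:.1f}, t_exp={t_exp:.2f}, "
      f"intercept {'~0' if t_exp < t_crit else 'differs from 0'}")

An unknown formulation is then quantitated as C = (S − a)/b; with the sample preparation described earlier (0.5 g diluted to 10 mL), a labelled content of 0.05% w/w corresponds to an expected concentration of about 25 µg mL−1, in the middle of this calibration range.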
For the quantitation of acetyl hexapeptide-8 in cosmetic creams, calibration curves were constructed after the analysis of spiked cream samples over the concentration range 0.004 to 0.007% w/w. The results of a typical calibration curve are presented in Table 2. In all cases the correlation coefficient was greater than 0.994, indicating a linear relationship between the peak area signal of the analyte, S, and the corresponding concentrations, C. A Student's t-test was also performed in an analogous manner, and the results (Table 2) indicate that the intercept of the regression line is not significantly different from zero and thus there is no interference from the cream matrix. Limit of detection (LOD) and limit of quantitation (LOQ) values for acetyl hexapeptide-8 were calculated as the amounts for which the signal-to-noise ratios were 3:1 and 10:1, respectively. This was achieved by the analysis of dilute solutions of the peptide at known concentrations, prepared by the appropriate sample preparation procedure [42]. LOD and LOQ values for acetyl hexapeptide-8 in the cosmetic formulation and in the cosmetic cream are reported in Tables 1 and 2, respectively.

Accuracy and Precision
Precision and accuracy were evaluated by one-way analysis of variance (ANOVA), and the results are presented in Table 3. The total precision was between 1.74 and 4.34% for the cosmetic formulation and between 2.46 and 3.53% for acetyl hexapeptide-8 in the cosmetic cream. The total accuracy was between 98.9 and 99.8% for the analyte in the cosmetic formulation and between 99.3 and 101.6% for the quantitation in the cosmetic cream.
Table 3. Accuracy and precision data of the HILIC-PDA method for the quantitation of acetyl hexapeptide-8 in cosmetic formulation and cosmetic creams (n = 3 runs in 5 replicates).

Application to the Analysis of Real Samples
The proposed method was applied to the analysis of three batches of Argireline peptide solution C®, labelled to contain 0.05% w/w acetyl hexapeptide-8, and three batches of anti-wrinkle cosmetic cream, labelled to contain 0.005% w/w of the peptide. The results obtained from the analysis of real cosmetic products are presented in Table 4. The % recovery for the quantitation of acetyl hexapeptide-8 by the proposed HILIC-PDA method ranged from 98.2 to 102.2% in the cosmetic formulation and from 98.2 to 101.8% in the cosmetic creams.

Conclusions
There is a real need to set up analytical methods to quantitate the active compounds in cosmetic products incorporating bioactive peptides at low content. In this work, a HILIC-PDA method was developed and validated for the determination of acetyl hexapeptide-8 in cosmetics. Over the past two decades the use of biopeptides in cosmetic products has expanded considerably. The developed method takes full advantage of the benefits of HILIC, leading to efficient retention of acetyl hexapeptide-8 with less matrix effect.
Validation results demonstrate that the proposed method allows the quantitation of acetyl hexapeptide-8 in both cosmetic formulations and cosmetic creams. The simplicity of the sample preparation procedure and the short chromatographic run time of less than 10 min give the method high sample-throughput capability, and it can be used to support the quality control of cosmetic products containing low levels of acetyl hexapeptide-8. There is no doubt that HILIC enables the determination of bioactive peptides in cosmetic products without the need for specialized detection methods. The proposed method could be extended to future applications in the analysis of various bioactive peptides used for cosmetic purposes.
Multiple Gene Transfer and All-In-One Conditional Knockout Systems in Mouse Embryonic Stem Cells for Analysis of Gene Function

Mouse embryonic stem cells (ESCs) are powerful tools for functional analysis of stem cell-related genes; however, complex gene manipulations, such as locus-targeted introduction of multiple genes and conditional gene knockout (cKO), are technically difficult. Here, we review recent advances in technologies aimed at generating cKO clones in ESCs, including two new methods developed in our laboratory: the simultaneous or sequential integration of multiple genes (SIM) system, for introducing an unlimited number of gene cassettes into a specific chromosomal locus using reciprocal recombinases; and the all-in-one cKO system, which enables introduction of an EGFP reporter expression cassette and a FLAG-tagged gene of interest under an endogenous promoter. In addition, we describe methods developed in other laboratories, including conventional approaches to the establishment of cKO cell clones, inducible Cas9-mediated cKO generation, and cKO assisted by reporter constructs, invertible gene-trap cassettes, and conditional protein degradation. Finally, we discuss the advantages of each approach, as well as the remaining issues and challenges.

INTRODUCTION
Gene knockout (KO) technology has made a substantial contribution to knowledge of gene function. KO mice and KO cells prepared from them are frequently used for analysis of gene function in mammalian cells; however, generation of KO mice requires extended periods of time and cumbersome processes, including isolation of gene-targeted mouse embryonic stem cell (ESC) clones, production of chimera mice carrying the KO ESCs, establishment of germline-transmitted heterozygous mice, and cross-breeding of the heterozygous mice. Preparation of genetically disrupted cells is an alternative approach for analysis of gene function in vivo; however, it is also challenging, due to the technical difficulties involved in gene targeting. Recent developments in genome editing technologies have addressed these issues and greatly accelerated the molecular analysis of gene function (Gaj et al., 2013; Gupta and Musunuru, 2014). CRISPR/Cas9 is the most popular genome editing system because of its high efficiency and easy design/implementation. CRISPR/Cas9 generates a double-strand break (DSB) at the target site, which is repaired by the error-prone non-homologous end joining (NHEJ) process, resulting in the introduction of insertion/deletion mutations and consequent target gene disruption. CRISPR/Cas9-induced gene disruption is relatively efficient in mammalian cells and has greatly reduced the time and cost required for molecular analysis of gene function. Nevertheless, simple mutagenesis by genome editing technology is not suitable for analysis of lethal genes, which are essential for cell growth, survival, or maintenance of the undifferentiated status of stem cells. Gene knockdown with short interfering RNA (siRNA or shRNA) is often applied in these cases; however, these knockdown systems often do not completely suppress target gene expression, which can lead to inconclusive results. The conditional knockout (cKO) approach, first reported by Gu et al. (1994), is a useful way to study genes that are difficult to investigate using other approaches. cKO cells are usually generated using recombinases, such as Cre and FLP, in combination with their respective recognition sequences, loxP and FRT.
The coding exon(s) of the target gene is/are flanked by these recognition sequences, and the corresponding recombinases are conditionally expressed to induce gene KO in specific cells. Definitive experimental results are expected, as the genetic disruption induces complete elimination of target gene expression. While cKO cells can represent an ideal option, their construction requires targeting of all alleles in each cell. Therefore, few cell lines are suitable for establishing cKO clones, since most are hyperploid and exhibit low homologous recombination efficiency. To overcome these challenges, various cKO methods have been developed with the aid of genome editing technology. In this review, we provide an overview of recent advances in the development of cKO strategies, particularly the all-in-one cKO system, as well as their advantages and the issues that need to be addressed.

CKO CELL ESTABLISHMENT USING A CONVENTIONAL STRATEGY
Mouse ESCs are useful for preparing cKO cells, since ESCs have a normal karyotype and relatively high homologous recombination rates compared with conventional cell lines. The functions of various genes involved in stemness, cell growth, and survival have been clarified using cKO ESCs (Dejosez et al., 2008; Dovey et al., 2010; Lu et al., 2014). The conventional procedure for preparation of cKO ESCs involves introduction into ESCs of a targeting vector containing positive-selection marker genes (e.g., a neomycin resistance gene) flanked by FRT sites, a negative-selection marker (e.g., herpes simplex virus-derived thymidine kinase), and a coding exon (or exons) of the target gene flanked by loxP sites (Figure 1A). Targeted ESC clones can be isolated by positive-negative selection with 5-10% efficiency (Johnson et al., 1989). After isolation of a targeted clone, the positive-selection marker gene is removed by transient expression of FLP to retain normal expression of the target gene. These targeting processes are then repeated for the other allele. Therefore, a total of four cloning steps are required to establish cKO ESC clones. Due to this time-consuming process, establishment of cKO cells has not been a popular choice for analysis of gene function, despite its numerous advantages. Moreover, prolonged culture of ESCs for repeated cloning may compromise their undifferentiated characteristics. To avoid this concern, one could breed ESC-derived heterozygous mice, although this procedure requires a much longer time. Dow et al. (2015) reported a cKO system based on CRISPR/Cas9-induced gene disruption (Figure 1B). To enable conditional knockout of the target gene, a guide RNA expression cassette and a doxycycline (Dox)-inducible Cas9 cassette were inserted into the safe harbor locus (a genetically reliable locus), Col1a1, of ESCs stably expressing the reverse tetracycline transactivator (rtTA). Using this system, approximately 70% of cells displayed biallelic frame-shift mutation of the target gene in a Dox-dependent manner. Moreover, this technique can induce simultaneous conditional knockout of two target genes with 40-50% efficiency. CRISPR/Cas9-mediated cKO is also applicable in mice. This system greatly reduces the time and labor required to generate cKO ESCs, as well as mice, since it allows one-step preparation of cKO cells. Nevertheless, cells with in-frame mutations, which may behave as normal cells, cannot be eliminated, due to the principles underlying this system, which relies on NHEJ-dependent mutagenesis.
REPORTER CONSTRUCT-ASSISTED CKO
DNA DSBs at the targeting site greatly enhance rates of homologous recombination (Donoho et al., 1998). Based on this mechanism, several genome editing technology-assisted methods have been developed for efficient cKO cell cloning. Flemr and Bühler introduced two homozygous loxP sites simultaneously via transfection of single-strand oligo-DNA composed of a loxP sequence and 40 bp homology arms, TAL effector nucleases (TALENs) designed to target the site of interest, and a recombination reporter construct, which contained a 5′ puromycin-resistance gene fragment and a TALENs target sequence, followed by a full-length puromycin-resistance gene without a start codon (Figure 1C) (Flemr and Bühler, 2015). If homologous recombination mechanisms are active in the transfected cells, TALENs-induced DSB of the reporter construct results in generation of an intact puromycin-resistance gene via homologous recombination. Using this method, these researchers successfully generated cKO ESCs by targeting two loxP oligo-DNA molecules on both alleles in a single step; however, this approach still requires extensive screening to obtain correct clones, due to the low efficiency (approximately 4%) of targeting all four sites. Use of CRISPR/Cas9 instead of TALENs may improve the efficiency.

CKO VIA INVERTIBLE GENE-TRAP CASSETTE
Andersson-Rolf et al. used the Cre-regulated invertible gene-trap cassette (FLIP cassette), which relates to the gene trap tool originally developed by Melchner's laboratory, for preparing cKO cells (Schnütgen et al., 2005). The FLIP cassette contains a puromycin expression unit, flanked by loxP1 and lox5171 sites, in the middle of an artificial intron sequence (Figure 1D) (Andersson-Rolf et al., 2017). To produce cKO cells, the FLIP cassette is inserted into a coding exon of a target gene via homologous recombination, with the assistance of CRISPR/Cas9 designed for the target site. The resulting targeted clones can be cKO cells, as CRISPR/Cas9 both assists in targeting the FLIP cassette and destroys the untargeted allele(s) of the target gene via NHEJ-dependent mutagenesis. Transient expression of Cre in the targeted clone induces inversion of the FLIP cassette and knocks out the target gene by switching the splicing donor/acceptor structure of the artificial intron. The authors produced Ctnnb1 cKO ESC clones using this method, and verified the loss of ESC dome-like morphology on introduction of Cre. The CRISPR/Cas9 vector designed within the coding exon introduces the FLIP cassette into one allele, while it can also disrupt the other allele of the gene. This smart system is applicable in both ESCs and aneuploid cells, as homologous recombination of the FLIP cassette in one allele is sufficient to generate cKO cells. Nevertheless, the system still requires relatively extensive screening, since the efficiency is around 6% (4 FLIP/- clones out of 64 isolated clones in the case of ESC screening for Ctnnb1 cKO cells). Further, gene expression from the targeted allele appears to be compromised, probably due to the selection marker unit inserted in the artificial intron of the FLIP cassette.

CKO VIA CONDITIONAL PROTEIN DEGRADATION
Conditional induction of target protein degradation is another method for generating cKO cells (Figure 1E).
Several techniques for conditional depletion of target proteins have been developed using mutant FKBP, Halo-tag, and the auxin-inducible degron (AID) as tag sequences, and the small chemical compounds Shld1, HyT13, and auxin [indole-3-acetic acid (IAA)] as regulators of degradation (Banaszynski et al., 2006; Nishimura et al., 2009; Neklesa et al., 2011). Among these approaches, the AID system is the most well-validated. It uses an Oryza sativa-derived TIR1 protein, which forms an E3 ubiquitin ligase complex that can induce regulated and rapid degradation of proteins fused with a 7 kDa degron tag derived from Arabidopsis thaliana IAA17, in a manner dependent on the small chemical compound auxin. Thus, introduction of an AID tag into a target gene in TIR1-expressing cells enables cKO of the target gene. A recent report described one-step generation of degron-based cKO cells using an improved AID system, which employs mutated TIR1 and an auxin derivative, 5-Adamantyl-IAA (5-Ad-IAA), to enhance sensitivity (Nishimura et al., 2020). cKO cells are prepared by disrupting the target gene with CRISPR/Cas9 and inserting a vector encoding the mutated TIR1 and the AID-tagged target gene cDNA, connected with an internal ribosome entry sequence (IRES), into the DSB site. The targeting efficiency was approximately 75% when this system was used to conditionally knock out the single-allele gene CENP-H, which is located on the Z chromosome in DT40 cells. This system is superior to conditional genetic approaches in terms of reversibility and fast kinetics; therefore, it is suitable for analysis of genes that require rapid depletion, such as cell cycle-related genes. A drawback of this approach is that expression of the target gene is controlled by the IRES sequence; thus, target gene expression in the cKO cells does not reflect endogenous gene expression. The same applies to genes in which regulatory elements located in introns are eliminated. Another concern is that depletion of the target protein could be incomplete, due to the principles underpinning the system (Ng et al., 2019).

THE ALL-IN-ONE CKO METHOD
A recent study reported a novel cKO method, the all-in-one cKO method, which allows one-step and highly efficient cKO and simultaneous target gene modifications, including epitope tagging and reporter gene knockout/knock-in, via CRISPR/Cas9-assisted homologous recombination of the all-in-one cassette into a coding exon of the target gene (Figure 2A) (Suzuki et al., 2021). The all-in-one cassette encodes an FRT-flanked promoter-less EGFP gene, followed by a P2A peptide sequence, a FLAG-tag sequence, and the coding sequence of the target gene upstream of the CRISPR/Cas9 target site. Since the EGFP cassette does not contain a promoter, sorting of EGFP-positive cells enables efficient isolation of cKO mESC clones at a frequency of up to 80%. Moreover, targeting of the all-in-one cassette in the presence of the DNA-PK inhibitor M3814, which enhances homologous recombination, followed by EGFP sorting, resulted in almost 100% cKO efficiency, even in the recombination-non-proficient human fibrosarcoma cell line HT1080 (Riesenberg et al., 2019). Given this high efficiency, homozygous targeted cKO clones could be easily isolated, even from HT1080 cells.
Target gene expression can be monitored via EGFP expression in cKO cells, and protein expression can be detected using an anti-FLAG antibody; this feature greatly improves the detection sensitivity of target proteins by western blotting or immunocytochemistry, and is useful for conducting chromatin immunoprecipitation (ChIP) and crosslinking immunoprecipitation (CLIP) assays, as well as for affinity purification of binding proteins for mass spectrometric analysis.

In addition, to enable instant and strictly regulated induction of KO cells from cKO cells, a TetFE ESC line was established. TetFE ESCs carry an rtTA expression cassette and a tetracycline response element-regulated FLPERT2 expression cassette in the Gt(ROSA)26Sor locus to maintain stable expression of the transgenes. The simultaneous or sequential integration of multiple gene loading vectors (SIM) system and a modified SIM system were employed to introduce these genes (Figure 2B). The SIM system was originally developed for efficient sequential introduction of an unlimited number of gene loading vectors (GLVs), or simultaneous introduction of multiple GLVs, into human/mouse artificial chromosomes (HAC/MAC) (Suzuki et al., 2014; Uno et al., 2018; Suzuki et al., 2020). Both the SIM system and the modified SIM system use gene-loading modules called SIM cassettes, which contain recognition sequences for the Bxb1 and/or φC31 recombinases/integrases, to combine a maximum of three GLVs. While the SIM system utilizes the Cre/loxP reaction for integration of the combined GLVs into the gene-loading site of the HAC/MAC, the modified SIM system employs CRISPR/Cas9 for integration into a safe harbor locus via NHEJ-dependent knock-in (Figure 2B). Co-transfection of the GLVs and recombinase/integrase expression vectors into 3 × 10⁵ target cells was sufficient for obtaining correctly recombined cell clones (Suzuki et al., 2014). It is important to note that human and mouse genomes contain pseudo-attP sequences, which are recognized by φC31 integrase (Thyagarajan et al., 2001); therefore, validation of GLV integration at the intended site is essential when using the modified SIM system.

TetFE ESCs stably express rtTA and, in a Dox-dependent manner, conditionally express FLP fused to an ERT2 domain (FLPERT2), whose nuclear localization depends on 4-hydroxytamoxifen (4-OHT). Therefore, KO cells can be easily induced by addition of Dox and 4-OHT. This dual regulation system completely prevents spontaneous KO induction caused by leaky activity of FLPERT2 and background expression of Dox-regulated genes. This drug-inducible feature is also advantageous for large-scale preparation of KO cells.

FIGURE 2 | The simultaneous or sequential integration of multiple genes (SIM) system and all-in-one conditional knockout (cKO) methods. (A) All-in-one cKO method. Green circle, nucleotides used to induce a frame-shift mutation; empty triangle, FRT site; FL, 3× FLAG-tag sequence. (B) Schematic representation of the SIM system and the modified SIM system. The SIM system and the modified SIM system enable simultaneous loading of multiple gene loading vectors (GLVs) into a human artificial chromosome/mouse artificial chromosome vector and a safe harbor locus, respectively. The procedure to generate TetFE ESCs is shown as an example of loading multiple GLVs via the modified SIM system.

Application of the all-in-one cKO system in conjunction with TetFE ESCs has been demonstrated in functional analysis of the RNA helicase DDX1 in ESCs.
DEAD box RNA helicases contain characteristic Asp-Glu-Ala-Asp (DEAD) box motifs and are involved in various RNA metabolism processes, including translation, pre-tRNA splicing, and ribosomal biogenesis (Linder and Jankowsky, 2011). DDX1 is a DEAD box RNA helicase suggested to be involved in viral replication, tRNA synthesis, and miRNA processing (Fang et al., 2004; Han et al., 2014; Popow et al., 2014). Further, loss of Ddx1 stalls mouse development at the 2- to 4-cell stage, possibly due to mis-regulation of DDX1-bound mRNAs that are crucial for 1- to 4-cell stage embryo development (Hildebrandt et al., 2019); however, the precise molecular mechanisms underlying these functions have not been fully elucidated. To clarify these molecular functions, we prepared Ddx1 cKO TetFE ESCs and found that loss of Ddx1 resulted in a severe growth defect. Furthermore, the number of TUNEL-positive apoptotic cells was significantly increased in Ddx1−/− ESCs. Accordingly, expression of p53, the master regulator of cell growth and survival, was upregulated in Ddx1−/− ESCs. These results indicated that loss of Ddx1 activated the p53-signaling pathway. Further analysis revealed that loss of DDX1 led to an rRNA processing defect, resulting in p53 activation via a ribosome stress pathway. Consistent with these findings, the apoptotic phenotype of Ddx1−/− ESCs was rescued by disruption of the p53 gene. These molecular analyses illustrate the practical utility of the all-in-one cKO method, whereas most recently developed cKO methods have only been used for proof-of-principle experiments. The disadvantage of the all-in-one cKO method is that it is only applicable to genes whose promoters can drive detectable levels of EGFP expression. Further improvements will be essential to allow application of the method to genes with low expression levels.

DISCUSSION

The recent development of new strategies has greatly reduced the time and labor required for preparation of cKO cells. To encourage wider use of these cKO methods in the academic community, further improvements may be needed. While the methods developed to date allow generation of cKO cells in a single step, most still require relatively large-scale screening to isolate a clone, due to low cKO efficiencies of approximately 5%. Moreover, some CRISPR/Cas9-assisted methods rely on disruption of untargeted alleles (Figures 1D,E), which could restrict the utility of the cKO cells, since cKO cells obtained by these methods express lower levels of the target gene than the parental cells, as the target gene is only expressed from the targeted allele. To overcome these limitations, improvements in targeting rate are required for efficient preparation of homozygous targeted cKO clones. Application of homologous recombination-enhancing drugs/genes may improve efficiency, as demonstrated in the all-in-one cKO method. Further, production of homozygous targeted cKO clones in aneuploid cell lines should further broaden the utility of these approaches. The procedure for KO induction of cKO cells is also a potential hurdle to general application of these methods. Since stable expression of recombinases is potentially toxic to cells, transient recombinase expression is appropriate for KO induction. Drug-inducible recombinase expression systems would also be convenient, and are used in several methods.
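Returning to the screening burden: the back-of-the-envelope sketch below (an illustration added here, not a calculation from the reviewed studies) treats each picked clone as an independent success/failure trial and asks how many clones must be screened to have a 95% chance of recovering at least one correct clone at a given targeting efficiency.

```python
import math

def clones_needed(efficiency, confidence=0.95):
    """Smallest n with 1 - (1 - efficiency)**n >= confidence, assuming each
    screened clone is correct independently with probability `efficiency`."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - efficiency))

for eff in (0.05, 0.40, 0.80):
    print(f"targeting efficiency {eff:.0%}: screen ~{clones_needed(eff)} clones")
# targeting efficiency 5%: screen ~59 clones
# targeting efficiency 40%: screen ~6 clones
# targeting efficiency 80%: screen ~2 clones
```

Under this simple independence assumption, improving efficiency from roughly 5% to the 80-100% range reported for the all-in-one method shrinks the screening effort from dozens of clones to a handful, which is the practical point made above.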
The Dox-inducible, 4-OHT-regulated system applied in the all-in-one cKO method may be optimal for strict regulation of KO induction, as it avoids background induction of KO cells due to leaky expression/activity of KO inducers. Conditional knockout of multiple genes can be useful for analysis of signaling pathways or of the functions of redundant genes. Although simultaneous cKO of two genes was demonstrated using the inducible Cas9-mediated cKO system, the double KO efficiency was 40-50%, which is insufficient for molecular function analysis in most cases. Moreover, the system cannot induce KO of each gene separately. Production of cKO cell lines by applying multiple recombinases and their recognition sequences via the efficient methods reviewed here could resolve these issues.

AUTHOR CONTRIBUTIONS

TS and TH designed the study and wrote the manuscript. ST assisted with the construction of figures.

FUNDING

This work was supported in part by the Japan Society for the Promotion of Science (JSPS) KAKENHI (grant numbers JP18K06047 (TS) and JP23390256 (TH)) and by the Core Research for Evolutional Science and Technology (CREST) program of the Japan Science and Technology Agency (JST) (JPMJCR18S4 to TS).
A workflow to design new directed domestication programs to move forward current and future insect production

doi: 10.1093/af/vfab014

Introduction

Domestication has irrevocably shaped the history, demography, and evolution of humans. It is a complex phenomenon which can be seen as a continuum of relationships between humans and nonhuman organisms, ranging from commensalism or mutualism to low-level management (e.g., game keeping or herd management) or even direct control by humans over resource supply and reproduction (Terrell et al., 2003; Smith, 2011; Teletchea and Fontaine, 2014; Zeder, 2014, 2015). This continuum should not be seen as an obligatory succession of different relationships which ultimately always ends with human control over reproduction for all species involved in a domestication process. For instance, most fish domestications do not involve initial commensal relationships (Teletchea and Fontaine, 2014), and African donkey owners do little to manage the reproduction of African wild asses (Marshall et al., 2014). Moreover, it is worth noting that the domestication process 1) does not involve all populations of a particular species (e.g., some fish populations underwent domestication for aquaculture while wild conspecific populations still occur; Teletchea and Fontaine, 2014) and 2) is not irreversible (i.e., feral populations). The complexity of the domestication process is mirrored by the diversity of past domestication histories. For instance, three main patterns of domestication histories, the "domestication pathways," can be identified for animal species (Zeder, 2012a, 2012b, 2015; Larson and Fuller, 2014; Frantz et al., 2016). The commensal pathway (e.g., dog and cat domestications) does not involve intentional action from humans; rather, as people manipulate their environment, some wild species are attracted to parts of the human niche, and commensal relationships with humans can subsequently arise for the tamest individuals of these wild species (Zeder, 2012b; Larson and Burger, 2013). Over generations, relationships with humans can shift from synanthropic interactions to captivity and human-controlled breeding (Larson and Fuller, 2014). The prey pathway (e.g., domestications of large herbivorous mammals) requires human actions driven by the intention to increase food resources for humans. The pathway starts when humans modify their hunting strategies into game management to increase prey availability, perhaps as a response to localized pressure on the supply of prey. Over time and with the tamest individuals, this game management evolves into herd management based on control over the movements, feeding, and reproduction of animals (Zeder, 2012a; Larson and Burger, 2013). Finally, the directed pathway (e.g., domestication of transport animals; Larson and Fuller, 2014) is triggered by a deliberate and directed process initiated by humans in order to control the movement, food supply, and reproduction of a wild species in captive or ranching conditions (Zeder, 2012a).
All pathways lead to animal population evolution shaped by the new selective pressures specific to the domestication environment (Wilkins et al., 2014). The divergence from wild ancestors further increases for species for which humans reinforce their control over the population life cycle while decreasing gene flow between populations engaged in the domestication process and their wild counterparts (Teletchea and Fontaine, 2014; Lecocq, 2019). This control can ultimately result in selective breeding programs or organism engineering (e.g., genetically modified organisms) that are developed to intentionally modify some traits of interest (Teletchea and Fontaine, 2014; Lecocq, 2019).

Implications

• Insect farming is expected to expand in the near future, but domestication is a long and difficult process which is often unsuccessful. Considering hits and misses from past directed domestications of insects and other species, we here provide a workflow to avoid common pitfalls in directed domestication programs.

• This workflow underlines that it is crucial to find relevant candidate species for domestication. Candidate species must address a human need/demand and meet a set of minimal requirements that shape their domestication potential. The domestication potential can be defined through an integrative assessment of key traits involved in biological functions.

• Geographic differentiation of key traits in a candidate species and the maintenance of the adaptive potential of farmed populations should also be considered to facilitate domestication and answer future challenges.

Around 13,000 years ago, a first wave of domestication took place. It mainly concerned the terrestrial vertebrate and plant species that dominate the agricultural world today (Diamond, 2002; Duarte et al., 2007). Noteworthy examples of insects involved in this wave include the silkworm (Bombyx mori, Lepidoptera) and the honeybee (Apis mellifera, Hymenoptera) (see domestication histories reviewed in Lecocq, 2019). Many insect domestication events started recently, in the 20th century (Lecocq, 2019), concomitantly with aquatic species (Hedgecock, 2012) and some crop taxa (Leakey and Asaah, 2013), during the so-called new wave of domestication (i.e., the large number of domestication trials since the start of the 20th century). Most domestications of this new wave follow a directed pathway through planned domestication programs (Teletchea and Fontaine, 2014; Lecocq, 2019). This new wave has been facilitated by technological advances in captive environment control and animal food production. However, the triggering factor of this wave has been the emergence of new unmet human needs. Indeed, new domestication events appear unlikely when the human needs that could be met by targeted species (e.g., human food supply) are already addressed by wild or already domesticated species (Diamond, 2002; Bleed and Matsui, 2010; Freeman et al., 2015). For instance, many of the recent aquatic species domestications have been triggered by the need to meet the rising human demand for aquatic products while wild fishery catches are no longer sufficient. Similarly, bombiculture (i.e., production of bumblebees, Hymenoptera, Bombus spp.) is an insect example of domestication triggered by an unmet human demand: the development of fruit production (e.g., tomatoes, raspberries) in greenhouses, which required importing insects such as bees to ensure the pollination ecosystem service.
However, previously domesticated species, such as honeybees, are quite inefficient pollinators for such crops, whereas bumblebees are ideal pollinators for these plants (Velthuis and van Doorn, 2006). This has led to the domestication of several bumblebee species since the 1980s (Velthuis and van Doorn, 2006). Overall, for insects, as for many other species, recent domestication programs have been triggered by needs to produce biological control agents (e.g., ladybugs, Coleoptera, Coccinellidae), pets (e.g., the hissing cockroach, Blattodea, Gromphadorhina portentosa), and laboratory organisms (e.g., fruit flies, Diptera, Drosophila spp.), or for sterile insect technique development and raw material/food production (reviewed in Lecocq, 2019). New instances of insect domestication can be expected in the near future, as several authors and international organizations claim that larger, optimized, and new insect productions will be part of the solution to ensure human food/sanitary security and to address new demands for pets in the coming decades (van Huis et al., 2013; Gilles et al., 2014; Lees et al., 2015; Mishra and Omkar, 2017; Thurman et al., 2017; Saeidi and Vatandoost, 2018). Here, we speculate that these future domestications will mainly follow a directed pathway, as observed for other species involved in the new wave of domestication. These future domestication programs will be challenging since, despite technological developments, directed domestication is still a long and difficult process which often ends up being unsuccessful. Even when the life cycle is controlled by humans, major bottlenecks can still hamper the development of large-scale production. Although only a limited amount of information about domestication failure rates is available in the literature, past domestication programs of species involved in the new wave of domestication show that many new programs lasted only a couple of years before being abandoned (e.g., for fish: Metian et al., 2019; for insects: Velthuis and van Doorn, 2006). The main causes of these failures are technical limitations, socioeconomic constraints, or intrinsic species features (Liao and Huang, 2000; Diamond, 2002; Driscoll et al., 2009). Potential solutions to facilitate domestication have been investigated for plants and vertebrates (e.g., Diamond, 2002; DeHaan et al., 2016; Toomey et al., 2020a). Conversely, insects have received very little attention to date (Lecocq, 2019). Here, we consider feedback from past directed domestication programs of insects and other species to provide a conceptual workflow (Figure 1) to facilitate future insect domestication programs following a directed pathway (from this point on, domestication will refer to the directed pathway). This workflow ranges from the selection, within wild biodiversity, of biological units (at the species and intraspecific levels) with which to start a new production, to the development of selective breeding programs. We consider that technical limitations are not a major issue in insect domestication. Indeed, production systems (i.e., human-controlled environments in which animals are reared and bred) are already available for several phylogenetically distant insect species with different ecology, physiology, and behavior (Leppla, 2008). Thus, future insect domestications could likely be based, with potentially minor adjustments, on already existing production systems.
Therefore, we focus here on how to avoid pitfalls due to socioeconomic constraints or intrinsic species features, in order to move forward ongoing and future directed insect domestication programs in response to human demands.

Backing the Right Horse by Finding the Right Candidate Species for Domestication

Domestication processes which meet needs that can be more easily addressed by other means (e.g., wild catches or other domesticated species), as well as productions with low productivity and/or profitability, are often doomed to failure (e.g., Diamond, 2002; DeHaan et al., 2016). Therefore, any new planned domestication program should consider how it could respond to an unmet human requirement with a viable and efficient business model. This can be at least partially answered by an evaluation of potential candidate species for domestication before starting large-scale production.

First: identifying an unmet human need or demand to define new candidate species

A human need or demand can focus on a species of interest (species-targeted domestication). Such domestications happen 1) when a wild species already exploited by humans becomes rare (e.g., for insects, see Lecocq, 2019) or protected (e.g., the European sturgeon, Actinopterygii, Acipenser sturio) in the wild, 2) to allow reintroduction for wildlife conservation (e.g., for butterflies, Crone et al., 2007), or 3) to develop sterile insect techniques (see Lecocq, 2019). At this stage, the species of interest is regarded as a candidate species that must be further studied to assess the feasibility of its domestication (Figure 1). The need or demand for a particular ecosystem service can also spark new species farming (service-targeted domestication; see also DeHaan et al., 2016), as exemplified by bumblebee domestication (Velthuis and van Doorn, 2006). Since most ecosystem services can be ensured by numerous taxa, several candidates for domestication could be identified. This raises the need to highlight, among available candidates, those that maximize the chance of going successfully through the domestication process (DeHaan et al., 2016).

Second: the importance of an integrative assessment of candidate species

Before going any further in the development of a domestication program, special attention should be paid to international and national regulations regarding sampling, transport, and use of candidate species. Indeed, such regulations can prevent producing or trading a species in some areas (e.g., Perrings et al., 2010; Samways, 2018), making its domestication economically unattractive or pragmatically useless.

Figure 1. A seven-step workflow to develop a fruitful insect production. 1. Identification of an unmet human demand. 2. Identifying candidate species that could meet the demand through a multifunction and multitrait assessment jointly developed with stakeholders. 3. Decision-making rules established with stakeholders highlight species with high domestication potential (here, one species, but several species can be chosen). 4. Investigating the interest of geographic differentiation between wild populations (prospective units) of the species, similar to steps 2 and 3, to highlight units with high domestication potential (two units in this fictive example). 5. Creating the initial stock through a pure breeding or cross breeding strategy, with attention paid to the genetic diversity of this stock (here, a cross breeding strategy is used). 6. Initial stock improvement through selective breeding programs and/or wild introgression to minimize adverse effects and reinforce beneficial domestication effects. 7. Production evolution according to human demand and environmental changes, thanks to the adaptive potential of the stock and the methods developed in the previous step. When no adaptation can be developed, a new domestication could be considered. Wild biodiversity is considered at the species and intraspecific levels.
Such regulations can thus limit the number of potential candidates or make a species-targeted domestication unfeasible. Wild insect species are not all suitable candidates for domestication. Indeed, each species has a specific "domestication potential" (adapted from Toomey et al., 2020a): a quantification of how favorable the expression of key traits is for domestication and subsequent production. Several behavioral, morphological, phenological, and physiological key trait expressions have been highlighted as relevant to facilitate domestication and subsequent production (e.g., for noninsects: Diamond, 2002; Driscoll et al., 2009). Considering insect specificities, we state that these expressions include high growth rate, high food conversion ratio, generalist herbivorous or omnivorous feeding, high survival rate, short birth spacing, polygamous or promiscuous mating, large environmental tolerance, high disease resistance, gregarious lifestyle, and a diet easily supplied by humans. This list should be completed with additional key traits specific to the domestication purpose. For instance, pollination efficiency is relevant for pollination-targeted domestication, while nutritional quality is important for edible insect domestication. Moreover, the expression of socioeconomic key traits must also be considered in domestication potential assessment, such as high yield per unit, high sale value, established appeal for consumers, and useful byproducts (e.g., for silkworm; Lecocq, 2019). Finally, potential environmental consequences of future production, such as risks of biological invasions associated with the development of international trade (Lecocq et al., 2016), should be considered through the evaluation of relevant traits (e.g., invasive potential, which corresponds to the ability of a species to trigger a biological invasion outside its natural range). Overall, the set of key traits can be defined thanks to the advice or expectations of stakeholders (consumers, environmental managers, policy makers, producers, and socioeconomists) (Figure 1; see a similar approach for fish in Toomey et al., 2020a). It is worth noting that key traits 1) are involved in different biological functions (behavior, growth/development, homeostasis, nutrition, reproduction) and 2) are not necessarily correlated with each other, implying that the expression of one trait cannot be inferred from other traits (Toomey et al., 2020b). This means that species domestication potential must be assessed by a multifunction and multitrait integrative framework (Figure 1). Moreover, species might present specificities in the wild that are not maintained in production systems, because the expression of key traits, like any phenotypic trait, is determined by genetic divergence and environment, as well as the interaction between these two factors (Falconer and Mackay, 1996). Therefore, an efficient assessment should be performed in experimental conditions as close as possible to the production system. Overall, such an assessment can be seen as heavy-going and time- and money-consuming.
However, the complexity of multifunction and multitrait assessment in standardized conditions is offset by the minimization of the risk of starting a long and difficult domestication program with the wrong candidate species.

Third: reaching a consensus to choose relevant candidate(s) to start domestication

Making an integrative assessment of domestication potential should not hide the fact that some key traits can be more important than others. For instance, a very low survival rate or low reproduction rate during the assessment will certainly stop ongoing domestication trials because it prevents the completion of the life cycle. Therefore, a minimal expression threshold (i.e., a minimum threshold for a trait expression which must be met or else the biological unit is not suitable for domestication programs; e.g., a survival rate below which an animal production would not be economically feasible) should be defined, potentially by a panel of stakeholders, for the most important traits relative to the domestication purpose (see similar approaches in DeHaan et al., 2016; Toomey et al., 2020a). When a species does not meet this threshold, it must be regarded as void of domestication potential. This threshold must be carefully defined, even in species-targeted domestication programs, to avoid starting large-scale domestication programs with issues that could be costly and slow or impossible to fix later in the process. When comparing key trait expressions between species, it is likely that a candidate displays a favorable expression for one key trait (e.g., best nutritional value) but not for another (e.g., lowest survival rate). This requires reaching a consensus across the results of the key trait assessment to identify the best candidate species for a service-targeted domestication, or to objectively assess the relevance of a species-targeted domestication (Figure 1; e.g., for noninsect species: Quéméner et al., 2002; Alvarez-Lajonchère and Ibarra-Castro, 2013; DeHaan et al., 2016). Scoring solutions could be used, with weighting coefficients to integrate the potentially different levels of importance of key traits due to socioeconomic factors, absolute prerequisites for domestication, or production constraints. Weighting coefficients can be defined through surveys of stakeholders' expectations (Figure 1; see examples in Quéméner et al., 2002; Toomey et al., 2020a). Since expectations might vary across stakeholders, decision making should be based on a consensus between all parties involved (see strategies to solve complex scientific and socioeconomic issues and consensus solutions in Hartnett, 2011; Wyborn et al., 2019; Toomey et al., 2020a). Ultimately, a weighted integrative assessment of candidate species allows highlighting those that would likely foster new fruitful domestication programs for service-targeted domestication, or confirms/refutes the relevance of a species-targeted domestication process. These candidates are thus called species with high domestication potential.

Getting Off on the Right Foot Thanks to Intraspecific Diversity

Fourth: having the best intraspecific unit to start new domestication programs

Once a new species with high domestication potential has been identified, considering geographic differentiation between allopatric groups of conspecific populations (commonly observed in insects; e.g., Araki et al., 2009; Uzunov et al., 2014) can be helpful to further facilitate domestication programs (Toomey et al., 2020a).
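To make the scoring approach described in the third step concrete, the sketch below combines minimal expression thresholds with stakeholder-derived weighting coefficients to rank candidates (species or, in the fourth step, prospective units). All trait names, weights, and threshold values are hypothetical placeholders, not values proposed by the article.

```python
# Hypothetical stakeholder-derived weights for normalized key-trait expressions (sum to 1).
WEIGHTS = {"survival_rate": 0.3, "growth_rate": 0.2,
           "reproduction_rate": 0.3, "sale_value": 0.2}
# Minimal expression thresholds: if unmet, the candidate is void of domestication potential.
MIN_THRESHOLDS = {"survival_rate": 0.4, "reproduction_rate": 0.3}

def domestication_potential(traits):
    """Weighted score in [0, 1] from trait expressions normalized to [0, 1],
    or None if any minimal expression threshold is not met."""
    for trait, floor in MIN_THRESHOLDS.items():
        if traits[trait] < floor:
            return None
    return sum(weight * traits[trait] for trait, weight in WEIGHTS.items())

candidates = {
    "candidate_A": {"survival_rate": 0.8, "growth_rate": 0.6,
                    "reproduction_rate": 0.7, "sale_value": 0.5},
    "candidate_B": {"survival_rate": 0.2, "growth_rate": 0.9,   # fails the survival floor
                    "reproduction_rate": 0.8, "sale_value": 0.9},
}
scores = {name: domestication_potential(traits) for name, traits in candidates.items()}
viable = sorted((score, name) for name, score in scores.items() if score is not None)
print(scores, "-> highest domestication potential:", viable[-1][1] if viable else "none")
```

In practice, the weights, the thresholds, and the rule for combining them would be negotiated with the stakeholder panel, as emphasized above; the code only illustrates the mechanics of a weighted, threshold-gated comparison.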
Such allopatric population groups can indeed present divergent demographic histories, which can shape genetic and phenotypic specificities through 1) gene flow limitation or disruption, 2) random genetic drift, and/or 3) local adaptation (Mayr, 1963; Avise, 2000; Hewitt, 2001; Toomey et al., 2020a). This can ultimately lead to differentiation in key traits and, thus, to divergent domestication potentials between wild population groups. A few past domestication histories show that geographic differentiation can facilitate domestication (e.g., for fishes: Toomey et al., 2020a; for crops: Leakey, 2012). In insects, the domestication of the buff-tailed bumblebee (Hymenoptera, Bombus terrestris) is one of the few stunning examples where the inclusion of population specificity in domestication programs fostered a fruitful economic development. The buff-tailed bumblebee displays significant differentiation in key traits (e.g., foraging efficiency, colony size, and diapause condition) between differentiated groups of populations corresponding to subspecies (Velthuis and van Doorn, 2006; Kwon, 2008; Lecocq et al., 2016). In the early years of production, European bumblebee breeders tried to domesticate several subspecies. Within a short space of time, one subspecies (B. terrestris dalmatinus) proved to have superior characteristics from a commercial point of view (i.e., largest colonies, highest rearing success rate, high pollination efficiency) and became the dominant taxon in the bombiculture industry (Velthuis and van Doorn, 2006). Similarly, non-African honeybees were favored for domestication and production over African honeybees due to key traits that facilitate beekeeping (e.g., low tendency to swarm, survival in temperate areas, low aggressiveness) (Wallberg et al., 2014). The potential importance of geographic differentiation for insect domestication programs raises the question of how it should be integrated into domestication processes. To this end, a new integrative approach has recently been developed for fish domestication (see Toomey et al., 2020a). This approach provides an integrative assessment of differentiated allopatric population groups through three steps (Figure 1). The first step aims at classifying wild populations of a targeted species into prospective units through phylogeographic or systematic methods. These units are groups of allopatric populations that are likely differentiated in key trait expressions. The second step provides an integrative multifunction and multitrait assessment, similar to the interspecific comparison of domestication potential but applied to prospective units. Finally, the last step highlights prospective units with higher domestication potential (so-called units with high domestication potential, UHDP) through the calculation of a domestication potential score, with the help/advice of stakeholders (see Toomey et al., 2020a).

Fifth: constituting the best stock to start new domestication programs

When several UHDP are highlighted as of interest, the question arises as to which strategy should be adopted to constitute the initial stock (Figure 1): 1) keeping only one UHDP or breeding several UHDP apart ("pure breeding" strategy) or 2) mixing UHDP ("cross breeding" strategy) (Falconer and Mackay, 1996). Pure breeding consists of starting with one biological unit and continuously improving it through time (e.g., for B. terrestris, Velthuis and van Doorn, 2006, or A. mellifera, Uzunov et al., 2014).
It is an effective strategy when one biological unit presents a much higher domestication potential than the others. In contrast, cross breeding could be an interesting alternative (e.g., see trials with the tasar silkworm, Lepidoptera, Antheraea mylitta; Lokesh et al., 2015) when several units present a similar domestication potential or complementary interests. It consists of crossing two or more biological units with the aim of obtaining progeny with better performance than the parents, through the complementary strengths of the two parent biological units and heterosis (i.e., hybrid vigor). However, it is a hit-or-miss strategy, since results are hardly predictable (e.g., negative behavioral consequences in A. mellifera crossings; Uzunov et al., 2014). The choice of strategy must be made on a case-by-case basis. Further attention should be paid to genetic diversity when constituting the initial stock (Figure 1). If this stock is constituted from a low number of individuals and/or closely related individuals, the resulting low overall genetic diversity of farmed populations will quickly lead to inbreeding issues, which can be especially damaging in some insect groups such as Hymenoptera (Gerloff and Schmid-Hempel, 2005). This is even more important in the pure breeding strategy, which most likely leads to a lower initial genetic diversity than cross breeding approaches. Therefore, care should be taken that a sufficient number of individuals/families (i.e., a sufficient effective size) is considered (i.e., sampling strategy) in order to 1) have sufficient initial genetic variability and avoid sampling kin individuals, which increases the risk of future inbreeding issues, 2) mitigate the risk of sampling suboptimal genotypes which are not representative of the population group, and 3) have sufficient genetic variability for future selective breeding programs (Toomey et al., 2020a).

Going Further in the Domestication Process: The Wise Way

Sixth: improving stocks undergoing domestication

During domestication, farmed populations undergo new selective pressures from the rearing environment, a relaxation of wild environmental pressures, and other genetic processes, such as founder effects, genetic drift, or inbreeding (Wilkins et al., 2014). These processes lead to genetic, genomic, and phenotypic differentiation (Mignon-Grasteau et al., 2005; Wilkins et al., 2014; Milla et al., 2021), which is overall poorly studied in insects compared with other taxa (Lecocq, 2019). Yet, these processes can trigger the changes in key trait expression that are often observed in domesticated species (e.g., for insects: higher tameness, lower aggressiveness toward humans and conspecifics; Latter and Mulley, 1995; Adam, 2000; Krebs et al., 2001; Zheng et al., 2009; Chauhan and Tayal, 2017; Xiang et al., 2018). These changes can facilitate domestication or lead to an improvement in performance (i.e., beneficial changes) that enhances the profitability of the production sector (e.g., higher silk production in silkworm; Lecocq, 2019). However, some changes can also be unfavorable for domestication and subsequent production (i.e., adverse changes), as shown in other taxa (e.g., reproduction issues in fish; Milla et al., 2021). Selective breeding programs are widely used as a solution to overcome adverse changes or reinforce beneficial changes shaped by domestication (Figure 1).
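One way to see why the effective size of the initial stock matters, as stressed in the fifth step above, is the textbook idealized-population approximation in which inbreeding accumulates at a rate of about 1/(2Ne) per generation. The sketch below applies this standard approximation to arbitrary, illustrative effective sizes; it is a generic population-genetics illustration, not an analysis from the article.

```python
def expected_inbreeding(ne, generations):
    """Expected inbreeding coefficient after t generations of random mating in an
    idealized population of constant effective size Ne: F_t = 1 - (1 - 1/(2*Ne))**t."""
    return 1.0 - (1.0 - 1.0 / (2.0 * ne)) ** generations

for ne in (5, 25, 100):  # hypothetical effective sizes of the founding stock
    print(f"Ne = {ne:3d}: expected F after 20 generations = {expected_inbreeding(ne, 20):.2f}")
# Ne =   5: expected F after 20 generations = 0.88
# Ne =  25: expected F after 20 generations = 0.33
# Ne = 100: expected F after 20 generations = 0.10
```

Under this simple model, a stock founded from only a handful of families accumulates inbreeding an order of magnitude faster than one founded from a broad sample, which is the rationale for the sampling-strategy recommendations above.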
The efficiency of such selective breeding programs has been demonstrated for several taxa (e.g., broiler chicken, Gallus gallus domesticus, Galliformes: Tallentire et al., 2016; Atlantic salmon, Salmo salar, Salmoniformes: Gjedrem et al., 2012), including insects (e.g., Adam, 2000; Simões et al., 2007; Zanatta et al., 2009; Bourtzis and Hendrichs, 2014; Niño and Cameron Jasper, 2015). Despite the success of numerous breeding programs, they can also lead to negative side effects. This is well known in livestock (Rauw et al., 1998), but it has also been investigated in insects (e.g., Oxley and Oldroyd, 2010). An alternative solution to deleterious changes shaped by domestication relies on the introgression of wild individuals into farmed populations (Figure 1; Prohens et al., 2017). For instance, in insects, a hybridization was performed between wild African and domesticated European A. mellifera populations to create an Africanized strain that would be better adapted to tropical conditions and present a higher honey production (Spivak et al., 2019). However, despite its efficiency for honey production, its defensive behavior quickly became an issue and is nowadays considered a matter of concern in the Americas (Spivak et al., 2019). Overall, the development of selective breeding programs or wild introgression in insect domestication could be of great interest, but attention should be paid to the traits selected and to potential negative consequences.

Seventh: keeping one step ahead by maintaining the adaptive potential of production

The relevance of an insect production depends on the socioeconomic and environmental contexts, which can change over time. First, the triggering factor of domestication events, the human demand/need, can change with time, and/or additional demands can appear alongside the original ones due to market fluctuations, new regulations, or technological development. Second, ongoing global changes (e.g., global warming, pollution) can impact production systems (i.e., outdoor production) and/or the availability of important resources for farming (Decourtye et al., 2019). This places a premium on maintaining the adaptive potential of insect production over time, jointly with stakeholders, through species intrinsic features, selective breeding programs, wild individual introgressions, or the development of new domestication programs (Figure 1). Insect farming can face these changes thanks to species intrinsic features such as large climatic tolerance or a generalist diet. In the context of global changes, the ability to cope with environmental changes is thus valuable information that should be considered early in the process, during the assessment of candidate species' domestication potential (see examples of species-specific responses to climate change or abiotic parameters between closely related species in Oyen et al., 2016; Martinet et al., 2020). Alternatively, insect productions can evolve to deal with socioeconomic and environmental changes through selective breeding programs (i.e., continuous adaptation) to improve farmed populations (through trait selection or wild introgression) or to create new specialized strains (Decourtye et al., 2019). However, selective breeding programs often drive a loss of genetic diversity, which can trigger a lower resilience of farmed stocks (Gering et al., 2019).
Indeed, genetic variability defines a biological unit's ability to genetically adapt to future challenges and contributes to global species biodiversity, which maximizes species survival chances in the long term (Sgrò et al., 2011). This appears even more important considering that some rearing practices can quickly lead to a loss of genetic variability (e.g., beekeepers specializing in queen breeding, with the consequence that a large amount of progeny originates from a few queen mothers; Meixner et al., 2010). Moreover, genetic variability can also be important for population fitness (e.g., this variability is essential for disease resistance and homeostasis in A. mellifera; Meixner et al., 2010). Overall, the maintenance of genetic variability is crucial (Figure 1) and could be facilitated by wild introgressions (Prohens et al., 2017). Finally, in extreme cases in which farmed stocks cannot face or be adapted to new socioeconomic and environmental contexts, it will be necessary to start new domestication programs using new candidates (new wild species or population groups).

Conclusion

Insect farming is expected to expand in the future but remains challenging because of the difficulty of domesticating new species. We proposed a conceptual workflow to avoid major problems commonly encountered during domestication programs. We underlined the importance of 1) considering how a new species production could respond to an unmet human demand with a viable and efficient business model and 2) assessing the domestication potential of candidate species through an integrative assessment. We argued that geographic differentiation between wild populations of a candidate species can be valuable. Finally, we emphasized the importance of maintaining the adaptive potential of productions to answer current and future challenges.
Response of the multiple-demand network during simple stimulus discriminations

The multiple-demand (MD) network is sensitive to many aspects of task difficulty, including such factors as rule complexity, memory load, attentional switching, and inhibition. Many accounts link MD activity to top-down task control, raising the question of how the network responds when performance is limited by the quality of sensory input; indeed, some prior results suggest little effect of sensory manipulations. Here we examined judgments of motion direction, manipulating difficulty by either motion coherence or the salience of irrelevant dots. We manipulated each difficulty type across six levels, from very easy to very hard, and additionally manipulated whether difficulty level was blocked, and thus known in advance, or randomized. Despite the very large manipulations employed, difficulty had little effect on MD activity, especially for the coherence manipulation. Contrasting with these small or absent effects, we observed the usual increase of MD activity with increased rule complexity. We suggest that, for simple sensory discriminations, it may be impossible to compensate for reduced stimulus information by increased top-down control.

Introduction

Diverse studies examining a range of cognitive demands have identified a set of frontal-parietal regions that are consistently involved in a variety of tasks, ranging from response inhibition to working memory to decision making (e.g., Duncan & Owen, 2000; Fedorenko, Duncan, & Kanwisher, 2013; Niendam et al., 2012; Stiers, Mennes, & Sunaert, 2010). Included in this pattern are regions of the dorsolateral prefrontal cortex, extending along the inferior/middle frontal gyrus (IFG/MFG) and including a posterior-dorsal region close to the frontal eye field (pdLFC), parts of the anterior insular cortex (AI), the pre-supplementary motor area and adjacent anterior cingulate cortex (pre-SMA/ACC), and the intraparietal sulcus (IPS). Activity in the MD network increases with increases in many kinds of task difficulty or demand, such as with additional subgoals (e.g., Farooqui et al., 2012), greater working memory demand (Dara et al., 1997), resisting strong competitors (e.g., Baldauf & Desimone, 2014), task switching (e.g., Wager et al., 2004), or a wide range of other task demands (e.g., Crittenden & Duncan, 2014; Jovicich et al., 2001; Marois, Chun, & Gore, 2004; Woolgar, Afshar, Williams, & Rich, 2015). Increased activity in more difficult conditions can also be accompanied by stronger information coding, shown by multivoxel pattern analysis (e.g., Woolgar, Afshar, et al., 2015; Woolgar, Hampshire, Thompson, & Duncan, 2011). Reflecting these widespread effects of demand, the MD network has been suggested to implement top-down attentional control, optimally focusing processing on the requirements of the current task (Miller & Cohen, 2001; Duncan, 2013; see also Norman & Shallice, 1980). One simple way to manipulate task difficulty is through the quality of stimulus information. Some experiments have shown clear MD responses as stimulus discriminability decreases (e.g., Crittenden et al., 2014; Deary et al., 2004; Holcomb et al., 1998; Jiang & Kanwisher, 2003; Sunaert, Van Hecke, Marchal, & Orban, 2000; Woolgar et al., 2011), but this has not always been the case (Cusack, Mitchell, & Duncan, 2010; Dubis et al., 2016; Han & Marois, 2013; Muller-Gass & Schroger, 2007). For example, Cusack et al.
(2010) contrasted hard and easy trials of a task in which participants had to detect a barely perceptible ripple in an oscillating dot field, and found no neural activation differences between the two sensory difficulty levels, despite substantial differences in behavioral performance and robust BOLD contrast for a different task manipulation (attention switching). In an important study, Han and Marois (2013) investigated activity in parts of the MD system during a task in which three letter targets were to be identified in a rapid stream of digit nontargets. In the baseline condition, the three letters occurred in immediate succession; to increase demand, they either inserted a nontarget into the series of three targets, or reduced exposure duration. While activity in frontal-parietal areas increased with the addition of a distractor, exposure duration had little effect. To interpret their findings, Han and Marois (2013) appealed to the distinction made by Norman and Bobrow (1975) between data-limited and resource-limited behavior. Norman and Bobrow (1975) proposed that, for any task, some function (the performance-resource function, or PRF) relates performance to the investment of attentional resources. When this function is increasing, behavior is said to be resource-limited, and additional investment is repaid by improved performance. When the function asymptotes, further investment has no positive effect, and performance is said to be data-limited. In line with a link of MD activity to attentional investment, Han and Marois (2013) used these ideas of data- and resource-limitation to explain their findings. They proposed that, in their task, brief exposure duration created data limits, which could not be offset by increased fronto-parietal recruitment, while adding a distractor introduced resource limits by calling for increased attentional focus. In general it is not known when performance will be resource- or data-limited, but within this general framework, many patterns of results are possible. Figure 1A illustrates a case in which, as difficulty level varies, there is no reason to expect increased attentional allocation. In this case, PRFs asymptote at different performance levels for the different levels of task difficulty, but across difficulty levels, the asymptote occurs at the same level of allocated resource. Figure 1B illustrates the opposite case, with increased task difficulty potentially offset by increased resource allocation. This uncertainty over the role of attentional investment in different cases of perceptual discrimination could help to explain disparate results in the literature, with some cases (e.g., the exposure duration manipulation of Han and Marois (2013)) more resembling Figure 1A, and others (e.g., the distractor manipulation of Han and Marois (2013)) more resembling Figure 1B. In our first experiment, we sought to strengthen the evidence that, for simple sensory discriminations, MD activity can be rather independent of task difficulty, providing an exception to the "multiple demand" pattern. For this purpose we used a motion discrimination task with two kinds of difficulty manipulation: motion coherence and the salience of task-irrelevant dots. For the strongest possible effect, we manipulated both variables over a wide range, moving performance from close to ceiling to close to chance.
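To make the data-limited versus resource-limited distinction concrete, the sketch below generates the two families of performance-resource functions caricatured in Figures 1A and 1B, using an assumed saturating form P(r) = chance + (Pmax − chance)(1 − e^(−r/k)). The functional form and all parameter values are illustrative assumptions introduced here, not fits to any data.

```python
import numpy as np

def prf(resource, p_max, k, chance=0.5):
    """Assumed saturating performance-resource function: performance rises from
    chance level toward p_max as the allocated attentional resource r increases."""
    return chance + (p_max - chance) * (1.0 - np.exp(-resource / k))

r = np.linspace(0.0, 10.0, 101)

# Figure 1A-like case: harder levels lower the asymptote (smaller p_max) but all
# curves saturate over the same resource range (same k), so extra attentional
# investment cannot offset the data limit.
data_limited = {level: prf(r, p_max=p, k=1.5)
                for level, p in enumerate([0.95, 0.85, 0.70], start=1)}

# Figure 1B-like case: harder levels need more resource to approach the same
# asymptote (larger k), so extra investment is repaid by better performance.
resource_limited = {level: prf(r, p_max=0.95, k=k)
                    for level, k in enumerate([1.0, 2.5, 5.0], start=1)}

for level in (1, 2, 3):
    print(level,
          round(float(data_limited[level][-1]), 2),
          round(float(resource_limited[level][-1]), 2))
```

Only the k parameter differs between the two families; that single assumption determines whether increased difficulty should, in principle, recruit more top-down control.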
In the task demand literature, several studies have shown that, as opposed to a monotonic increase of MD activity with task difficulty, there was an inverted U-shaped response (Callicott et al., 1999; Linden et al., 2003), or a plateau after a certain difficulty level (Marois & Ivanoff, 2005; Todd & Marois, 2004; Mitchell & Cusack, 2008). A possible interpretation is that MD activity initially increases with task demands, but plateaus or even declines once the task becomes impossible even with maximal attention. In our study we examined MD activity over the full range of possible task difficulties. In addition to manipulating both aspects of difficulty over a wide range, between participants we varied whether difficulty levels were mixed or blocked. In the mixed design, levels of difficulty were presented in random order, without advance cueing of the level to be experienced on a given trial. In contrast, difficulty level was known in advance in the blocked design. With this manipulation, we asked whether MD activity is driven more proactively, by expectancy of forthcoming demand, or more reactively, when high demand is experienced on a current trial. Finally, in modelling our fMRI data, we attempted to remove effects of decision time, expected to increase with either sensory or selection difficulty. In two prior studies of motion coherence, trials were modelled simply as events, without regard for their duration (Kayser et al., 2010a, 2010b). In this case, greater brain activity associated with decreasing motion coherence may simply have reflected longer processing times. To diminish such effects, our fMRI model explicitly included decision time for each trial. Though PRF shapes are generally unknown, our use of two different demand manipulations afforded the possibility of different outcomes. In particular, we expected that top-down control could be especially important in the irrelevant-dots condition, leading to larger effects of demand on MD activity. Though Experiment 1 showed results in line with this expectation, they occurred against a background of generally weak effects, and no significant difference between the two manipulations. In Experiment 2 we re-examined the coherence and irrelevant-dots conditions in a new group of participants, and compared these sensory demands with a manipulation of rule complexity.

Participants

Participants were randomly assigned to either the blocked or mixed group, with this variable manipulated between participants to minimize carryover effects. A total of 40 participants took part in the experiment. Twenty-one participants (9 male, 12 female, ages 19-31, mean = 25.7) took part in the blocked group, and nineteen participants (11 male, 9 female, ages 19-36, mean = 23.9) took part in the mixed group. Participants were recruited from the volunteer panel of the MRC Cognition and Brain Sciences Unit and paid to take part. An additional 16 participants were excluded (10 participants had excessive motion > 5 mm, and another 6 had poor performance, with accuracies more than three median absolute deviations below the median in at least one condition). All participants were neurologically healthy, right-handed, with normal hearing and normal or corrected-to-normal vision. Procedures were carried out in accordance with ethical approval obtained from the Cambridge Psychology Research Ethics Committee, and participants provided written, informed consent before the start of the experiment.
Experimental Setting and Design

Each participant performed two conditions of a motion coherence task, referred to here as the coherence condition and the irrelevant-dots condition. Each condition spanned six levels of difficulty. Difficulty type and level served as within-subject factors. This resulted in a group (blocked vs. mixed) × difficulty type (coherence vs. irrelevant-dots) × difficulty level (level 1 ~ level 6) design.

Stimuli and Procedures

The task structure was similar for both blocked and mixed designs (see Figure 2). On each trial, participants were presented with a random dot kinematogram (RDK) displayed for 200 ms, with an interval of 2-3 s between RDKs of successive trials. Participants were asked to judge the direction of the dominant dot motion, leftward or rightward. They were given 2 s to press one of two response buttons (up or down) to indicate their decision. The mapping between stimulus (left or right) and response (up or down) varied randomly between blocks, ensuring that, across the whole experiment, there were equal numbers of left-up/right-down and left-down/right-up trials for each difficulty level in each condition. Trials were run in blocks of six. At the beginning of each block, the response mappings were displayed on the screen during a 2 s instruction period preceding the first trial, and remained on the screen throughout the entire block. The response mappings were indicated by two arrows above and below each other within the dot field aperture, with one arrow pointing right and the other pointing left. In the blocked version, information about the difficulty level of the following block was also shown on the screen (e.g., 'Difficulty Level is 1') during the instruction period, while in the mixed version, participants were shown the instruction 'Prepare for Next Block'. In the blocked design, the difficulty level of the six trials in a block remained the same, covering all six difficulty levels every six blocks, while in the mixed design each block contained one trial of each difficulty level, randomized within the block. Participants were given two practice blocks of each condition (coherence and irrelevant-dots) before entering the scanner. Within the scanner, each condition constituted a separate run, each lasting ~18 minutes. The order of the two conditions was counterbalanced between participants. In each run there were 60 blocks in total, thus 60 trials per difficulty level, and 30 trials of each left/right motion direction per difficulty level. Figure 2C illustrates the coherence and irrelevant-dots conditions across the six difficulty levels. In each RDK there were 64 red dots (RGB channels [112.5, 0, 0]) moving dominantly either left or right for 200 ms (circular aperture with a diameter of ~8.5° of visual angle, dot size = 12 pixels diameter, dot speed = 5 pixels/s, dot lifetime = 5 frames). In the coherence condition, only red dots were present, and difficulty was manipulated by decreasing the percentage of dots that moved coherently. The six difficulty levels corresponded to 85%, 60%, 40%, 25%, 15%, or 10% of the dots moving either left or right, while the remaining dots moved in random directions. In the irrelevant-dots condition, the red dots were fixed at 85% coherence, but an additional distractor dot field was present. The distractor dot field consisted of 576 yellow dots, all of which moved randomly, with a net direction of zero (dot size = 12 pixels, dot speed = 7 pixels/s, dot lifetime = 5 frames).
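The coherence manipulation can be summarized by a minimal sketch of how individual dot directions might be assigned for a given coherence level. This is a simplified illustration of standard RDK logic, not the stimulus code actually used (which ran in Matlab with the Psychophysics Toolbox), and it ignores dot lifetime, speed, and aperture handling.

```python
import numpy as np

COHERENCE_LEVELS = [0.85, 0.60, 0.40, 0.25, 0.15, 0.10]  # the six difficulty levels

def dot_directions(n_dots, coherence, signal_direction_deg, rng):
    """Return one direction (degrees) per dot: a `coherence` fraction of dots moves
    in the signal direction, and the remaining dots move in random directions."""
    n_signal = int(round(coherence * n_dots))
    directions = np.full(n_dots, float(signal_direction_deg))
    directions[n_signal:] = rng.uniform(0.0, 360.0, size=n_dots - n_signal)
    rng.shuffle(directions)
    return directions

rng = np.random.default_rng(0)
# Example: 64 red dots at the hardest level (10% coherence), dominant motion rightward (0 degrees).
dirs = dot_directions(64, COHERENCE_LEVELS[-1], signal_direction_deg=0.0, rng=rng)
print(f"{int(np.isclose(dirs, 0.0).sum())} of {dirs.size} dots carry the signal direction")
```

At the hardest level, only around 6 of the 64 dots share the dominant direction, which conveys how weak the sensory evidence becomes at the low end of the coherence range.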
Participants were asked to ignore the yellow dots and judge the dominant motion direction of the red dots. Six levels of difficulty were created by increasing the salience of the yellow distractors (RGB channels [21.25, 21.25, 0], [42.5, 42.5, 0], [85, 85, 0], [127.5, 127.5, 0], [191.25, 191.25, 0], and [255, 255, 0]). These values were selected from previous pilot testing. Visual stimuli were displayed on a 1920 × 1080 screen (visual angle 25.16 × 14.31°) with a refresh rate of 60 Hz, which the participants viewed through a mirror placed on top of the head coil. The distance between the participant and the screen was approximately 1565 mm. Stimulus presentation was controlled using the Psychophysics Toolbox (Brainard, 1997) in Matlab (2014a, Mathworks, Natick, WA).

The data were preprocessed and analyzed using the automatic analysis (aa) pipelines and modules (Cusack et al., 2014), which called relevant functions from Statistical Parametric Mapping software (SPM 12, http://www.fil.ion.ucl.ac.uk/spm) implemented in Matlab (The MathWorks, Inc., Natick, MA, USA). EPI images were realigned to correct for head motion using rigid-body transformation, unwarped based on the field maps to correct for voxel displacement due to magnetic-field inhomogeneity, and slice time corrected. The T1 image was coregistered to the mean EPI, and then coregistered and normalized to the MNI template. The normalization parameters of the T1 image were applied to all functional volumes. The functional data were high-pass filtered with a cutoff at 1/128 Hz, and spatial smoothing of 10 mm FWHM was applied. Statistical analyses were performed first at the individual level using general linear modeling (GLM). For correct trials, separate regressors were created for each combination of condition and difficulty level. As errors can be a strong driver of MD activity (Kiehl et al., 2000; Ullsperger & Cramon, 2004), error trials and no-response trials were removed using separate regressors. A separate regressor was created for the cue period at the start of each block. All regressors were created by convolving the interval between stimulus onset and response with the canonical hemodynamic response function. Parameter estimates were then analyzed with an ANOVA including the factors group (blocked vs. mixed), difficulty type (coherence vs. irrelevant-dots), difficulty level (level 1 ~ level 6), and ROI (7 MD ROIs). We also performed the same ANOVA on the motion-sensitive visual area MT, using the ROI defined in the SPM anatomy toolbox. An additional whole-brain voxelwise analysis was also performed, to detect any regions that showed a significant positive linear trend for difficulty level, separately for each difficulty type. Activation maps were thresholded at p < 0.05 (FDR corrected).

Experiment 2

A separate set of participants (n = 24, 18 female, ages 19-30, mean = 24.4) was recruited to perform the coherence and irrelevant-dots conditions along with an additional rule condition. Five additional participants were excluded (1 because the top of the brain was not acquired, 2 who had head movements > 5 mm, and 2 who had outlier reaction times more than three median absolute deviations above the median). All participants performed the blocked design. The coherence and irrelevant-dots conditions used a subset (levels 1, 3, and 5) of the same stimuli as previously described, except that this time (Figure 3A) the dots moved in one of four directions (15°, 65°, 115°, 165°). The stimuli and response mappings for the rule condition are illustrated in Figure 3.
Experiment 2
A separate set of participants (n = 24, 18 female, ages 19-30, mean = 24.4) was recruited to perform the coherence and irrelevant-dots conditions along with an additional rule condition. Five additional participants were excluded (one because the top of the brain was not acquired, two who had head movements > 5 mm, and two who had outlier reaction times more than three median absolute deviations above the median). All participants performed the blocked design. The coherence and irrelevant-dots conditions used a subset (levels 1, 3, and 5) of the same stimuli as previously described, except that this time (Figure 3A) the dots moved in one of four directions (15°, 65°, 115°, 165°). The stimuli and response mappings for the rule condition are illustrated in Figure 3. Participants responded using the index and middle fingers of each hand on the four buttons of a response pad (left middle finger for the first button, left index finger for the second button, right index finger for the third button, and right middle finger for the fourth button), with a direct spatial mapping between stimulus direction and response key (Figure 3B, level 1). The dot fields in the rule condition had high coherence (85%) and did not include yellow dots; instead, rule complexity was manipulated using three different mappings between stimulus direction and response key (Figure 3B). At the beginning of each block, participants were presented with a cue indicating the difficulty level of the block. After processing the cue, they pressed a button to begin the six consecutive trials of that block. Each RDK was presented for 200 ms. Participants had up to 10 s to respond, and after a button was pressed, a 2 s ISI preceded the next trial. At the cue for the next block, participants were given feedback on their accuracy and mean reaction time for the previous block. Participants were given two practice blocks of each condition (coherence, irrelevant-dots, and rule) before entering the scanner. Within the scanner, each condition constituted a separate run. The order of the three conditions was counterbalanced across participants. In each run there were 24 blocks in total, thus 48 trials per difficulty level. (Figure 3 caption: (A) the dots moved in one of four directions (15°, 65°, 115°, 165°); additional yellow dots (not shown) were added only in the irrelevant-dots condition. (B) Participants used a response pad to indicate the direction of the moving dots by pressing the corresponding button. A direct mapping (level 1) was used for the coherence and irrelevant-dots conditions, while all three rules were used in the rule condition.)
Behavioral results
As shown in Figure 4A, accuracy decreased and reaction times increased with difficulty in both groups. Overall, proportion correct decreased from a mean of 93.5% at the easiest difficulty level to a mean of 63.0% at the hardest. The Greenhouse-Geisser correction was used to correct for non-sphericity. For accuracy, there were significant main effects of condition, F(1,38) = 4.1, p = 0.049, with slightly higher accuracy for irrelevant-dots, and difficulty level, F(5,190) = 240.36, p < 0.001. There was no main effect of group, F(1,38) = 0.3, p = 0.61, and no interactions. Analysis of reaction times showed a significant main effect of difficulty level, F(5,190) = 69.6, p < 0.001. The reaction time analysis also showed a significant but small interaction of condition × difficulty level, F(5,190) = 3.8, p < 0.01, but no other effects.
fMRI results
Results averaged over bilateral MD regions are shown in Figure 4B, separately for each condition and group. As a first step, we used 2-way ANOVAs (group × difficulty level) to examine the effect of difficulty separately in each condition, averaged across all MD ROIs. In the coherence condition, the effect of difficulty was not close to significance, F(5,190) = 1.7, p = 0.14. For the irrelevant-dots condition, in contrast, the increase in activity across difficulty levels was significant, F(5,190) = 3.1, p = 0.01. There were no significant interactions with group. Next, we tested for linear increases with difficulty level in each condition separately (Figure 4). Though absolute activation levels differed over MD ROIs, trends were largely similar across ROIs (Figure 4B).
The full ANOVA showed a significant main effect of ROI, F(6,228) = 37.3, p < 0.001, along with a significant interaction of ROI and difficulty level, F(30,1140) = 2.6, p < 0.001. To check that our ROI analysis did not miss important effects elsewhere in the brain, we also carried out a standard whole-brain analysis, combining data for the blocked and mixed groups and testing for a linear increase across difficulty levels (see Methods). For the coherence condition, no significant voxels were found anywhere in the brain. For the irrelevant-dots condition, beyond the expected large increases in visual cortex, the test showed scattered, small regions close to our MD ROIs, including preSMA/ACC, AI, and regions of lateral frontal cortex. Lastly, we tested for significant linear increases or decreases in individual participants (see Methods), again combining the blocked and mixed groups and using a whole-MD ROI. With 40 participants and an alpha level of .05, for each test we should expect 2 participants to be judged significant by chance. For the coherence condition, there were 10 significant increases but also 8 significant decreases. For the irrelevant-dots condition, there were 17 significant increases and 3 significant decreases. Overall, these results are broadly similar to those from the standard random-effects analysis, but they further suggest a substantial degree of heterogeneity between participants. ROI results for MT are shown in Figure 4C. In line with prior results (Rees, Friston, & Koch, 2000), MT activity generally declined with increasing difficulty of extracting the motion signal. A 3-way ANOVA with factors group (blocked vs. mixed), condition (coherence vs. irrelevant-dots), and difficulty level (level 1 ~ level 6) showed a significant main effect of difficulty level, F(5,190) = 11.7, p < 0.001. There was also a group × level interaction, F(5,190) = 2.9, p = 0.03, reflecting a stronger difficulty effect in the blocked group. No other significant effects were observed. Tests of within-subjects contrasts on difficulty level showed a significant negative linear trend, F(1,38) = 58.6, p < 0.001.
Behavioral results
In Experiment 2, participants performed the coherence, irrelevant-dots, and rule conditions. Behavioral data are shown in Figure 5A. Separate condition (coherence, irrelevant-dots, and rule) × difficulty level (level 1 ~ level 3) ANOVAs were performed on accuracy and reaction time.
fMRI results
Results for the MD regions are shown in Figure 5B. In this experiment, neither the coherence nor the irrelevant-dots condition showed any trend towards increasing activity with increasing difficulty, in contrast to the rule condition. In the MD regions, we found significant 2-way interactions of condition and difficulty level, F(4,92) = 2.9, p < .05, and of ROI and difficulty level, F(12,276) = 3.6, p < .01, as well as a 3-way interaction of condition, difficulty level, and ROI, F(24,552) = 2.2, p < .05. We further tested for linear increases across difficulty levels in each condition separately. Difficulty level showed a significant linear trend in the rule condition, F(1,23) = 4.5, p < .05; however, there were no linear trends for either coherence, F(1,23) = 1.1, p = 0.32, or irrelevant-dots, F(1,23) = 0.1, p = 0.76. The whole-brain analysis showed no voxels with a significant linear increase across difficulty levels for either the coherence or the irrelevant-dots condition. For the rule condition, significant effects were found in lateral parietal and lateral frontal cortex bilaterally.
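The individual-participant analysis referred to above can be pictured with the following hedged sketch: for each participant, regress ROI activity estimates on difficulty level and count significant positive and negative slopes. The data layout (block-wise estimates), the use of a simple regression t-test, and all names are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np
from scipy import stats

# Hedged sketch of a per-participant linear-trend test: does MD ROI activity
# increase (or decrease) linearly across the six difficulty levels?

rng = np.random.default_rng(1)
n_participants, n_levels, n_blocks_per_level = 40, 6, 10

# Fake data for illustration only: participant x level x block ROI estimates.
data = rng.normal(size=(n_participants, n_levels, n_blocks_per_level))

increases, decreases = 0, 0
for p in range(n_participants):
    levels = np.repeat(np.arange(1, n_levels + 1), n_blocks_per_level)
    y = data[p].ravel()
    slope, intercept, r, pval, stderr = stats.linregress(levels, y)
    if pval < 0.05:
        if slope > 0:
            increases += 1
        else:
            decreases += 1

# Under the null, roughly 5% of 40 participants (about 2) should reach
# p < .05 by chance, which is the benchmark quoted in the text.
print(increases, decreases)
```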
Tests on individual participants showed 9/24 increases and 6/24 decreases in the coherence condition, the same numbers in the irrelevant-dots condition, and 14/24 increases and 2/24 decreases in the rule condition.
Discussion
The defining characteristic of fronto-parietal "multiple-demand" (MD) regions is increased activity for many different kinds of task difficulty. Here, we pursued mixed previous findings suggesting a partial exception: in some cases, MD activity seems largely insensitive to the difficulty of sensory discriminations. To obtain the strongest possible test of such insensitivity, we used a motion coherence task and manipulated two aspects of sensory difficulty over the widest possible range, from very easy to close to chance. We also compared results with difficulty fixed or variable across a block of trials. To ensure good power, across two experiments we tested a total of 64 participants. Clear results were obtained for the manipulation of motion coherence. Across experiments, and across mixed or blocked variation of difficulty level, there was no hint of consistently increased MD activity as performance changed from close to ceiling to close to chance. Following Han and Marois (2013), these results can be interpreted in terms of the distinction drawn by Norman and Bobrow (1975) between data and resource limitations. For motion coherence, the results suggest a scenario similar to that depicted in Figure 1A: decreasing coherence simply decreases the quality of the sensory data, and this cannot be offset by increased attentional focus or top-down control. An intriguing result emerged from examining participants individually. Even for motion coherence, there was a clear suggestion that some participants showed a linear increase of MD activity across difficulty levels. These participants were matched, however, by similar numbers showing decreases. It is worth noting that, under the framework of Norman and Bobrow (1975), altered resource allocation is an option whether or not performance is actually resource-limited. When resource allocation has little effect on performance, it may vary idiosyncratically between participants. Results were less clear for the salience of irrelevant dots. Following Han and Marois (2013), we expected that MD activity might increase with the salience of irrelevant dots, reflecting stronger attempts to focus only on the relevant red dots. The results of Experiment 1 were somewhat in line with this prediction, though even with 40 participants, the effect was not strong enough to differ significantly from the null effect for coherence. In Experiment 2, even the irrelevant-dots condition showed no hint of an overall difficulty effect. Across experiments, results for individual participants again showed a mixture of increases and decreases. Though such variable results allow no strong interpretation, a reasonable speculation concerns variable strategies. At one extreme, a participant could make no attempt to separate red and yellow dots, in which case the yellow dots simply decrease motion coherence, and results should resemble those of a direct coherence manipulation. Under our presentation conditions, effective top-down separation of the two dot fields may have been hard or impossible to achieve, making unselective processing the dominant strategy. In some cases, however, increasing the salience of the yellow dots could have increased top-down attempts to ignore them, reflected in increased MD activity.
Across many different kinds of cognitive demand, it seems that scenarios like that of Figure 1B are by far the most common. In most cases - including the rule complexity manipulation we used in Experiment 2 - increased cognitive demand calls for increased top-down input, ensuring optimal focus on task-relevant processing. The present results show, however, that this is not a universal rule. In line with the ideas of Norman and Bobrow (1975), increased attentional focus may be ineffective in overcoming simple limitations of the sensory data. As reviewed in the Introduction, the literature contains mixed findings concerning the MD response to reduced stimulus discriminability. In speeded tasks, for example, strong increases in MD activity have been reported as the stimuli to be discriminated are made more similar (Jiang & Kanwisher, 2003; Woolgar et al., 2011). As we have indicated, in general we do not know the shapes of PRFs for different tasks. Rapidly deciding which of four lines is shortest, for example, may have very different attentional requirements from a global judgment of motion direction as used in the present work. While many studies in the literature show increasing MD activity with increasing task demands, there have also been studies showing decreased MD activity (Bor et al., 2003), an inverted U-shaped response (Callicott et al., 1999; Linden et al., 2003), or a plateau after a certain difficulty level (Marois & Ivanoff, 2005; Todd & Marois, 2004; Mitchell & Cusack, 2008). For example, Linden et al. (2003) and Callicott et al. (1999) found that the fronto-parietal network initially showed increased activation with increased working memory load, but decreased activation in the highest load condition, close to or beyond the limit of capacity. In our data there was no suggestion of an inverted-U profile; if anything, in some conditions there was a decrease in MD activity over the first few difficulty levels (e.g., Experiment 2, coherence condition), and there was no decrease of MD activity as sensory limits made the task impossible to perform well. One factor affecting MD recruitment might be advance knowledge of difficulty level. The neural basis of expectation in perceptual tasks has been shown to involve top-down modulation of frontal and parietal cortices to enhance sensory processing in the visual cortex (Giesbrecht, Weissman, Woldorff, & Mangun, 2006; Kastner, Pinsk, De Weerd, Desimone, & Ungerleider, 1999; Kok, Jehee, & de Lange, 2012; Sylvester, Shulman, Jack, & Corbetta, 2009). Furthermore, Manelis and Reder (2015) recently demonstrated using MVPA that the regions involved in a working memory task are the same regions that carry information about the upcoming task difficulty during task preparation. It is therefore possible that, in our study, participants in the blocked group could have decided to increase attentional effort in an attempt to compensate for anticipated perceptual difficulty. Our data, however, suggested little difference between the mixed and blocked conditions. Activity across the MD network is increased by many different kinds of cognitive demand (Duncan & Owen, 2000; Fedorenko et al., 2013). In contrast to this "multiple-demand" pattern, the present results show little or no consistent effect of sensory manipulations in a simple motion coherence task. As suggested by Han and Marois (2013), results may reflect the degree to which performance can be improved by increasing attentional investment.
When simple sensory decisions are largely data-limited, decreased accuracy cannot be compensated for by increased attention, leading to little or no enhancement of MD activity.
CT Imaging of Interstitial Lung Diseases
Introduction
To date, computed tomography (CT) remains the most important and valuable radiological modality to detect, analyze, and diagnose diffuse interstitial lung diseases (DILD), based on the unsurpassed morphological detail provided by the high-resolution CT technique. In the past decade, there has been a shift from an isolated histopathological diagnosis to a multidisciplinary consensus diagnosis, which is nowadays regarded as providing the highest level of diagnostic accuracy in patients with diffuse interstitial lung diseases. The 2002 ATS/ERS statement on the classification of idiopathic interstitial pneumonias assigned a central role to high-resolution CT (HRCT) in the diagnostic workup of idiopathic interstitial pneumonias (ATS/ERS consensus classification 2002). The more recent 2013 ERS/ATS statement reinforced that combined clinical data (presentation, exposures, smoking status, associated diseases, lung function, and laboratory findings) and radiological findings are essential for a multidisciplinary diagnosis (Travis et al. 2013). Traditional HRCT consisted of discontinuous 1 mm high-resolution axial slices. The primary focus was on visual pattern analysis, demanding the highest possible spatial resolution.
Because of the intrinsically high structural contrast of the lung, it has been possible to substantially reduce dose without losing diagnostic information. This development has been supported by new detection and reconstruction techniques. Not only the detection of subtle disease and the visual comparison of disease stage but also disease classification and quantification nowadays take advantage of the continuous volumetric data acquisition provided by the multidetector-row CT (MDCT) technique. The following book chapter will focus on acquisition technique, with special emphasis on dose and reconstruction, and on the advantages and new diagnostic options of the volumetric MDCT technique for interstitial lung diseases. Based on evidence from the literature, certain diseases will be covered more specifically, but it has to be noted that for the pattern analysis of the various interstitial lung diseases, the reader is referred to the plethora of other publications and books.
Acquisition Technique
Traditionally, CT and HRCT were strongly differentiated, with the latter being defined by a section thickness of <1.5 mm and the use of an edge-enhancing, high-resolution reconstruction kernel. Since the advent of the MDCT technique, which allows the whole chest to be covered in thin-section technique within one breath-hold, essentially every chest CT is an HRCT. The reconstruction of 1, 3, or 5 mm thick slices determines the number of axial images to be evaluated. Moreover, for a given acquisition dose, a 3 mm thick slice has a better signal-to-noise ratio than a 1 mm thick slice; a 1 mm thick slice, however, offers higher spatial resolution and thus superior morphological detail. These trade-offs determine the choice of reconstruction depending on the clinical indication: analysis of an (advanced) tumor stage is mostly done with thicker slices, while analysis of diffuse interstitial lung disease or focal nodular disease requires maximum detail resolution and therefore thin slices. Before the advent of MDCT, HRCT consisted of thin-section CT obtained with 1 mm slice thickness at 10 or even 20 mm gaps. The rationale behind such a protocol was that analysis of a diffuse parenchymal process does not require continuous coverage. Secondly, discontinuous scanning justified the relatively high dose used to achieve excellent signal-to-noise ratio, image quality, and thus detail resolution. Several developments have contributed to the fact that, over the last years, discontinuous HRCT acquisition has been increasingly replaced by volumetric data acquisition; they will be discussed more extensively below. In short, these developments refer to:
(a) Modern MDCT scanners allow for the acquisition of volumetric HRCT with high image quality at acceptable dose levels, which, furthermore, have been continuously decreasing over the last decades due to improved detector technology and advanced reconstruction algorithms (see also "iterative noise reconstruction").
(b) Modern scanners perform faster, allowing for a single continuous scan in deep breath-hold instead of acquiring discontinuous slices with multiple scans that require repetitive breath-hold maneuvers.
(c) Volumetric 2D and 3D display techniques such as multiplanar reconstructions (MPR) and maximum and minimum intensity projections (MIP and MinIP), as well as advanced volumetric quantification techniques, became possible only with continuous volumetric data acquisition.
(d) Volumetric scans allow for an easier and also more precise comparison of disease development over time in follow-up studies, because exactly matching slices can be compared.
(e) Volumetric scans will also capture subtle and focal disease, which is potentially missed when data are acquired with large gaps in between.
Nevertheless, a questionnaire among members of the European Society of Thoracic Imaging carried out in 2013 revealed that a subgroup of radiologists (15 %) still uses discontinuous HRCT for the analysis of interstitial lung diseases (Prosch et al. 2013). To what extent aspects such as the available scanner technology, the expected image quality, or an unwillingness to give up old but familiar techniques play a role in this remains unclear.
Scan Collimation and Slice Reconstruction
Two essential factors constitute a "high-resolution" CT study: firstly, thin axial slices using a narrow detector width (0.5-1.25 mm) and reconstruction of 1-1.5 mm thick slices (Fig. 1) and, secondly, reconstruction of the scan data with a high-spatial-frequency (sharp or high-resolution) algorithm (Muller 1991). Whether the whole lung can be covered within one breath-hold with 1 mm collimation width depends on the speed of data acquisition. While a four-slice CT scanner still needed more than 25 s to cover a chest length of 30 cm, a 16-slice scanner already provided the technical basis to cover the thorax (30 cm length) in less than 15 s when a table feed of 1.5 mm was used. A 64-slice scanner allowed for a scan time below 10 s. The most modern scanners (128-slice scanners and beyond) allow for coverage of the chest in less than 3 s while acquiring data with isotropic submillimeter resolution. Thus, the speed of data acquisition within one breath-hold no longer represents a limitation.
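The scan-time figures quoted above follow from simple arithmetic on table speed, as in the hedged sketch below; the rotation times, collimations, and pitch values are illustrative assumptions, not parameters given in the chapter.

```python
# Back-of-the-envelope helical scan-time estimate for the figures quoted in the
# text (e.g., ~30 cm chest coverage). Rotation time, collimation, and pitch
# below are illustrative assumptions.

def helical_scan_time(coverage_mm, n_rows, row_width_mm, pitch, rotation_s):
    """Scan time = coverage / table speed, with
    table speed = pitch * total collimation / rotation time."""
    total_collimation = n_rows * row_width_mm              # beam width in z (mm)
    table_speed = pitch * total_collimation / rotation_s   # mm per second
    return coverage_mm / table_speed

# Example: 16 x 0.75 mm collimation, pitch 1.2, 0.5 s rotation, 300 mm chest.
print(round(helical_scan_time(300, 16, 0.75, 1.2, 0.5), 1), "s")   # ~10.4 s

# Example: 128 x 0.6 mm collimation, pitch 1.2, 0.3 s rotation.
print(round(helical_scan_time(300, 128, 0.6, 1.2, 0.3), 1), "s")   # ~1.0 s
```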
The ability of HRCT to provide high morphological detail of normal and abnormal lung parenchyma is based on high-quality examinations. With optimal scan technique, the spatial resolution is as low as 0.5 mm. Due to the high contrast within the lung parenchyma, even structures as small as 0.2 mm can be visualized (Murata et al. 1989). Thus, pulmonary artery branches down to the 16th generation and bronchi down to the 8th generation can be depicted. Since partial volume averaging effects at the margins of such small structures are minimized, HRCT provides a very accurate image of their true size. This represents the basis for CT-based quantification, e.g., of bronchial wall thickness and airway lumen in COPD patients. Since this high resolution is available isotropically, meaning in all three directions, the diameters of vessels, lung nodules, or obliquely oriented bronchi are accurately reflected, irrespective of their location in or near the scan plane or even perpendicular to the scan plane (Fig. 2). Isotropic resolution in all three dimensions allows for axial, coronal, and sagittal reconstructions with equally high detail resolution. (Fig. 2 caption: while pattern analysis is done on axial slices, the multiplanar reconstructions (MPR) nicely demonstrate the subpleural and craniocaudal distribution of disease in this patient with systemic sclerosis.) Spatial resolution is further increased by the application of a high-spatial-frequency reconstruction algorithm (Mayo et al. 1987). Standard algorithms smooth the image, so that visible image noise is reduced and contrast resolution increased. Sharp, high-spatial-frequency, or high-resolution algorithms, on the other hand, reduce image smoothing and increase spatial resolution. Anatomic margins and tissue interfaces, such as the fissures, pleura, or septa, appear sharper. Small vessels and bronchi are seen better than with a standard algorithm. Reducing the field of view results in smaller pixel sizes and thus increases spatial resolution. In general, the field of view should be adjusted to the size of the lungs, usually resulting in a spatial resolution of 0.5-0.3 mm. To ensure that the field of view does not cut off any parts of the lung, it is usually limited by the diameter of the external cortex of the ribs. To further increase spatial resolution, targeting of the field of view to a single lung or to particular lobes or regions can be performed. Such an approach may be used for minute evaluation of the parenchyma or peripheral bronchi beyond the regular evaluation of images demonstrating both lungs. However, with this approach, the spatial resolution will be limited by the intrinsic resolution of the detectors, and whether it gains diagnostic information will largely depend on personal preferences. There is no literature reference that generally recommends this approach, not even for certain indications. In addition, it requires additional reconstruction time, the raw scan data must be saved until targeting is performed, and it precludes the ability to compare both lungs on the same image.
Dose Aspects
One of the major arguments not to change from discontinuous HRCT (10 or 20 mm gap) to continuous volumetric data acquisition was the associated increase in dose. Since 1 mm thin sections need a certain signal-to-noise level to provide acceptable diagnostic image quality, it was inevitable that volumetric data acquisition would deliver a higher dose to the patient. 3D dose modulation - automatically adapting the delivered dose in the transverse plane (xy-axis) and along the patient length (z-axis) - is an effective means to reduce dose by about 30 % and is strongly recommended (Kubo et al. 2014) (Table 1). The abovementioned survey in 2013 confirmed that 90 % of the respondents indeed apply it. Additional options are to adapt the protocol to patient weight, patient age, or scan indication. In principle, a tube voltage of 120 kV is recommended, but in young patients or patients of lower body weight, a tube voltage of 100 kV can be applied, further contributing to dose saving. The tube current-time product is mostly set around 100 mAs. Ultimately, a dose between 1.5 and 4 mSv should be aimed for (Fig. 3). (Fig. 3 caption: impact of dose - the right-sided image (b) was obtained with double the acquisition dose compared to the left-sided image (a) (4.5 mGy versus 6.8 mGy); no iterative noise reconstruction was applied.) There is a multitude of publications evaluating low-dose protocols (40-60 mAs), most of them for the detection of nodules or within a screening setting. Those results cannot be directly transferred to the diagnostic workup of DILD, in which detection of ground-glass opacities and fine septal thickening is required. Christe et al. systematically evaluated the efficiency of a low-dose protocol for the detection of common patterns of pulmonary disease, evaluating 1 mm slices reconstructed with filtered back projection (FBP) and a high-resolution kernel. They concluded that a 120 kVp/40 mAs protocol was feasible for detecting solid nodules, air space consolidations, and airway and pleural diseases; however, pathologies consisting of ground-glass opacities and interstitial opacities required a higher tube current or iterative noise reconstruction (Christe et al. 2013).
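To relate the 1.5-4 mSv target to the quantities a scanner reports, the following sketch applies the standard DLP-based estimate of effective dose. The chest conversion factor k of roughly 0.014 mSv/(mGy·cm) is a commonly used literature value rather than one given in the chapter, and treating the mGy figures from the Fig. 3 caption as CTDIvol is an assumption, so the numbers are indicative only.

```python
# Rough illustration of how the 1.5-4 mSv target quoted above relates to
# scanner dose metrics. k ~ 0.014 mSv/(mGy*cm) for adult chest CT is a
# commonly used literature value (not given in the chapter), and the mGy
# figures from the Fig. 3 caption are assumed to be CTDIvol.

def effective_dose_msv(ctdi_vol_mgy, scan_length_cm, k=0.014):
    """Effective dose estimated as DLP (CTDIvol x scan length) times k."""
    dlp = ctdi_vol_mgy * scan_length_cm
    return dlp * k

print(round(effective_dose_msv(4.5, 30), 2), "mSv")   # ~1.9 mSv for 4.5 mGy over 30 cm
print(round(effective_dose_msv(6.8, 30), 2), "mSv")   # ~2.9 mSv for 6.8 mGy over 30 cm
```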
New iterative reconstruction (IR) algorithms allow for greater noise reduction than standard filtered back projection (FBP) and consequently for more effective dose reduction. While increased spatial resolution is directly correlated with increased image noise in standard filtered back projection, iterative reconstruction allows spatial resolution and image noise to be decoupled to a certain extent. Once an image has been reconstructed from the measured projections, this image itself is used as the "scan object" in a simulated CT measurement of the same projections, resulting in an image of calculated projections. The differences between the measured and calculated projections yield correction projections, which are subsequently used to update the original image. This process is repeated until the difference between the calculated and measured projections is smaller than a predefined limit. With each update of the original image, image-processing algorithms enhance spatial resolution in higher-contrast areas of the image and reduce noise in low-contrast areas. While the first generations of IR produced images of lower noise, they were criticized for modifying the visual appearance of the images, which appeared either smoothed or pixelated, especially with increased weighting of iterative noise reconstruction (Pontana et al. 2011; Prakash et al. 2010) (Fig. 4). The second generation of so-called model-based IR - active in the raw data space - aims at reducing noise while maintaining image sharpness, and thus has less impact on the visual image impression. Some studies have evaluated the visualization of certain elements of lung infiltration and diffuse lung disease: improved detection of ground-glass opacities, pulmonary nodules, and emphysema had already been reported with first-generation IR in vivo (Pontana 2011) and in experimental studies (Christe et al. 2012). Similarly, an excellent inter-method agreement between IR and FBP images for the detection of emphysema, ground-glass opacities, bronchiectasis, honeycombing, and nodules has been described (Ohno et al. 2012). No study evaluating the impact on the diagnostic evaluation of diffuse interstitial lung disease has been published yet, but it can nevertheless be anticipated that modern IR allows for a substantial (around 50 %) dose reduction of volumetric HRCT for DILD, which represents an important step forward in terms of radiation protection, especially for young patients and patients with multiple follow-up studies.
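The update loop described above can be sketched schematically as follows; the forward operator, step size, and stopping rule are toy stand-ins (a Landweber/SIRT-style update on an arbitrary linear system), not any vendor's reconstruction algorithm.

```python
import numpy as np

# Toy illustration of the iterative loop described above: forward-project the
# current image, compare simulated with measured projections, and feed the
# difference back as a correction until it falls below a preset limit. The
# scanner geometry is replaced by an arbitrary linear operator A, so this is
# a schematic sketch only.

rng = np.random.default_rng(0)
n_pixels, n_projections = 64, 96

A = rng.standard_normal((n_projections, n_pixels))   # stand-in forward projector
true_image = rng.random(n_pixels)
measured = A @ true_image                            # "measured" projections

x = np.zeros(n_pixels)                               # initial image estimate
step = 1.0 / np.linalg.norm(A, ord=2) ** 2           # conservative step size
tolerance = 1e-3

for iteration in range(5000):
    simulated = A @ x                                # simulated CT measurement
    residual = measured - simulated                  # correction projections
    if np.linalg.norm(residual) < tolerance:         # stop when the difference is small
        break
    x = x + step * (A.T @ residual)                  # update the image estimate

print(iteration, np.linalg.norm(x - true_image))
```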
Prone Position
In the normal lung, with the patient supine, there is a gradual increase in attenuation and vessel size from the ventral to the dorsal lung regions. This attenuation gradient is caused by the effect of gravity on blood flow and gas volume as well as by some non-gravity-dependent effects. The density gradient is accentuated on expiration (Verschakelen et al. 1998). Hypoventilation and atelectasis in the dependent lung can cause areas of dependent density or subpleural lines, which can mimic early lung fibrosis. With the patient in prone position, these hydrostatic densities immediately disappear, while abnormalities caused by real pathology remain. Therefore, in some cases it may be necessary to obtain images in prone position to differentiate actual disease from physiologically dependent densities or atelectasis. This is especially true when the detection of ground-glass opacities or curvilinear subpleural lines is of diagnostic relevance (e.g., in patients with subtle disease or in patients with asbestosis). Publications dating from more than 15 years ago - thus obtained with slower scanners and discontinuous HRCT technique - found that prone scanning was useful in almost 20 % of patients (Volpe et al. 1997). This proportion is certainly too high given the fast scanning technique available today, and prone scanning is no longer indicated routinely. Some colleagues may prefer it in selected cases, e.g., patients with questionable, subtle disease exclusively in the dorsobasal parts of the lung, where its presence or absence would be decisive. A questionnaire carried out in 2013 did not reveal a real consensus about the use of additional CT acquisitions in the prone position, though it was recommended that they be performed on demand only (Prosch et al. 2013). This, however, requires the scans to be closely monitored or the patient to be called back for additional scanning. In patients with emphysema, airway disease, or diffuse obstructive lung disease, prone scans are usually not needed.
Expiratory Scans
Scans are routinely obtained in full inspiration with the lungs fully expanded, which optimizes the contrast between low-attenuation aerated air spaces and high-attenuation lung structures and various abnormalities. At the same time, full inspiration minimizes the frequency of confounding densities due to transient atelectasis. Additional expiratory HRCT scans have proved useful in the evaluation of patients with a variety of obstructive lung diseases. Focal or diffuse air trapping may be detected, which can be essential in the differentiation of large or small airway disease and emphysema (Kauczor et al. 2011). The guidelines of the British Thoracic Society recommend the routine use of expiratory scans in patients' initial HRCT evaluation (Wells and Hirani 2008). The rationale is twofold: air trapping can be valuable for the differential diagnosis and may be appreciated on an expiratory scan even in the absence of inspiratory scan abnormalities, and the functional cause of respiratory disability is not always known, especially during the initial diagnostic phase. Within the context of the management of interstitial lung disease, the British Thoracic Society suggests that after a diagnosis has been established, additional expiratory CT acquisitions should only be performed to evaluate inconclusive findings on inspiratory CT. Because of dose considerations, most investigators obtain expiratory scans at predetermined levels or in discontinuous clusters (Prosch et al. 2013). There are various ways to plan these scans: either in areas of pathology seen in inspiration or at specific predefined levels following anatomic landmarks (e.g., carina, aortic arch) in order to facilitate reproducibility in follow-up scans. How many levels need to be covered is unclear; the number ranges from 2 to 5. Alternatively, the expiratory scan can also be obtained with volumetric data acquisition. This has the advantage of decreasing the risk of motion artifacts frequently seen in discontinuous expiratory scans, allows for quantification on a 3D basis, and allows for more precise, level-matched follow-up.
Since the only purpose of these scans is the detection, localization, and quantification of air trapping, they can be obtained with a drastically reduced dose. Multiple studies have shown that tube current levels as low as 40 mAs are sufficient for the intended diagnostic purpose. Bankier et al. carried out a systematic comparison of expiratory CT scans obtained with 120 kV and 80 mAs and simulated 60, 40, and 20 mAs scans and found that - though diagnostic confidence went down with decreasing acquisition dose and interreader variability went up - diagnostic accuracy was not affected (Bankier et al. 2007). Similar results have been published by other authors (Nishino et al. 2004). Scans after full expiration are obtained to display lobular areas of air trapping (Fig. 5). Air trapping refers to lobular, demarcated areas of hyperlucency caused by air trapped in expiration by a check-valve mechanism of the small airways. Consequently, the involved secondary lobule will neither decrease in volume nor increase in attenuation during expiration, in contrast to the surrounding uninvolved lung parenchyma. Areas of air trapping are more easily visible in expiration than in inspiration, and in some patients air trapping may be seen exclusively in expiration. The finding of air trapping, as an indirect sign of small airway disease, is an important diagnostic finding in all diseases with an obstructive or combined obstructive/restrictive lung function impairment. Diseases in which the finding of air trapping, and thus expiratory CT scans, represent an important part of the diagnostic workup are exogenous allergic alveolitis and collagen vascular diseases such as Sjögren's disease and rheumatoid arthritis, but also sarcoidosis and diseases with predominant airway pathology such as asthma and cystic fibrosis. Sharply demarcated areas of air trapping need to be differentiated from ill-defined areas of varying parenchymal density in expiration. The latter are seen frequently and have been interpreted as "inhomogeneous emptying" of the lung in expiration. Correlation of scans obtained during inspiration and expiration, illustrating the unaltered volume and density of lobules with air trapping, represents the diagnostic clue.
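As a simple example of the kind of 3D quantification that volumetric expiratory acquisitions allow, the sketch below scores air trapping as the fraction of lung voxels below a density threshold on the expiratory scan. The -856 HU cutoff is a value used in the air-trapping literature rather than one given in this chapter, and the input arrays are assumed to exist.

```python
import numpy as np

# Hedged sketch of a voxel-threshold approach to quantifying air trapping on
# an expiratory scan. The -856 HU cutoff is a literature value (not from this
# chapter); `expiratory_hu` and `lung_mask` are assumed inputs.

def air_trapping_percentage(expiratory_hu, lung_mask, threshold_hu=-856):
    """Percentage of lung voxels at or below `threshold_hu` on expiration."""
    lung_voxels = expiratory_hu[lung_mask]
    return 100.0 * np.mean(lung_voxels <= threshold_hu)

# Toy example volume: 20x20x20 voxels around -750 HU with a "trapped" block.
volume = np.full((20, 20, 20), -750.0)
volume[5:10, 5:10, 5:10] = -900.0          # lobular area that stays lucent
mask = np.ones_like(volume, dtype=bool)    # pretend the whole cube is lung
print(round(air_trapping_percentage(volume, mask), 1), "% of lung voxels")
```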
Expiratory CT scans are also used for the interpretation of a mosaic pattern seen in inspiration. A mosaic pattern is defined as areas of varying density, sharply demarcated by interlobular septa. Expiratory CT scans are very helpful in differentiating whether the area of increased attenuation represents ground glass (e.g., caused by an acute alveolar or interstitial process) or whether the area of decreased attenuation represents air trapping caused by bronchiolitis. An increased contrast between hypo- and hyperattenuated areas in expiration identifies air trapping as the pathology and bronchiolitis as the underlying disease. Fewer vascular structures are visible in the "black" areas of hypoattenuation due to the Euler-Liljestrand reflex, which causes vascular constriction in areas of lower ventilation; importantly, there are no signs of pulmonary hypertension. A decreased contrast between hypo- and hyperattenuated areas in expiration identifies ground glass as the pathology and an acute alveolar/interstitial process as the underlying disease; there is no difference in vascular caliber between the areas of different attenuation. Thirdly, pronounced differences in vascular diameter between the areas of different attenuation indicate true mosaic perfusion. Additional signs of pulmonary hypertension, such as dilated central pulmonary arteries and a pathological arterio-bronchial ratio, may be present; the underlying disease is recurrent pulmonary embolism. Though discrimination of the different underlying diseases is also possible by analyzing the vascular diameters alone, many radiologists consider the increased contrast differences caused by air trapping the most valuable and reassuring information. Instead of acquiring the CT scans after full expiration in suspended respiration, other authors have proposed acquiring the data during forced expiration, more specifically performing data acquisition during the dynamic process of forced expiration. While end-expiratory air trapping in particular might be seen with higher sensitivity, the risk of considerable breathing artifacts hampering image quality has to be weighed against the potentially increased diagnostic information. Dynamic scans were first introduced using electron beam CT; however, they can also be performed with any MDCT scanner with a gantry rotation time of 1 s or less. Because images can be reconstructed at any time point during the scan, the temporal resolution is even higher than with electron beam CT. Mostly, continuous imaging is performed at a single axial level for 6-8 s as the patient expires rapidly. The acquisition dose is drastically reduced (usually 40 mAs). Lucidarme et al. (2000) found a significantly higher density difference between the different areas of attenuation and a higher extent of air trapping in dynamic scans as opposed to static scans, but the number of patients for whom dynamic scan acquisition changed the diagnosis has been found to be small (Gotway et al. 2000). A detailed instruction of the patient is important to avoid motion artifacts and to assure that the scan acquisition takes place in the desired respiratory state. A visual qualitative check of the inspired or expired state is possible by analyzing the shape of the trachea: in full inspiration the trachea demonstrates a round shape, while in expiration the dorsal membranous part of the trachea bulges ventrally and intraluminally to a varying extent. An excessively strong deformation, producing a considerable reduction of the tracheal area and a moon-like (lunate) shape, is associated with tracheomalacia, which can be a hint toward severe obstructive lung disease (O'Donnell et al. 2014).
Motion Artifacts and ECG Gating
Motion artifacts caused by non-suspended respiration are common and can severely hamper a meaningful interpretation of the images. Respiratory motion leads to blurring of normally sharp details, pseudo-ground-glass opacities, and linear streaks or star artifacts arising from the edges of vessels and other high-density structures. On lung window settings, gross respiratory motion artifacts are normally easily recognizable, and while they degrade image quality, they will not cause misinterpretation because of their obviousness. Subtle motion-related unsharpness and ground-glass opacities, however, may mimic an interstitial process, and doubling of vascular structures can mimic thickened interlobular septa or the walls of a dilated bronchus. Dedicated and detailed instruction of the patient on how to breathe in and out deeply, and especially on holding the breath before data acquisition, is therefore an important step toward high image quality (Vikgren et al. 2008).
Cardiac pulsation artifacts typically affect the paracardiac regions of the left lower lobe and, to a lesser extent, the lingula and middle lobe. Aortic pulsation may affect lung areas adjacent to the aortic arch or the descending thoracic aorta, i.e., segments six and ten of the left lower lobe. In selected cases these pulsation artifacts may be misleading and can cause false-positive findings. Typical artifacts are double contours of the bronchial walls mimicking bronchiectasis and hyperlucencies close to arteries mimicking emphysema. Usually they do not cause diagnostic problems if correlated with the blurred heart contour. Options to reduce pulsation artifacts are reduced gantry rotation time, segmented reconstruction, or ECG triggering of the scan acquisition. Reduced gantry rotation time and segmented reconstruction are means to increase scanning speed, but they go along with decreased dose delivery and thus increased image noise. Prospective ECG triggering leads to a significant prolongation of the measurement time, which interferes with the breath-hold capabilities of many patients. Retrospective cardiac gating would therefore be the method of choice to avoid disturbing pulsation artifacts, at the expense of a markedly increased acquisition dose. In principle, techniques for coronary CT imaging can be used, without contrast administration and with adaptation of the FOV to cover both lungs and adaptation of the acquisition dose. Similarly to motion-free imaging of the coronaries, motion-free imaging of the lung parenchyma would be achieved. Nevertheless, the few studies that compared image quality with and without ECG triggering - carried out with relatively small groups of patients - found significantly increased image quality based on artifact reduction, however without relevant impact on diagnostic performance or confidence (Boehm et al. 2003). With the fast rotation times of most scanners used today, cardiac pulsation artifacts are significantly reduced, and there appears to be no indication for ECG triggering in the diagnostic workup of interstitial lung diseases.
Image Display and Processing
Windowing
There is no single correct or ideal window setting for evaluating the lung parenchyma. Window settings have to be optimized with regard to the settings of scanners and monitors. Several combinations of window level and window width may be appropriate, and individual modifications based on personal preferences play a role. Nevertheless, it is advisable to use a chosen lung window setting consistently in all patients to optimize comparison between different patients and between sequential examinations of the same patient. It is also very important to use window settings consistently in order to develop a visual default and thus an understanding of normal and abnormal findings. Additional window settings may be useful in specific cases, depending on what abnormality is in question. For the assessment of a routine lung examination, window level settings ranging from −600 to −700 HU and window widths of 1000-1500 HU are recommended. A wider window width (e.g., 2000 HU) may be applied for the evaluation of overall lung attenuation and high-contrast interfaces, especially of peripheral parenchymal abnormalities along the pleural interfaces. Wide windows are therefore advised, for example, for the assessment of asbestosis.
Low window level settings (e.g., −800 to −900 HU) and a narrow window width (500 HU) facilitate the detection of subtle attenuation differences and are therefore suited for the detection of emphysema, air trapping, or air-filled cystic lesions. The window setting has a substantial effect on the accuracy of size measurements. This is particularly important for the assessment of bronchial lumen diameter and bronchial wall thickness. It has been demonstrated that an intermediate window width between 1000 and 1400 HU, together with a window level between −250 and −700 HU, best reflects the true size of the bronchi and especially the thickness of the bronchial wall (Bankier et al. 1996) (Fig. 6). As a result, it might be necessary to use different window settings for identifying pathological changes of the parenchyma or the pleura on the one hand and for the measurement of bronchial diameter and wall thickness on the other. Window level settings of 40-50 HU and window width settings of 350-450 HU are generally recommended for evaluation of the mediastinum, the hila, and the pleura.
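The effect of these settings can be made concrete with the standard clip-and-scale windowing operation, sketched below using the lung and mediastinal settings quoted above; the function itself is generic illustration code, not from the chapter.

```python
import numpy as np

# Sketch of how a window level/width setting maps HU values to display gray
# levels. The clip-and-scale formula is the standard windowing operation; the
# example values follow the settings quoted in the text.

def apply_window(hu, level, width):
    """Map HU values to [0, 1] display intensities for a given window."""
    low, high = level - width / 2.0, level + width / 2.0
    return np.clip((hu - low) / (high - low), 0.0, 1.0)

hu_samples = np.array([-950, -856, -700, -300, 40])       # example voxel values
print(apply_window(hu_samples, level=-600, width=1500))   # lung window
print(apply_window(hu_samples, level=40, width=400))      # mediastinal window
```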
Multiplanar Reformations
Multidetector HRCT produces an isotropic dataset that allows for contiguous visualization of the lung parenchyma in three dimensions and for the creation of multiplanar two-dimensional (2D) reconstructions in any arbitrary plane. Mostly, planar coronal and sagittal reconstructions are routinely performed and considered standard for the reconstructed series of any HRCT dataset (Prosch et al. 2013; Beigelman-Aubry et al. 2005; Walsh et al. 2013) (Fig. 2). Coronal reformations facilitate the display of disease distribution, e.g., the craniocaudal gradient representing a key finding in idiopathic pulmonary fibrosis (IPF) or the upper lobe predominance in sarcoidosis or Langerhans cell histiocytosis (LCH). Coronal MPRs are also preferred by many clinicians because they facilitate anatomic orientation (Eibel et al. 2001a) and produce images more easily comparable to chest radiographs. Advantages of sagittal MPR include the sharper delineation of the interlobar fissures and thus an improved anatomic localization of lesions close to the lobar border or of lesions with transfissural extent (Eibel et al. 2001a, b). Sagittal images also ease the evaluation of the thoracic spine and the dorsal costophrenic angle. For the dedicated evaluation of the relationship between parenchymal lesions and airways, curved MPR following the course of the branching tubular airways can be useful (Lee and Boiselle 2010). Both coronal and sagittal MPR are superior to the axial scans alone in illustrating the location and extent of abnormalities situated in the central tracheobronchial system. Though there is a high level of concordance between reading coronal and axial slices with regard to the identification of parenchymal disease, there is general agreement that coronal or sagittal MPRs have a complementary role but are not regarded as the principal scan plane used for diagnostic evaluation.
Maximum Intensity Projection
Maximum intensity projection (MIP) is a 3D display technique displaying the voxel of maximum intensity along the path of the X-rays. To retain spatial information, MIPs are usually reconstructed with a thickness between 5 and 10 mm, depending on the indication. Since they largely facilitate the differentiation between tubular (vascular) structures and nodular densities, their major advantage lies in demonstrating the distribution of nodules (Remy-Jardin et al. 1996; Beigelman-Aubry et al. 2005). The analysis of the nodule distribution pattern in relation to the secondary lobule (e.g., perilymphatic, centrilobular, or random) is one of the key elements of the differential diagnosis; however, especially in cases with subtle findings and low numbers of nodules or, on the contrary, in cases with a very high number of nodules, it can be difficult to assess their distribution on axial thin slices alone, since vascular structures and nodules have the same appearance (Fig. 7). Similarly, MIPs are very useful for the detection of (solitary) nodular densities, e.g., metastases. MIPs were found to offer a large advantage especially in the central parts of the lungs; they proved to be significantly superior to regular MPR, with over 25 % additional findings and increased diagnostic confidence (Peloschek et al. 2007) (Fig. 8). In one study, 103 patients with suspicion or evidence of pulmonary nodules underwent MDCT with a collimation of 1 mm. MIP and MPR were reconstructed in all three planes. The MIPs were superior in the depiction of pulmonary nodules at a statistically significant level. Additional lesions were identified with MIP that were missed with transaxial slices and MPR. The improvement by MIP was based on the identification of nodules smaller than 5 mm in diameter. The improvement by MIP also led to an increase in diagnostic confidence (Eibel et al. 2001b). (Figure captions: MIPs in a patient with diffuse tree-in-bud due to infectious bronchiolitis; nodular densities (here small granulomas) in MIPs.) In a different study, MIPs led to the detection of additional findings in 27 % of patients with nodular disease (Gavelli et al. 1998). Finally, MIP sections of variable thickness allow assessing the size and location of pulmonary vessels. Recognizing enlarged pulmonary veins is useful in differentiating the diagnosis of pulmonary edema from other causes of diffuse ground-glass attenuation. In the case of a mosaic attenuation pattern, MIP contributes to the differentiation of ground-glass attenuation (normal vascular diameter) from mosaic perfusion (altered vascular diameter). Lastly, MIP can help to differentiate between constrictive bronchiolitis and mixed emphysema (Beigelman-Aubry et al. 2005).
Minimum Intensity Projection
Minimum intensity projections (MinIPs) are less commonly used, but have been shown to facilitate the assessment of lung disease associated with decreased attenuation. MinIPs are created by projecting the voxel with the lowest attenuation value for every view throughout the volume onto a 2D image. It has been demonstrated that MinIP enhances the visualization of air trapping as a result of small airway disease, yielding not only increased observer confidence but also increased interreader agreement as compared to HRCT alone (Fig. 9). MinIPs revealed additional findings in 8 % of patients with emphysema and in 25 % of cases with ground-glass opacities (Gavelli et al. 1998). These results have been confirmed by another study in which MinIP improved the detection of pulmonary cysts and their differentiation from honeycombing (Vernhet et al. 1999). There is a subtle difference in density between the endobronchial (pure) air and the lung parenchyma (HU difference 50-150). This allows visualization of the bronchi below the subsegmental level (Beigelman-Aubry et al. 2005) (Fig. 10). Recently, more attention has been paid to the potential of MinIPs to facilitate the differentiation of bronchiectasis from honeycombing. The presence of honeycombing represents a key finding for the diagnosis of UIP/IPF (Raghu et al. 2011), but a relatively large interreader disagreement, even between experienced radiologists, has been described (Watadani et al. 2013).
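Slab-wise MIP and MinIP as described above amount to taking the maximum or minimum along z over thin slabs, as in this toy sketch (the volume, voxel size, and helper names are invented for illustration).

```python
import numpy as np

# Minimal sketch of slab-wise maximum and minimum intensity projections
# (5-10 mm slabs along the z-axis, as described in the text). The toy volume
# and the 1 mm isotropic voxel assumption are illustrative only.

def slab_projection(volume, slab_thickness_vox, kind="max"):
    """Collapse consecutive z-slabs of `volume` (z, y, x) by max or min."""
    reduce_fn = np.max if kind == "max" else np.min
    slabs = []
    for start in range(0, volume.shape[0], slab_thickness_vox):
        slab = volume[start:start + slab_thickness_vox]
        slabs.append(reduce_fn(slab, axis=0))
    return np.stack(slabs)        # one projected image per slab

# Toy CT-like volume: lung-density background with a bright "nodule" and a
# dark "cyst" embedded.
volume = np.full((60, 128, 128), -800.0)
volume[30:33, 60:63, 60:63] = 50.0       # small solid nodule
volume[10:15, 20:26, 20:26] = -970.0     # air-filled cyst

mip = slab_projection(volume, slab_thickness_vox=8, kind="max")    # ~8 mm MIP
minip = slab_projection(volume, slab_thickness_vox=8, kind="min")  # ~8 mm MinIP
print(mip.shape, mip.max(), minip.min())
```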
Quantitative CT as Imaging Biomarker
CT is increasingly being used to stage and quantify the extent of DILD, both in clinical practice and in treatment trials. The continuous data acquisition of CT with high isotropic resolution provides unique options for computer-supported quantification of diffuse lung disease. There is a history of quantifying emphysema and airway disease in COPD patients, with the ultimate goal of identifying different phenotypes in the spectrum of the disease. Quantification of DILD is more challenging for several reasons:
• DILD often consists of a mix of various patterns that show considerable overlap between different DILDs, rendering it difficult to clearly distinguish one from the other.
• The characterization of the pattern is influenced by inspiration depth, motion artifacts, and overlying other diseases such as infection.
• The extent of architectural distortion in all three spatial dimensions and within the lobular anatomy is difficult to assess.
Multiple studies have shown that the type and extent of parenchymal changes, including accompanying airway pathology, correlate with lung function, disease progression, response to therapy, and, last but not least, disease prognosis. It has to be stated, however, that the extent of correlation is very variable for the different diseases and highly dependent on the type of pathology and thus the appreciated HRCT pattern. Goh and colleagues proposed an interesting concept combining a relatively simple visual quantification with pulmonary function (Goh et al. 2008). Firstly, the disease extent on HRCT was classified as limited or extensive (less or more than 20 % involved lung area). For the subgroup of patients in whom the disease extent was not readily classifiable on HRCT (so-called indeterminate cases), the distinction between limited and extensive was based on an FVC threshold below or above 70 % predicted (FVC = forced vital capacity). Using this relatively simple two-step approach, the authors were able to discriminate two groups of patients with significantly different outcomes and thus prognosis. A similar approach combining the extent of disease on HRCT with pulmonary function tests has been successfully applied to patients with sarcoidosis and connective tissue diseases. For example, the severity of traction bronchiectasis, the extent of honeycombing, and DLco were found to be strongly associated with mortality in connective tissue disease-related fibrotic lung disease (Walsh et al. 2014a, b). Similarly, an extent of fibrosis exceeding 20 %, the diameter ratio between the aorta and the main pulmonary artery, and a composite score of pulmonary function tests formed a significantly more effective prediction of mortality in sarcoidosis patients than any of the factors individually (Walsh et al. 2014a).
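The two-step staging logic attributed to Goh et al. (2008) can be written down directly; the sketch below follows the 20 % extent and 70 % FVC thresholds from the text, while the function name, arguments, and the explicit "indeterminate band" are illustrative assumptions.

```python
# Hedged sketch of the two-step staging described in the text: classify disease
# extent on HRCT, and fall back on an FVC threshold of 70% predicted when the
# HRCT extent is indeterminate. Names and the indeterminate band are invented
# for illustration; the thresholds follow the text.

def goh_stage(hrct_extent_percent, fvc_percent_predicted,
              indeterminate_band=(15.0, 25.0)):
    """Return 'limited' or 'extensive' disease.

    `indeterminate_band` is an assumption standing in for cases whose HRCT
    extent cannot be confidently called below or above 20%.
    """
    low, high = indeterminate_band
    if hrct_extent_percent < low:
        return "limited"
    if hrct_extent_percent > high:
        return "extensive"
    # Indeterminate on HRCT: use the FVC threshold from the text.
    return "limited" if fvc_percent_predicted >= 70.0 else "extensive"

print(goh_stage(10.0, 85.0))   # clearly limited on HRCT
print(goh_stage(20.0, 62.0))   # indeterminate extent, low FVC -> extensive
print(goh_stage(35.0, 80.0))   # clearly extensive on HRCT
```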
There has been increasing interest in the use of automatic CT quantification as an imaging biomarker to document disease response to treatment, disease progression, and eventually prognosis. Such a biomarker should be measurable but also reproducible and linked to relevant clinical outcome parameters (Goldin 2013). To meet the first two demands, rigorous standardization of imaging protocols over several sites and time points is needed, including centralized scanner and image quality checks. The underlying technique of automatic, texture analysis-based CT quantification is beyond the scope of this book chapter. A number of studies have evaluated the use of a computer-derived quantitative lung fibrosis score (QLF) for the assessment of baseline disease extent and of changes over time in patients with IPF (Kim 2015) and in scleroderma patients treated with cyclophosphamide (Kim 2011). While this algorithm focuses solely on fibrotic changes, a different approach is used by the so-called CALIPER software (computer-aided lung informatics for pathology evaluation and rating), which assigns a given group of neighboring voxels to one of the basic patterns (normal, emphysema, ground glass, honeycombing, and reticular) for the complete lung volume (Bartholmai 2013). A recently published position paper by the Fleischner Society (Hansell et al. 2015) acknowledges that CT has the potential not only to select the most appropriate patients for inclusion in treatment trials but also to serve as a study end point in conjunction with other markers. This recent development has been fueled by the increasing precision with which image-processing software can quantify diffuse lung diseases.
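The output of texture-based tools such as CALIPER is, in essence, a per-pattern volume summary; the toy sketch below shows that final tabulation step only, with a random label map standing in for the actual (and much harder) voxel classification.

```python
import numpy as np

# Toy sketch of the per-pattern volume summary produced by texture-based
# quantification tools: given a per-voxel pattern label map (however it was
# obtained), report the percentage of lung volume per basic pattern. The label
# map and mask here are random and purely illustrative.

PATTERNS = ["normal", "emphysema", "ground glass", "honeycombing", "reticular"]

def pattern_percentages(label_map, lung_mask):
    """Percentage of lung voxels assigned to each pattern index."""
    labels = label_map[lung_mask]
    return {name: 100.0 * np.mean(labels == i) for i, name in enumerate(PATTERNS)}

rng = np.random.default_rng(0)
labels = rng.integers(0, len(PATTERNS), size=(40, 64, 64))   # fake label map
mask = np.ones_like(labels, dtype=bool)                      # fake lung mask
for name, pct in pattern_percentages(labels, mask).items():
    print(f"{name}: {pct:.1f}%")
```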
In general, the secondary lobule measures between 1 and 2.5 cm in diameter, being bigger and more rectangular in the periphery and smaller and more hexagonal or polygonal in the central lung. The centrilobular core structures are formed by the pulmonary artery branch, the bronchiolus terminalis, lymphatics, and some connective tissue. It is important to note that under normal conditions, only the central pulmonary artery is visible on HRCT. The bronchiolus terminalis is below the spatial resolution and not visible under normal circumstances. Interlobular or perilobular septa represent the outer margin of the secondary lobule. They contain lung veins and lymphatics. In the peripheral subpleural 2 cm of lung parenchyma, only pulmonary arteries and some septal boundaries in an irregular distribution are visible. As soon as interlobular septa are regularly spaced or small airways become visible, these findings represent pathology. It is readily understandable that especially subtle pathological findings, such as an increased number of septal lines, are more easily appreciated in continuously scanned parenchyma that can be evaluated in all three projections as warranted.
Nodular Pattern Pulmonary nodules are spherical or ovoid. They are categorized according to size, attenuation, margination, and localization. A general categorization of size results in micronodules (<3 mm), nodules (3-30 mm), and masses (>30 mm). Measurements of attenuation are problematic, but the appearance can be distinguished as being solid or of ground-glass opacity. Solid nodules may be sharply or poorly marginated, whereas ground-glass nodules are often poorly marginated. Values higher than 150 HU are typical for calcifications and indicate benign, postinflammatory granulomata. A surrounding area of increased ground-glass density is called a "halo." The halo sign may be caused by acute inflammation or hemorrhage. In general, the location or anatomic distribution of nodules is of great importance for the differential diagnosis. According to localization with regard to the anatomy of the secondary lobule, three main distribution categories of nodules are differentiated: perilymphatic, random, and centrilobular (Fig. 11). Once the predominant distribution pattern has been determined, the overall distribution within the complete lung is considered in the differential diagnosis (Fig. 12). Perilymphatic nodules occur predominantly in relation to the lymphatic pathways within the lung, both histologically and on HRCT. They appear along the bronchovascular bundles, the interlobar fissures, the interlobular septa, and the subpleural regions. Central, perihilar nodules along the bronchovascular bundle, individually or clustered, are most typical for granulomata in sarcoidosis. The nodules usually measure less than 5 mm, are well defined, and are regularly marginated. They have a predominance for the perihilar regions and the upper lobes, which is particularly well depicted on coronal reformations. Confluence of granulomata may result in larger irregularly marginated nodules, ground-glass opacities, or consolidations (Criado et al. 2010). There might be an overlap between irregularly thickened interlobular septa caused by lymphangitis carcinomatosa and a perilobular sarcoidosis with nodules predominantly along the interlobular septa. While lymphangitis carcinomatosa is frequently associated with pleural effusion and strong thickening of the central bronchovascular interstitium, it is never characterized by lung distortion.
Sarcoidosis, on the other hand, can show heavy distortion and usually no pleural effusion. Typical for a random distribution is its uniformity throughout the lung without a preference for certain anatomical structures. The involvement tends to be bilateral and symmetrical. A nodular pattern with a random distribution in relation to the leading structures of the secondary lobule is indicative of a disease process with hematogenous spread, as seen in hematogenous spread of malignant disease, miliary tuberculosis, fungal infections, cytomegalovirus, or herpes virus infections. Centrilobular nodules are limited to the centrilobular regions and are never in contact with fissures or the pleura. They either originate from the respiratory bronchioles or bronchioli terminales or from the peripheral pulmonary artery branches. The nodules have a distance of at least several millimeters from interlobular septa and fissures and 5-10 mm from the pleural surface, resulting in the characteristic subpleural sparing. They present as ill-defined, centrilobular ground-glass opacities in exogenous allergic alveolitis or respiratory bronchiolitis. Somewhat larger nodules, mostly ill defined or consisting of solid components with a small halo around them, are caused by bronchiolitis or small foci of bronchopneumonia. In silicosis, nodules can have a centrilobular as well as a subpleural distribution pattern with a predominance in the posterior aspect of the upper lobes. Typically, they are smoothly marginated. The nodules range between 2 and 5 mm in diameter and can be calcified. A tree-in-bud pattern (small branching opacities with nodular endings) also shows a centrilobular distribution; it represents small airway disease with wall-thickened and/or secretion-filled small airways. As described earlier, maximum intensity projections are most helpful in analyzing the distribution of nodules (see Sect. 3.3).
Septal/Reticular Pattern With or Without Signs of Parenchymal Distortion Thickening of the lung interstitium caused by fluid, fibrous tissue, or cellular infiltration usually results in an increase in reticular or linear opacities on HRCT. Interlobular septal thickening is differentiated from intralobular septal thickening, although they are mostly seen together. The normal interlobular septa contain venous and lymphatic structures. They measure approximately 0.1 mm in thickness and are only occasionally or partially seen on HRCT under normal conditions. Thickening of interlobular septa results in outlining of the margins of the secondary lobules in part or completely; a regular network becomes apparent, and the centrilobular arteries are easily identified as small dots in the center of the secondary lobules. For the differential diagnosis it is most important whether increased reticular margins are associated with signs of parenchymal distortion (e.g., traction bronchiectasis), suggesting the diagnosis of fibrosis, or whether there are no signs of distortion, as seen, e.g., in crazy paving (Fig. 13). Any disease causing alveolar filling and subsequent filling of intralobular and interlobular lymphatics will cause a pattern of crazy paving, which describes the combination of ground-glass and reticular densities. Such conditions are seen in edema, bleeding, and pneumonic infiltrations, but also in alveolar proteinosis, storage diseases, or lymphoproliferative disorders.
When caused by fibrosis, intralobular interstitial thickening results in traction bronchiectasis and bronchiolectasis, as well as displacement of fissures. This reticular pattern of thickened intra- and interlobular septa, as well as irregular thickening of the bronchovascular interstitium (interface sign), is an important diagnostic finding in the heterogeneous group of interstitial lung diseases (ILD). The most recent 2013 classification of the American Thoracic Society (ATS) and European Respiratory Society (ERS) distinguishes usual interstitial pneumonia (UIP), the specific histopathological pattern seen in idiopathic pulmonary fibrosis (IPF), from six non-IPF subtypes: acute interstitial pneumonia (AIP), respiratory bronchiolitis interstitial lung disease (RB-ILD), desquamative interstitial pneumonia (DIP), organizing pneumonia (OP), lymphoid interstitial pneumonia (LIP), and nonspecific interstitial pneumonia (NSIP) (Walsh and Hansell 2010). Usual interstitial pneumonia (UIP) carries a particularly poor prognosis: its 5-year survival is approximately 15-30 %. Because of prognostic and lately also therapeutic differences, UIP/IPF is separated from the remaining ILDs. IPF is defined as a specific form of chronic, progressive fibrosing interstitial pneumonia of unknown cause that occurs primarily in older adults, is limited to the lungs, and is associated with the histopathologic and/or radiologic pattern of UIP. Other forms of interstitial pneumonia, including other idiopathic interstitial pneumonias and ILD associated with environmental exposure, medication, or systemic disease, must be excluded. UIP is characterized by the presence of reticular opacities associated with traction bronchiectasis. Honeycombing is critical for making a definite diagnosis following the criteria by Raghu et al. (Raghu et al. 2011) (Fig. 14). For the HRCT-based diagnosis of UIP, ground glass can be present but should be less extensive than reticulation. The distribution of UIP is characteristically predominantly basal and subpleural. Coexistent pleural abnormalities (e.g., pleural plaques, calcifications, pleural effusion) suggest an alternative etiology for the UIP pattern. Also, findings such as micronodules, air trapping, cysts, extensive ground-glass opacities, consolidation, or a peribronchovascular-predominant distribution are inconsistent with UIP and suggest an alternative diagnosis (Fig. 15). A UIP pattern on HRCT is highly accurate for the presence of a UIP pattern on surgical lung biopsy. In the absence of honeycombing, but with imaging features otherwise meeting the criteria for UIP, the HRCT findings are regarded as representing "possible UIP," and surgical lung biopsy is necessary if a definitive diagnosis is required (Raghu et al. 2011). Nonspecific interstitial pneumonia (NSIP) forms the second group of lung fibrosis, having a very variable clinical, radiological, and histological presentation. It may be idiopathic but is more commonly associated with collagen vascular diseases, hypersensitivity pneumonitis, drug-induced lung disease, or slowly healing DAD. The typical HRCT features are ground-glass opacities, irregular linear (reticular) opacities, and traction bronchiectasis. It has a peripheral and basal predominance, with typically (but not always) relative sparing of the immediate subpleural space in the dorsal regions of the lower lobes.
A more acute inflammatory (cellular) type of NSIP, presenting with predominant ground glass, is differentiated from the more fibrotic type, presenting with reticulation and traction bronchiectasis (Fig. 16). In contrast to UIP, NSIP can also demonstrate a very patchy distribution. NSIP is typically characterized by a more uniform pattern, indicating the same stage of evolution of disease, distinct from the multitemporal and morphological heterogeneity of UIP. It has to be noted that UIP is not specific for IPF. UIP is also seen in a number of other DILD, such as asbestosis (pleural plaques and calcifications), sarcoidosis (upper lobe disease, heavy parenchymal distortion, associated nodular disease in perilymphatic distribution), or exogenous allergic alveolitis (bronchovascular-centered fibrosis, air trapping, relative sparing of the posterior costophrenic recesses).
Increased Density Acute interstitial pneumonia (AIP) is histologically characterized by hyaline membranes within the alveoli and diffuse, active interstitial fibrosis, also described as "diffuse alveolar damage" (DAD). There is clinical, pathological, and radiological overlap with ARDS, and patients often present with respiratory failure developing over days or weeks. Typical HRCT features are bilateral ground-glass opacities (patchy or diffuse) and consolidations. Focal sparing of lung lobules frequently results in a geographic distribution. In later stages, architectural distortion with traction bronchiectasis, reticulation, and even honeycombing develops. Respiratory bronchiolitis interstitial lung disease (RB-ILD), desquamative interstitial pneumonia (DIP), and, last but not least, nonspecific interstitial pneumonia (NSIP) belong to the spectrum of smoking-induced lung changes. RB-ILD consists of centrilobular acinar ground-glass nodules that may be confluent with areas of ground-glass opacities and thickening of the bronchial walls. A small percentage shows a mild reticular pattern, mainly in the lower lung zones. Desquamative interstitial pneumonia (DIP) is uncommon in its idiopathic form and mostly associated with smoking. HRCT features are similar to those encountered in RB-ILD, although the distribution is diffuse in DIP and more bronchiolocentric in RB-ILD. The typical HRCT feature of DIP is diffuse ground-glass opacity, sometimes in a geographic distribution; reticulation is uncommon. Organizing pneumonia (OP) is a common reaction pattern secondary to pulmonary infection, connective tissue diseases, inflammatory bowel disease, inhalation injury, hypersensitivity pneumonitis, drug toxicity, malignancy, radiation therapy, or aspiration, but can also be idiopathic. Organizing pneumonia can present with a wide variety of HRCT findings with increased density, ranging from a more nodular pattern to geographically demarcated ground glass or focal consolidations. Suggestive of organizing pneumonia (as opposed to an infectious pneumonia) are sharply demarcated consolidations in a peripheral subpleural distribution or following the bronchovascular bundle (Fig. 17). Mostly the areas of increased density show dilated air-filled bronchi without signs of underlying distortion. A patchy distribution is described as the atoll sign, demonstrating islands with a peripheral rim-like consolidation around central areas of ground glass (reversed halo). Densities along the periphery of the secondary lobules are described as a perilobular pattern.
(Fig. 17 Patterns with increased density: (a) organizing pneumonia with sharply demarcated areas of consolidations and ground glass, (b) crazy paving in alveolar proteinosis, (c) ground glass with air trapping in subacute exogenous allergic alveolitis.)
Decreased Density/Cysts A cystic pattern results from a heterogeneous group of diseases, all having in common the presence of focal, multifocal, or diffuse parenchymal lucencies and lung destruction. For the differential diagnosis, the presence of walls around the lucencies and the form of the lucencies (bizarre shaped or uniformly round) are important. The term "cyst" by itself is nonspecific and refers to a well-circumscribed round or irregular lesion with a visible wall. The wall is usually thin (<2-3 mm) but can have a uniform or variable thickness. Most cysts are filled with air but can also contain liquid, semisolid, or solid material. Cysts can be very small and diffuse, but also large and confluent, resulting in polygonal, bizarre shapes. The presence of a definable wall demonstrated on CT differentiates cysts from emphysema. Cysts are the leading pattern of specific lung diseases, such as Langerhans cell histiocytosis or lymphangioleiomyomatosis (LAM). Other, more rare diseases with cysts are LIP and Birt-Hogg-Dubé syndrome (Fig. 18). Incidental lung cysts without other HRCT abnormalities, without a history of lung disease, and without signs of architectural distortion have been reported as a normal finding in elderly patients; they range from 5 to 22 mm and may be located in all lobes. In the early phase, Langerhans cell histiocytosis (LCH) has a typical nodular pattern with multiple small (<10 mm) centrilobular nodules with irregular margins. Later these nodules increase in size and have a tendency to cavitate and to develop into cystic lesions with a diameter of up to 2 cm. The cysts are often confluent, have bizarre or irregular shapes, and are usually thin walled, but may be thick walled. LCH is predominantly located in the upper lobes, with sparing of the costophrenic sinus, which is better depicted on coronal reformations than on cross-sectional images. The thin walls of the cysts are prone to rupture, with an increased risk of pneumothorax. Concomitant nodules are usually irregular, measure 1-5 mm, and often have a centrilobular distribution. The interspersed pulmonary parenchyma is typically normal, without evidence of fibrosis or septal thickening. Lymphoid interstitial pneumonia (LIP) is uncommon and considered part of a spectrum of pulmonary lymphoproliferative disorders, ranging from benign accumulation to malignant lymphomas. Mostly it is associated with Sjögren's disease, HIV infection, or other immunological disorders. On HRCT, typical findings are sharply demarcated cysts in a subpleural distribution with intraluminal septa, associated with diffuse ground glass of the lung parenchyma. Poorly defined centrilobular nodules, thickening of the bronchovascular bundles, patchy ground-glass opacities, and focal consolidations also belong to the spectrum. In cases with consolidations, a lymphoma needs to be excluded. Lymphangioleiomyomatosis (LAM) usually occurs in young women and is frequently associated with recurrent pneumothorax and pleural effusion. The cysts are distributed bilaterally and diffusely throughout the lung, involving both upper and lower lobes, with involvement also of the lung bases. They are round and measure 0.2-5 cm.
(Fig. 18 Patterns with cystic parenchymal disease: (a) lymphangioleiomyomatosis with uniform cysts and enlarged pleural space after recurrent pneumothoraces, (b) diffuse ground glass and subpleural cysts in lymphocytic interstitial pneumonia (LIP), (c) bizarre-shaped cysts predominantly located in the upper lobes in Langerhans cell histiocytosis (LCH).)
Their walls are thin, ranging from barely visible to 4 mm in thickness. Since the thin walls of the cysts are prone to rupture, there is an increased risk of pneumothorax. Pleural effusion may also be seen. The surrounding parenchyma is typically normal. Nodules are uncommon but may be associated with the cysts. Sometimes it may be difficult to distinguish lymphangioleiomyomatosis and Langerhans cell histiocytosis from emphysema. A helpful finding is that the cystic spaces in LAM and LCH do not have any central nodular opacities, whereas the cystic spaces seen with centrilobular emphysema contain a small central nodular opacity representing the centrilobular artery. Honeycomb cysts constitute the irreversible final stage of parenchymal destruction in patients with interstitial fibrosis (end-stage lung) of UIP. It is seen in IPF but also in collagen vascular diseases, asbestosis, hypersensitivity pneumonitis, or drug-related fibrosis. The cystic spaces are usually round or ovoid and measure between several millimeters and 1 cm in diameter, although more rarely they can be several centimeters in size. Within one individual, however, they are mostly uniform. They have clearly definable walls which are 1-3 mm thick. Honeycomb cysts seem to develop from alveolar disruption and massively dilated alveolar ducts and bronchioles. Honeycombing is associated with other findings of lung fibrosis, such as a septal and reticular pattern, architectural distortion, and traction bronchiectasis. Most patients demonstrate several layers of irregular cystic spaces (honeycombs), which are separated from each other by irregular thick walls and intralobular septa and lines. However, note that a single layer of cystic spaces with thickened walls is also described as honeycombing and fulfills the criteria for a definite UIP under appropriate conditions. Though the definition of honeycombing is quite straightforward, a substantial interreader variability has been described for the diagnosis of honeycombing, which needs attention given the importance of honeycombing for the diagnosis of UIP. Even on HRCT it is not always possible to securely separate the walls of honeycomb cysts from thickened intralobular fibrous bands. Secondly, it is important to differentiate honeycombing from tubular bronchiectasis reaching all the way to the pleura. MinIPs have been described as helpful to differentiate between tubular bronchiectasis and focal cysts (see Sect. 3.3).
Summary Continuous volumetric HRCT offers a number of advantages over discontinuous axial thin-section CT slices, of which the most important refer to improved detection of subtle disease, better and more accurate comparability, and superior quantitative measures. A number of processing options such as MIP and MinIP ease the analysis of HRCT patterns. Most recent developments in detector and reconstruction technology have made up for the dose disadvantage of continuous versus discontinuous data acquisition of the past, allowing for substantial dose savings. All these arguments have led to the fact that HRCT is now routinely based on volumetric data acquisition.
Today essentially every chest CT offers the spatial resolution of an HRCT in all three dimensions, and it depends on the indication for the examination (e.g., malignancy versus diffuse parenchymal disease) whether contrast medium is injected and which slice thickness and reconstruction techniques (e.g., MIP and MinIP) are applied.
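As an illustration of the slab-based processing options mentioned in the summary, the sketch below collapses a volumetric CT stack into thick-slab MIP or MinIP images. It is a generic NumPy sketch under the assumption of a volume ordered (slice, row, column) in Hounsfield units; the function name, array layout, and slab thickness are illustrative choices, not part of any specific scanner or workstation implementation.

```python
import numpy as np

def slab_projection(volume, slab_thickness, kind="MIP"):
    """Collapse consecutive slices into thick slabs by taking the maximum
    (MIP) or minimum (MinIP) intensity along the slice direction."""
    reduce_fn = np.max if kind == "MIP" else np.min
    n_slabs = volume.shape[0] // slab_thickness
    # Group complete slabs; any trailing slices that do not fill a slab are dropped
    slabs = volume[:n_slabs * slab_thickness].reshape(
        n_slabs, slab_thickness, *volume.shape[1:])
    return reduce_fn(slabs, axis=1)

# Synthetic demo volume in Hounsfield-unit range, 60 slices of 64 x 64 pixels
demo = np.random.default_rng(1).integers(-1000, 400, size=(60, 64, 64))
print(slab_projection(demo, slab_thickness=10, kind="MinIP").shape)  # (6, 64, 64)
```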
Bending Study of Six Biological Models for Design of High Strength and Tough Structures High-strength and tough structures are beneficial for increasing the service span of engineering components. Nonetheless, improving structural strength and, simultaneously, toughness is difficult, since these two properties are generally mutually exclusive. Biological organisms exhibit both excellent strength and toughness. Using bionic structures derived from these biological organisms can be a solution for improving these properties of engineering components. To effectively apply biological models to the design of biomimetic structures, this paper analyses the strengthening and toughening mechanisms of six fundamental biological models obtained from biological organisms. Numerical models of the three-point bending test are established to predict the crack propagation behaviors of the six biological models. Furthermore, the strength and toughness of six biomimetic composites are experimentally evaluated. It is identified that the helical model possesses the highest toughness and satisfactory strength. This work provides more detailed evidence for engineers to select bionic models for the design of biomimetic composites with high strength and toughness.
Introduction Material strength expresses the ability to withstand maximum stress before fracture [1], and material toughness represents the resistance to fracture [2]. High-strength and tough materials can prolong product life spans and are, thus, required in many applications [3]. Nonetheless, increasing strength does not always promote toughness. For instance, when strength is increased by increasing stiffness, the toughness can be severely reduced [4]. One practical way to achieve both strength and toughness is using composite structures [5]. Nevertheless, the mechanical properties of current composite structures are not yet satisfying due to a lack of effective structure models [6]. As a result, it is essential to explore new structure models [7]. Inspired by biological organisms that possess the excellent properties of high strength and large toughness, their biological models have been adopted in the design of bionic structures [8]. To achieve these biological properties, the structure-mechanics characteristics responsible for increasing strength and toughness have been analyzed for several biological models [9]. Beniash et al. [10] found that the dislocation of adjacent crystals in the columnar arrangement of enamel leads to crack deflection, thus increasing strength and toughness. Suksangpanya et al. [11] proposed a theoretical model to elaborate on the toughening effect of crack twisting in the helical model. Liu et al. [12] established quantitative criteria for evaluating the crack propagation of the sutured interface and revealed the strengthening and toughening effect of the suture interface. Moreover, numerical approaches for predicting the mechanical behaviors of biological models have been discussed [13]. Gustafsson et al. [14] performed a comprehensive material parameter study by using a 2D extended finite element method (XFEM) interface damage model and simulated crack propagation around an osteon at the microscale. Additionally, biomimetic synthetic structures have been fabricated and the mechanical behaviors of several bionic models experimentally tested. Jia et al. [15] fabricated five biomimetic composites with bioinspired microstructures and characterized their crack resistance features. Agarwal et al.
[16] fabricated a biomimetic tough helicoidal structure inspired by the mantis shrimp's dactyl club and experimentally investigated its mechanical properties. These works demonstrated that high strength and toughness can be combined in bionic structures [17]. Although the above research provides significant knowledge for designing bionic structures, bionic composites with high strength and toughness are not always achieved. This is largely because the mechanisms by which biological models obtain high strength and toughness are not clearly understood [18]. In addition, numerical illustrations of the strengthening and toughening mechanisms of biomimetic composites are rarely performed [19]. Moreover, experimental verifications of the mechanisms of biological models have not been completely investigated [20]. To effectively design bionic structures with high strength and toughness, it is essential to investigate the strengthening and toughening mechanisms of the fundamental biological models through comprehensive studies, including both numerical and experimental tests [21,22]. This paper provides an analysis of the mechanisms of six fundamental biological models using numerical and experimental approaches. In Section 2, the six biological models that can provide high strength and toughness are extracted from biological organisms; Section 3 illustrates the numerical approaches for predicting their strengthening and toughening mechanisms. In Section 4, the experimental tests of the bionic structures are presented. Finally, conclusions are presented in Section 5.
Biological Models Biological organisms of high strength and toughness have varied structures and function mechanisms at different size scales [13]. At the micro- and meso-scales, this section summarizes six fundamental structure models, namely layered, columnar, tubular, sutured, helical and sandwich. The corresponding strengthening and toughening mechanisms are also explained. These six models are considered to comprise the most common structural features for high strength and toughness in biological organisms. These models can also be integrated for structure design and applications.
Layered The layered model in biological organisms is composed of multiple overlapping stiff platelets and a soft viscoelastic matrix (Figure 1) [23]. Layered structures appear in the microstructure of many biological organisms, such as nacre shells (Figure 2a) [24], insect exoskeletons (Figure 2b) [25], fish scales (Figure 2c) [26] and deep-sea sponges (Figure 2d) [27]. For the mechanistic study of this model, Gu et al. [28] presented a systematic investigation to elucidate the effects of the volume fraction of stiff platelet materials on the mechanical response of nacre-inspired additively manufactured layered composites. Experiments and finite element simulations of tensile tests on single edge-notched samples were carried out. The results reveal the importance of the stiff phase in carrying the load and of the soft phase in transferring the load through the platelets by shearing. Narducci et al. [29] designed and tested a bio-inspired carbon fibre/epoxy composite with a layered microstructure. Analytical models were developed to predict the energy dissipation and crack deflection properties. In addition, three-point bend tests were carried out to observe the fracture behaviors.
The results showed that the layered structure is capable of deflecting the crack, avoiding sudden failure in the most highly loaded cross section of the specimen. Liu et al. [30] investigated the interfacial strength and fracture mechanism of the 'brick-mortar' structure using micro-sized cantilever beam and bend samples. The crack propagation path was also investigated via experiment and finite element modelling, and the results were compared with fracture mechanics. It was confirmed that crack deflection to the aragonite/biopolymer interface contributed to a high overall toughness. These pieces of research revealed the strengthening and toughening mechanism of the layered model via experiments, simulations and numerical analysis. When the platelets are subjected to tensile loads, the matrix transfers the loads by shearing interfaces, so that the force required for pulling out platelets increases, thus enhancing the strength and toughness of the model [31]. In addition, crack deflection and crack bridging at the interfaces also improve toughness.
(Figure 2: (a) nacre shell [24], Copyright 2009, Progress in Materials Science; (b) beetle shell [25], Copyright 2008, Advanced Engineering Materials; (c) fish scale [26], Copyright 2012, Journal of Materials Research; (d) concentric layer of deep-sea sponge [27], Copyright 2008, Journal of Materials Research.)
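The load-transfer argument above can be made semi-quantitative with a textbook shear-lag estimate that is not taken from the cited papers: the tensile stress a platelet can build up scales with the interfacial shear strength multiplied by the platelet aspect ratio, up to order-one factors that depend on the stagger geometry. The sketch below implements that rough estimate with purely illustrative numbers.

```python
def platelet_stress_estimate(tau_interface_mpa, length_um, thickness_um):
    """Rough shear-lag estimate of the tensile stress built up in a platelet
    whose faces transfer load by interfacial shear over roughly half its
    length. Order-one geometric factors are deliberately omitted."""
    aspect_ratio = length_um / thickness_um
    return tau_interface_mpa * aspect_ratio / 2.0

# Illustrative numbers only, not measured values from the cited studies
print(platelet_stress_estimate(tau_interface_mpa=20.0,
                               length_um=8.0, thickness_um=0.5))  # ~160 MPa
```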
Columnar The columnar model consists of stiff columns wrapped in a soft matrix, as shown in Figure 3. In this model, the parallel columns can be placed at intervals or compacted. This columnar structure is often found in mineralized organisms, such as in tooth enamel (Figure 4a) [32]. Columns are also widely observed in non-mineralized soft biological materials, such as bamboo fibers (Figure 4b) [33], silk (Figure 4c) [34] and spider silk (Figure 4d) [35]. To assess the strength and toughness of this model, Bajaja et al. [32] quantitatively studied the crack growth resistance and fracture toughness of the columnar structure of human tooth enamel by three-point bending tests. The results revealed that bridging by the organic matrix promoted crack closure. In addition, microcracking due to loosening of columns can lead to energy dissipation. Yeom et al. [36] fabricated enamel-inspired columnar nanocomposites, consisting of columns embedded in a polymeric matrix. The Young's modulus and hardness of the samples were experimentally and numerically measured by performing nano-indentation tests. The results confirmed that the high performance of the columnar structure was attributed to efficient energy dissipation in the interfacial portion between the columns and the organic matrix. As the researchers demonstrated above, the high viscoelasticity between the columns and the matrix improves tensile strength [37]. When the columns fracture and are pulled out, the slip between column and matrix and the microcrack deflection at the matrix-column interface contribute to energy absorption and dissipation, thus enhancing structure toughness.
(Figure 4: (a) tooth enamel [32]; (b) bamboo fibers [33], Copyright 2020, Advanced Materials; (c) silk [34], Copyright 2018, ACS Nano; (d) spider silk [35], Copyright 2020, Advanced Materials.)
Tubular The tubular model refers to parallel stiff tubes staggered in a soft matrix, as shown in Figure 5. These structural elements are commonly found in biological materials that resist impact and compression, such as horse hooves (Figure 6a) [38], ram horns (Figure 6b) [39], dentin (Figure 6c) [40] and whale baleen (Figure 6d) [41]. To reveal the mechanisms of this model, Giner et al. [42] studied the elasticity and toughness properties by comparing experimental tests with finite element simulations. In particular, three-point bending tests were performed and the growth of cracks was simulated through 2D XFEM based on a damage model with a maximum principal strain criterion. The simulation revealed that cracks were frequently arrested or deflected when they encountered a tubule, which agreed with the experimental results. Wang et al. [41] fabricated a tubular structure consisting of a non-mineralized filament matrix and tubules composed of mineralized lamellae by using 3D printing. Three-point bending tests were conducted on both the transverse and longitudinal orientations of the specimens to obtain the J-integral of the structure toughness. The results revealed that cracks growing in the transverse direction were arrested and redirected along the tubules. As a result, the resistance to fracture was enhanced and the J-integral was increased. The research above infers the behaviors of the tubules underlying the strengthening and toughening mechanisms. The stiff tubes can resist highly compressive external forces. The organized pores of the tubes absorb compression energy and enable crack deflection as fracture occurs [43]. Moreover, they can arrest crack growth by removing the stress singularity at the crack tip or by collapsing the tubules when compressed, improving fracture toughness [44].
(Figure 6: (a) horse hoof [38], Copyright 1997, Journal of Experimental Biology; (b) ram's horn [39], Copyright 2010, Acta Biomaterialia; (c) dentin [40], Copyright 2012, Non-Metallic Biomaterials for Tooth Repair and Replacement; (d) whale's baleen [41], Copyright 2019, Advanced Materials.)
Helical The helical model in biological organisms can be described as successive fiber layers in a weak matrix (Figure 7) [45]. The fibers in each layer have relative rotational angles (Δθ) and can also have angle deflection in the vertical direction [46]. A common type of helical structure is a periodic assembly of uniaxial fiber layers, which is referred to as the Bouligand structure [47]. The helical model is commonly seen in the mantis shrimp dactyl club (Figure 8a) [48], fish scales (Figure 8b) [49], beetle exoskeletons (Figure 8c) [50], deep-sea sponges (Figure 8d) [51] and other similar organisms. To evaluate the strengthening and toughening mechanisms of this model, Suksangpanya et al. [46] carried out a combined theoretical and experimental approach to estimate the structure toughness using a 3D printed biomimetic composite material. Three-point bending tests were conducted and the effects of the structural parameters were mathematically modeled. It was found that crack twisting driven by the fiber architecture, crack branching and delamination were the main reasons for avoiding catastrophic failure. Yin et al. [49] adopted an anisotropic phase-field fracture model to predict the toughness of Bouligand structures. Moreover, the Bouligand structure was fabricated and tension tests were performed. The results revealed the advantages of Bouligand structures in promoting isotropy and enhancing fracture toughness. The above research indicated that the helical arrangement can result in a spatial variation of the driving force, which provides significant resistance to multi-directional loads [52]. Moreover, the dislocation of the fibrous layers decomposes the cracks and enables them to deflect, twist or bifurcate, which is responsible for the increase in structure toughness [53].
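As a small illustration of the geometry just described, the sketch below generates the fibre orientation of every ply in a Bouligand-type stack from a constant inter-layer rotation Δθ. The pitch angle and layer count are arbitrary illustrative values, not parameters reported in the cited works.

```python
import numpy as np

def bouligand_layup(n_layers, delta_theta_deg):
    """Return the in-plane fibre angle (degrees) of each layer in a
    helicoidal (Bouligand) stack with a constant inter-layer rotation."""
    angles = delta_theta_deg * np.arange(n_layers)
    return np.mod(angles, 180.0)  # 0° and 180° describe the same fibre direction

print(bouligand_layup(n_layers=10, delta_theta_deg=18.0))
# one full 180° rotation over 10 layers -> [0, 18, 36, ..., 162]
```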
Sutured The sutured model contains stiff suture teeth and a compliant soft interface layer, generally possessing wavy or interdigitating interfaces (Figure 9). The sutured model usually appears at interfaces of biological organisms where it is necessary to balance intrinsic strength and flexibility, for instance deer skulls (Figure 10a) [54], boxfish scales (Figure 10b) [55], turtle shells (Figure 10c) [56] and the pelvis of the three-spined stickleback (Figure 10d) [57]. To understand the mechanisms for the high strength and toughness of the model, Cao et al. [57] carried out a numerical and experimental study based on 3D printed bionic suture joint specimens. The tensile failure behavior of the specimens was systematically studied and the failure mechanisms of the joints were explored by studying the influences of critical geometric parameters. It was revealed that a tooth-shaped or sinusoidal curve of the suture interface can improve the strength and toughness of the structure. Rivera et al. [58] fabricated a series of biomimetic composites with a sutured structure, including ellipsoidal geometry and laminated microstructure. Tensile tests and 2D finite element simulations were conducted. The results revealed that the suture structure with ellipsoidal geometry can provide mechanical interlocking, which increases strength and toughness significantly.
The above studies implied that the sutured interfaces can transfer and distribute loads, thus reducing stress concentrations and increasing structure strength [59]. Interlocking of the two stiff components occurs at the interface under tensile load, which can improve strength and toughness [12]. Furthermore, suture tooth fracture and interfacial shear failure dissipate energy.
(Figure 10: (a) deer skull [54]; (b) boxfish scales [55], Copyright 2015, Acta Biomaterialia; (c) turtle shell [56], Copyright 2009, Advanced Materials; (d) spiny fish pelvic bone [57], Copyright 2019, Journal of the Mechanical Behavior of Biomedical Materials.)
Sandwich The sandwich structure refers to an inner cellular structure wrapped by a dense outer shell. The inner cellular structure can be connected or disconnected, having two- or three-dimensional periodic cores or foam; one such core is the triply periodic minimal surface (TPMS) structure (Figure 11) [60,61]. Sandwich structures are commonly found in stiff but lightweight biological organisms; typical examples are toucan beaks (Figure 12a) [62], skeletons (Figure 12b) [51], antlers (Figure 12c) [63] and horseshoe crab exoskeletons (Figure 12d) [64].
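To make the idea of a TPMS core concrete, the sketch below evaluates the standard level-set approximation of the gyroid, one classical triply periodic minimal surface. The unit-cell size, threshold and sampling grid are illustrative only and are not the core geometry used later in this paper.

```python
import numpy as np

def gyroid(x, y, z, cell=1.0, t=0.0):
    """Level-set approximation of the gyroid TPMS: points with g(x, y, z) ~ t
    lie on the surface. `cell` is the period length of the unit cell."""
    k = 2.0 * np.pi / cell
    return (np.sin(k * x) * np.cos(k * y)
            + np.sin(k * y) * np.cos(k * z)
            + np.sin(k * z) * np.cos(k * x)) - t

# Volume fraction of the gyroid "solid" (g > 0) sampled on a coarse grid
g = np.linspace(0.0, 1.0, 64)
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
print(float((gyroid(X, Y, Z) > 0).mean()))  # roughly 0.5 for t = 0
```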
To show the strengthening and toughening mechanisms of this model, Bang et al. [65] conducted experimental and 3D simulation studies on the compression properties of the sandwich structure based on compression tests. The simulation results were consistent with the experiments, confirming that the porous sandwich structure has excellent energy absorption capability. Pathipaka et al. [66] fabricated honeycomb and foam sandwich structures and demonstrated their excellent energy absorption capabilities due to their porous characteristics. As the researchers illustrated above, the dense shell of the sandwich structure is rigid, which contributes to high strength, whilst the inner core or foam effectively absorbs energy under bending and compression forces. In particular, the inner porous material deflects cracks by forcing them to pass through pore or foam surfaces as failure occurs, which enhances structure toughness [67].
Numerical Modelling Although many studies have experimentally demonstrated the high strength and toughness of the above six bionic models, simulations of crack propagation using 3D models are yet to be conducted. Numerical modelling of the mechanical behaviors can reveal the strengthening and toughening mechanisms of the bionic models and can, thus, improve structure design. The XFEM in Abaqus [68,69] can model the crack propagation of 3D composite structures. To demonstrate the strengthening and toughening mechanisms, this section numerically illustrates the mechanical behaviors of the six bionic models.
Simulation Setup The analytical modelling of the three-point bending test of the six bionic models can be used to predict crack propagation behaviors [46]. The six bionic models are composed of a soft matrix and a stiff material. For the layered, columnar, tubular, sutured and helical models, the soft and stiff parts are individually imported into Abaqus for assigning different materials. The 'Retain' function is applied to merge these two materials into a single part.
For the sandwich model, which has a distinct boundary between the soft and stiff phases, the materials are conveniently applied after the whole single part is imported. The support and loading pins are both structural steel.
Crack initiation adopts the principle that the local stress exceeds the material's maximum principal stress [68], which can be expressed as

f = ⟨σ_max⟩ / σ⁰_max,   (1)

in which σ⁰_max is the allowable maximum principal stress. The symbol ⟨ ⟩ represents the Macaulay bracket with the usual interpretation (i.e., ⟨σ_max⟩ = 0 if σ_max < 0 and ⟨σ_max⟩ = σ_max if σ_max ≥ 0). When the maximum principal stress of the material σ_max is greater than the threshold σ⁰_max, a crack initiates in the modeled 3D structure. The stresses (both normal and shear) act in three directions, which gives three normal stresses and three shear stresses. The maximum principal stress is σ_max = max(σ₁, σ₂, σ₃), where σ₁, σ₂ and σ₃ are the three principal stresses in their own orientations. These three principal stresses can be obtained by solving the following cubic (characteristic) equation [70]:

σ_p³ − I₁ σ_p² + I₂ σ_p − I₃ = 0,   (2)

in which I₁, I₂ and I₃ are the first, second and third invariants of the stress tensor, determined by the given normal stresses (σ_x, σ_y and σ_z) and shear stresses. Equation (2) has three roots, which are the three principal stresses. Referring to the soft material used in the experiments (Table 1), the maximum principal stress for the soft matrix is set at 6.5 MPa. For the stiff material, a range of 350-550 MPa is applied in the simulation models, considering the variation in the mechanical properties of the experimental samples. In addition, it is necessary to designate the displacement at failure, which represents the total displacement that triggers material damage. A small value of this parameter indicates a brittle material, which damages more readily than the soft material under the same load. According to the elongations of the materials used in this work, the displacement at failure of the soft matrix is set at 0.2 mm, while it is 0.01 mm for the stiff material. The viscosity coefficient is set at 0.005, according to the properties of the two materials. These parameters and values are also listed in Table 2. Figure 13 shows the simulation setup for the six bionic models after meshing in Abaqus. Each bionic model is symmetrically tied to two bottom supporters of semicircular columns. The supporter, soft and stiff materials are shown in grey, white and yellow, respectively. A pre-crack at the bottom centre of each bionic model is used to initiate propagation. A specified displacement is applied to the top semicircular column, which drives the subsequent crack propagation.
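To make the crack-initiation check above concrete, the following is a minimal NumPy sketch (not the Abaqus implementation): the principal stresses are obtained as eigenvalues of the symmetric stress tensor, which is equivalent to solving the characteristic Equation (2), and the ratio of Equation (1) is then compared with the thresholds quoted from Table 2. The example stress state and the mid-range stiff threshold of 450 MPa are assumptions for illustration only.

```python
import numpy as np

def max_principal_stress(s_x, s_y, s_z, t_xy, t_yz, t_zx):
    # Principal stresses = eigenvalues of the symmetric stress tensor,
    # equivalent to the roots of the cubic characteristic equation (2).
    stress = np.array([[s_x, t_xy, t_zx],
                       [t_xy, s_y, t_yz],
                       [t_zx, t_yz, s_z]])
    return np.linalg.eigvalsh(stress).max()

def maxps_ratio(sigma_max, sigma_allowable):
    """f = <sigma_max> / sigma0_max; crack initiation is flagged when f >= 1."""
    macaulay = max(sigma_max, 0.0)  # Macaulay bracket <x>
    return macaulay / sigma_allowable

# Thresholds quoted in the text (Table 2): 6.5 MPa for the soft matrix and a
# 350-550 MPa range for the stiff material (a mid-range value is assumed here).
SIGMA0 = {"soft": 6.5, "stiff": 450.0}

s_max = max_principal_stress(5.0, 1.2, -0.8, 2.0, 0.5, 0.3)  # MPa, example state
print(f"sigma_max = {s_max:.2f} MPa, f_soft = {maxps_ratio(s_max, SIGMA0['soft']):.2f}")
```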
Figure 14 presents the simulated crack propagation obtained by modelling the three-point bending tests of the six bionic models. The color scale denotes the deformations during propagation. The left column displays the whole models at the final step, and the right column zooms in on the deformation or fracture features of a single material. For the layered model in Figure 14a, the crack has extended straight towards the stiff platelet; however, it does not cause the stiff platelet to crack, as predicted by other researchers [71]. This means that the maximum stress of the stiff material is lower than the threshold. Figure 14b shows that the crack propagates a little further in the soft matrix after it has approached the column. This infers that the solid stiff material can resist higher stress and is thus effective in resisting crack propagation. By contrast, Figure 14c shows that the crack extends into the stiff part, indicating that the maximum tensile stress is higher than the fracture strength of the stiff part. As the tubes fracture, the crack in the whole model can be restrained at the stiff phases, which represents the failure of the whole model. Figure 14d demonstrates that crack deflections occur in the soft matrix due to the helical arrangement of the columns, and distortions of the stiff fibers during crack propagation are observed. Figure 14e shows that the crack propagation deflects as it extends into the soft phase. This is beneficial for absorbing kinetic energy and also for restraining cracks within the soft phase. Figure 14f depicts slight crack growth in the minimal surface structure. This is because the minimal surface structure of the porous core can undergo large deformation to disperse the concentrated stress, which alleviates stress transition and absorbs energy. To sum up, this section presents numerical investigations of the fracture behaviors of the six biological models using XFEM in Abaqus.
By analyzing the crack propagation behaviors in the simulated three-point bending tests, the strengthening and toughening mechanisms of the six bionic models are further revealed. The results are consistent with the analysis in the previous section.

Experimental Verifications
To promote the application of bionic structures to engineering components, six biomimetic structures based on the six bionic models are fabricated. Experimental three-point bending tests are conducted to examine the fracture behaviors of crack propagation. In addition, the strength and toughness of the six bionic structures are evaluated.

Sample Fabrications
For the six samples, the soft and stiff materials are, respectively, Agilus30 and the hard resin VeroBlackPlus, both produced by Stratasys. The 3D printing technology is PolyJet, and the samples were provided by DEE 3D. The size errors between the 3D models and the fabricated samples are between 0.1 and 0.2 mm. The deviations in the quoted mechanical properties, such as Young's modulus and tensile strength, are around 18%. According to simulations over the tested parameter ranges, these deviations do not influence the results. Referring to the standard ASTM test model [72], the sample sizes for the three-point bending test are illustrated in Figure 15. The length, width and height are, respectively, 135 mm, 15 mm and 30 mm for all specimens. Initial cracks (notches) are applied to promote crack extension. The notches are 5.55 mm in height and 1.8 mm in width and have a crack closing angle of 60°. The bionic specimens designed from the six bionic models are shown in Figure 16. Using 3D printing, these six bionic structures were fabricated, as presented in Figure 17. The material properties for the soft and hard parts of each specimen are listed in Table 1. The masses of the layered, columnar, tubular, sutured and helical samples are around 70.5 g, and the sandwich sample is 30.1 g.
Figure 15. Sizes of the six specimens for the three-point bending test.

Experimental Tests
Referring to the ASTM three-point bending test standard [72], experimental tests of the six fabricated bionic structures are carried out. Figure 18 illustrates the three-point bending test of the bionic layered structure using a microcomputer-controlled electronic universal testing machine manufactured by WANCE Ltd. (Shenzhen, China). The support span of the setup is 100 mm. The loading rate is maintained at 10.0 mm/min for the whole test period, and the reaction forces of the specimen are recorded. Figure 19a-f presents the test results of the six bionic structures. For the layered model, the crack propagated through the soft phases for the whole failure process and deflected when approaching the stiff materials.
This indicates that the stiff bricks did not support the high compression load. For the columnar model, the volume fraction of the stiff material is slightly lower than in all the other models, which causes larger deformation of the soft matrix. Despite the large bending deformation of the specimen, the crack hardly propagated and the specimen remained intact. For the tubular model, the stiff tubes suffered brittle fractures and the crack propagated in a straight line. For the helical model, the crack twists along the layers of aligned fibers; in addition, delamination occurs as the crack extends to different layers. For the suture model, the pre-crack initially propagated a small distance; after this, the whole structure experienced brittle fracture, which generated a new crack path. This is because the maximum stress of the stiff material is higher than the threshold, and this model contains a larger fraction of the stiff material. For the sandwich model, the crack expanded slowly in the soft material, and the specimen fractured to failure when the crack reached the outer hard shell. It is concluded that most of the crack propagation features were predicted by the simulations. All models suffer failure except the columnar structure, which may indicate that the bionic columnar structure exhibits high toughness. To assess the actual improvement offered by the nature-inspired designs of the six bionic structures, a reference model based on conventional laminated composites was used [73]. Laminated composites are usually comprised of layers of different materials and are widely used in industry due to their remarkable strength and toughness [74]. Figure 20a shows the laminated structure of the reference model, which has the same sizes as shown in Figure 15. It consists of five layers of 1.5 mm thick stiff material arranged in parallel in the soft matrix. In this reference 3D structure model, the white laminated layers are the stiff material, which has the same mechanical properties as the black material in the previous models. Figure 20b shows the bending test of the reference model using the same machine (WANCE, Shenzhen, China), and Figure 20c presents the test result of this sample.
Due to the relatively high fraction of the soft matrix, crack propagation was not observed in this sample either. The force-displacement curves of the six structures and the reference model are plotted in Figure 21. Initially, the compression forces increase with crack growth. Excluding the suture model, the forces maintain a decreasing trend after reaching a peak. Specifically, the layered and helical models decrease in a wave-like form, because the arrangement of the stiff parts affects crack deflection. The tubular model exhibits a zigzag shape, which is attributed to the individual fracture of the stiff tubes. The columnar model did not suffer a dramatic drop, as its crack extension was restricted by the stiff columns. The first drop for the suture model occurred because the crack extended into the soft part. Eventually, both the suture and sandwich models suffer a sharp drop because brittle failure occurs as the cracks extend into the stiff phases. The reference model did not experience crack expansion due to the large volume fraction of soft material. Since the seven structures are the same size, their strength can be assessed by the maximum compression force during the three-point bending tests.
The toughness of the six bionic structures can be quantified using the J-integral of nonlinear fracture [75,76]:

J = (2 / (B b)) ∫ F dS,

where B and b are, respectively, the thickness of the specimen and the remaining ligament of the specimen (B = 15 mm, b = 5.55 mm), F is the applied load and S is the loading displacement. The results for the maximum compression forces and the toughness of the six bionic structures are presented in Figure 22. Noting that the layered, columnar, tubular, sutured and helical samples have equivalent mass (70.5 g), it is observed from Figure 22 that, compared to the reference model, the tubular, helical and suture structures have relatively greater strength, and the layered, columnar and helical structures have relatively greater toughness. Among these five models, the layered model exhibits the lowest strength because the stiff bricks could not sustain high loads. The sutured model exhibits the highest strength; nonetheless, its toughness is the lowest among the five models, because its fraction of stiff material is so high that it causes brittle failure. The helical model may be the best overall, since it exhibits the highest toughness and relatively high strength. The sandwich model has the lowest strength and toughness, but its material mass is less than half that of the other models.
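As an illustrative post-processing sketch (assumed, not the authors' code), the two measures described above can be computed directly from a force-displacement record: strength as the peak force, and toughness from the reconstructed J-integral expression with B = 15 mm and b = 5.55 mm. The force-displacement data below are synthetic.

```python
import numpy as np

B_MM, LIGAMENT_MM = 15.0, 5.55  # specimen thickness B and ligament b quoted above

def strength_and_toughness(displacement_mm, force_n):
    # Strength proxy: maximum compression force reached during the test (N).
    peak_force = float(np.max(force_n))
    # Area under the force-displacement curve by the trapezoidal rule (N*mm).
    area = float(np.sum(0.5 * (force_n[1:] + force_n[:-1]) * np.diff(displacement_mm)))
    # Reconstructed J-integral form: J = 2/(B*b) * integral(F dS); N/mm = kJ/m^2.
    j_integral = 2.0 * area / (B_MM * LIGAMENT_MM)
    return peak_force, j_integral

# Synthetic force-displacement record (illustrative only, not measured data).
disp = np.linspace(0.0, 8.0, 200)            # mm
force = 900.0 * disp * np.exp(-disp / 3.0)   # N
f_max, j = strength_and_toughness(disp, force)
print(f"peak force = {f_max:.0f} N, J = {j:.1f} N/mm")
```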
Discussions
This work focuses on investigating how structure elements affect the strength and toughness of bionic models inspired by nature. To extract biological models with high strength and toughness, the biological function mechanisms of structure elements in biological organisms are investigated. Six biological models are extracted from a broad range of biological organisms over micro- and meso-scales. The presented analysis of biological mechanisms is from a biological systems perspective, and it is hypothesized that the six bionic models have significant potential for achieving high strength and toughness, owing to their intrinsic composite structures [13,77]. The detailed analysis of the strengthening and toughening mechanisms of the structure elements in the biological models demonstrates the effectiveness of using the bionic models to increase strength and toughness. By applying analytical models for assessing material failure, numerical estimations of the strength and toughness of the bionic models are conducted. In earlier work on assessing material failure [78], the researchers successfully modelled stochastic fracture behaviors of ceramics with respect to different microstructural features. Their finite element analysis methodology adopted a fracture mechanics model to predict the strength scatter in ceramics. However, those models are applicable only to a single material; their applicability to composite materials has not been verified. By contrast, in our work the fractures across different materials can be modelled. In earlier work on numerical estimations of strength and toughness [79], the authors developed a constitutive model and applied it to finite element simulations. Because the damage process was formulated based on the fracture mechanics of an isotropic damage model, the fracture behaviors of 2D composite structures were reasonably predicted. However, their damage model has not been tested for predicting 3D composite structures. By contrast, our work demonstrates simulations of crack propagation for 3D composite structures. The simulations also provide details on how the structure elements gain high strength and toughness by changing crack trajectories and alleviating concentrated stress. To resemble the experiments more realistically, the contact parameters will be improved in future work. Using prevailing 3D printing technology, the biomimetic composites based on the six bionic models can be conveniently fabricated. Experimental tests confirm the validity of the hypothesis that structure elements can change crack propagation directions and release stress through mechanical deformations [15]. A higher fraction of the soft matrix causes more deformation but lower strength for the layered, columnar, tubular, sutured and helical models. In addition, the strength and toughness of the six biomimetic composites are compared quantitatively, which can be referred to when applying bionic models to the design of high-strength and tough structures [80]. Future investigations will address combinations of these structural elements, which may have additional effects on the mechanisms. To better identify the best model under different conditions, further investigations evaluating other mechanical properties of the six models, such as tensile and torsional behavior, shall also be conducted, because for both civil and tissue engineering the structural choice will ultimately rely on this type of complete investigation.

Conclusions
To promote the bionic design of high-strength and tough structures, this paper classified six basic biological models from the micro- and meso-scales of biological organisms. Numerical analyses of the six biological models were conducted to explicitly illustrate their strengthening and toughening mechanisms. Most features of the crack propagation behaviors of the six specimens were successfully predicted.
The strength and toughness of the six bionic structures were assessed using three-point bending tests. The results demonstrate that the arrangement of the soft and stiff parts affects the crack propagation behaviors and thus dominates the strength and toughness. The solid columns and the porous core can resist higher stress compared to hollow tubes, for the tested sizes. For the layered, columnar, tubular, sutured and helical models, a higher fraction of the soft matrix causes more deformation but lower strength, while a higher fraction of the stiff material can cause brittle failure and lower toughness. The experimental tests showed that the helical structure exhibits the highest toughness and also high strength. Although the sandwich model shows the lowest strength and toughness, its material cost is much lower. This work provides a straightforward basis for engineers to select bionic models and applies to the design of biomimetic structures with excellent mechanical properties.
Molecular phylogeny reveals food plasticity in the evolution of true ladybird beetles (Coleoptera: Coccinellidae: Coccinellini)

Background
The tribe Coccinellini is a group of relatively large ladybird beetles that exhibits remarkable morphological and biological diversity. Many species are aphidophagous, feeding as larvae and adults on aphids, but some species also feed on other hemipterous insects (i.e., heteropterans, psyllids, whiteflies), beetle and moth larvae, pollen, fungal spores, and even plant tissue. Several species are biological control agents or widespread invasive species (e.g., Harmonia axyridis (Pallas)). Despite the ecological importance of this tribe, relatively little is known about the phylogenetic relationships within it. The generic concepts within the tribe Coccinellini are unstable and do not reflect a natural classification, being largely based on regional revisions. This impedes the phylogenetic study of important traits of Coccinellidae at a global scale (e.g. the evolution of food preferences and biogeography).

Results
We present the most comprehensive phylogenetic analysis of Coccinellini to date, based on three nuclear and one mitochondrial gene sequences of 38 taxa, which represent all major Coccinellini lineages. The phylogenetic reconstruction supports the monophyly of Coccinellini and its sister group relationship to Chilocorini. Within Coccinellini, three major clades were recovered that do not correspond to any previously recognised divisions, questioning the traditional differentiation between Halyziini, Discotomini, Tytthaspidini, and Singhikaliini. Ancestral state reconstructions of food preferences and morphological characters support the idea of aphidophagy being the ancestral state in Coccinellini. This indicates a transition from putative obligate scale feeders, as seen in the closely related Chilocorini, to more agile general predators.

Conclusions
Our results suggest that the classification of Coccinellini has been misled by convergence in morphological traits. The evolutionary history of Coccinellini has been very dynamic in respect to changes in host preferences, involving multiple independent host switches from different insect orders to fungal spores and plant tissues. General predation on ephemeral aphids might have created an opportunity to easily adapt to mixed or specialised diets (e.g. obligate mycophagy, herbivory, predation on various hemipteroids or larvae of leaf beetles (Chrysomelidae)). The generally long-lived adults of Coccinellini can consume pollen and floral nectars, thereby surviving periods of low prey frequency. This capacity might have played a central role in the diversification history of Coccinellini. Electronic supplementary material: the online version of this article (doi:10.1186/s12862-017-1002-3) contains supplementary material, which is available to authorized users.

Background
Ladybirds (Coccinellidae) are a well-defined monophyletic group of small to medium-sized beetles of the superfamily Coccinelloidea, formerly known as the Cerylonid Series within the superfamily Cucujoidea [1][2][3]. The relationships between the currently recognized 15 families of Coccinelloidea are not well understood, but comprehensive molecular phylogenetic analyses of Coccinelloidea [2] suggested that Eupsilobiidae, a mycophagous group of small brown beetles, previously included as a subfamily of Endomychidae [4,5], are the sister group of Coccinellidae.
Coccinellidae, which comprises 360 genera and about 6000 species world-wide, is by far the largest family of coccinelloid beetles and, with the notable exception of the parasitic Bothrideridae, the only predominantly predatory lineage of Coccinelloidea. The ancestor of Coccinellidae presumably lived in the Jurassic (~150 Mya [6]) and even a Permian-Triassic origin of Coccinelloidea has been suggested [7]. The development of a predatory life style in the ancestor of Coccinellidae was possibly a relevant event for the evolutionary history of this beetle lineage, with herbivory, sporophagy and pollenophagy being derived from this predatory mode of life. Most of the traditional classifications of Coccinellidae [8][9][10] recognize six or seven subfamilies (i.e., Chilocorinae, Coccidulinae, Coccinellinae, Epilachninae, Scymninae, Sticholotidinae, and sometimes Ortaliinae, each with numerous tribes). The foundation of this system was developed by Sasaji [11,12] based on comparative morphological analyses of adults and larvae from species of the Palaearctic Region, mostly Japan. Kovář [9] presented a major modification of Sasaji's classification on a global scale, recognizing seven subfamilies and 38 tribes. The classifications proposed by Sasaji [11] and Kovář [9] were found to be largely artificial and phylogenetically unacceptable by Ślipiński [13], who argued for a basal split of Coccinellidae into two subfamilies, Microweiseinae and Coccinellinae, with the latter containing most of the tribes, including Coccinellini. Six subsequent papers on the molecular phylogeny of the family Coccinellidae [14][15][16][17] and Cucujoidea [2,3] corroborated the monophyly of the family and of the two subfamilies recognized by Ślipiński [13]. They also provided strong evidence for the monophyly of Coccinellini. Based on results of phylogenetic analyses of molecular data and a combination of molecular and morphological data from Coccinellidae, Ślipiński and Tomaszewska [18] and Seago et al. [17] formalized the taxonomic status of Coccinellini as a tribe within the broadly defined Coccinellinae. Coccinellini, commonly referred to as 'true ladybirds', comprises 90 genera and over 1000 species world-wide. The tribe includes many charismatic and easily recognised beetles that are often seen on aphid-infested trees and shrubs in natural and urban landscapes. It is also one of the most frequently studied groups of beetles, the subject of thousands of peer-reviewed scientific papers on biology, genetics, colour polymorphism, physiology and biological control, summarized in various influential books [19][20][21][22]. Coccinellini are generally viewed as predators of aphids, but their diet is much more diverse and often includes other hemipterous insects (i.e., heteropterans, psyllids), beetle and moth larvae, pollen, fungal spores, and even plant tissues. Coccinellini display extraordinary morphological diversity in all life stages and are among the most conspicuously and attractively coloured beetles, often bearing strikingly red or yellow elytra, with contrasting black spots, stripes, or fasciae (Figs. 1 and 2). These vivid colours are aposematic, warning predators that these beetles are distasteful and produce noxious or poisonous alkaloids [23] excreted as droplets of fluid during a 'reflex bleeding' behaviour. Many species of Coccinellini are also of great economic importance as biological control agents or unwanted invaders on a scale of entire continents (e.g., the multicoloured Asian ladybird beetle, Harmonia axyridis [24]).
Fig. 1 Representative spectrum of Coccinellini morphologies and feeding habits: a Coccinella septempunctata, adult feeding on aphids; b Coelophora variegata, adult feeding on aphids; c Heteroneda reticulata, pupa being parasitized by a phorid fly; d Cleobora mellyi, larva feeding on a larva of Paropsis charybdis (Chrysomelidae); e Halyzia sedecimguttata, larva feeding on mildew; f Harmonia conformis, adult feeding on psyllids; g, h Bulaea lichatschovi, larva and adult, feeding on leaves and buds of Bassia prostrata. Photograph credits: a, b Paul Zborowski; c Melvyn Yeo; d Andrew Bonnitcha; e Gilles San Martin; f Nick Monaghan; g, h Maxim Gulyaev.

Surprisingly, relatively little is known about the phylogenetic relationships and the evolutionary history of this ecologically important and species-rich beetle lineage. The phylogeny of the tribe has not been studied in detail and its subordinated taxonomic classification is largely regional and non-phylogenetic, impeding comparative analyses of important features of coccinellid evolution, such as host preferences, on a global scale. So far, published research on the evolutionary history of Coccinellidae has focussed on the phylogeny of the entire family and included only a very limited set of Coccinellini. The study by Magro et al. [15] included more species and genera of Coccinellini (i.e., 32 species, 15 genera) than any other investigation, but the authors' taxon sampling was heavily focused on European species. Their data set differed from a smaller set of Asian taxa (24 species, 15 genera) analysed by Aruggoda et al. [16] and a similar sized but more global data set (23 species, 16 genera) by Robertson et al. [2]. In addition to the taxonomically different data sets, the molecular hypotheses put forth in the cited papers had very weak support, especially at deeper nodes within Coccinellini, and each study recovered incongruent relationships among the genera. More comprehensive morphological and molecular research is required to improve the global classification of Coccinellini and to establish a reliable generic classification for this tribe. Here we present molecular phylogenetic analyses based on a world-wide and taxonomically broad sampling of Coccinellini, representing all major lineages and analysing the phylogenetic signal of four genes (one mitochondrial and three nuclear) using Bayesian and Maximum-Likelihood (ML) phylogenetic approaches. The aims of our study are to: (1) assess the monophyly of Coccinellini; (2) generate the first comprehensive phylogenetic hypothesis about generic relationships within the tribe Coccinellini; (3) test if some formerly recognised tribes of Coccinellini (i.e., Discotomini, Halyziini, Singhikaliini, and Tytthaspidini) merit recognition as subtribes; and (4) reconstruct the evolution of selected morphological characters and of food preferences within Coccinellini.

Taxon sampling and morphology
We analysed 38 species of Coccinellini belonging to 32 of 90 genera. They represent all previously proposed tribes currently included in Coccinellini (i.e., Coccinellini - 23 of 67 genera, Discotomini - 1 of 5 genera, Halyziini - 3 of 8 genera, Singhikaliini - 1 genus (monotypic tribe), and Tytthaspidini (=Bulaeini) - 4 of 9 genera) and 14 outgroup species, representing a variety of coccinellid subfamilies and tribes, and two species of Corylophidae.
Our taxon sampling was not designed to assess relationships within the family Coccinellidae, but was aimed at inferring the relationships within the tribe Coccinellini and tracing the evolution of morphological traits and food preferences. We selected species with known biology and food preferences, if tissue samples containing DNA were available to us. The biology of Seladia beltiana Gorham (former Discotomini) and Singhikalia duodecimguttata Xiao (former Singhikaliini) is unknown, but the examination of gut contents of two specimens of Seladia sp. revealed abundant fungal spores, suggesting that this species may be fungivorous (A. Ślipiński, personal observation). Gut contents of Singhikalia duodecimguttata Xiao from China and S. latemarginata (Bielawski) from Papua New Guinea showed a mixture of unrecognizable cuticular pieces and fungal conidia (A. Ślipiński, personal observation), which indicates a mixed or fungal diet. We compiled a data matrix with essential information on food preference and the state of six morphological characters of adults and immatures for each species in our study (Additional file 1: Table S3). The selected morphological characters (adult pubescence, female colleterial glands [13], larval dorsal gland openings, larval wax secretions, and pupal gin traps) have been used as diagnostic characters for Coccinellini [8,10,12,13], or (mandible type) in discussions about the food preferred by adult beetles [25,26], but none of these have been phylogenetically tested. Morphological characters were obtained from voucher specimens at the Australian National Insect Collection (CSIRO) and from the literature [8,9,11]. The primary food preference (essential food source) of each species was established from the dissected guts of several representatives of each species and from the literature [8,14,21,[27][28][29][30][31][32]].

DNA sequencing of target genes
DNA was extracted from ethanol-preserved specimens following the standard protocol for animal tissues of the Qiagen DNeasy Blood and Tissue kit. Generally, one specimen per species was used for the extraction. Four nuclear and one mitochondrial gene fragments were amplified by PCR (i.e., two sections of carbamoylphosphate synthetase / aspartate transcarbamylase / dihydroorotase (CAD: CADMC and CADXM), topoisomerase I (TOPO), wingless (WGL), and cytochrome oxidase I (COI)). These genes, contrary to the widely used ribosomal genes (e.g., 18S, 28S), can be aligned with more accuracy. The amplification strategy [33] used degenerate primers with M13 (−21) / M13REV tails attached to the 5′ ends of the forward and the reverse primers, respectively. The primers had either been published previously [34] or were developed by us in the context of the present study (CADXM2; Additional file 2: Table S1). Depending on the PCR yield, PCR products were either sequenced directly or re-amplified in a second and/or third PCR with hemi-nested and/or M13 primers. Initial PCRs were performed in 50-μL reaction volumes (32.8 μL of water, 5 μL of 10× buffer, 4 μL of 25 mM MgCl2, 2 μL of 10 mM dNTP mix, 2 μL of each 10 mM forward and reverse primer, 0.2 μL of 5 U/μL KAPA taq polymerase, 2 μL of template DNA) and a touch-down temperature profile that stepped from 55°C down to 45°C, for conveniently amplifying with all primer pairs irrespective of their specific binding temperature: 25 cycles with 94°C for 30 s., 55°C [−0.4°C each cycle] for 30 s., and 72°C for 60 s. [+ 2 s.
each cycle], followed by 13 cycles with 94°C for 30 s., 45°C for 30 s., 72°C for 120 s. [+ 3 s. each cycle], followed by 72°C for 600 s. [35]. Reamplifications also used 50-μL PCR reactions, but a simplified three-step hot-start temperature profile (22 cycles with 94°C for 30 s., 50°C for 30 s., and 72°C for 60 s. [+ 2 s. each cycle], followed by 72°C for 600 s.). All PCR products were bidirectionally sequenced using Sanger sequencing technology provided by LGC Genomics (Berlin, Germany). All raw reads were assembled with Geneious (v9.1.5; Biomatters, New Zealand [36]) and checked for sequencing errors and ambiguities and, if necessary, manually edited.

Phylogenetic analyses
The coding DNA sequence of each gene was translated to the corresponding amino-acid sequence with the software Virtual Ribosome (version 2.0; [37]). The amino-acid sequences of each gene (CADMC, CADXM, TOPO, WGL, COI) were aligned using MAFFT (version 7.164b; [38]) and the original nucleotide sequences were mapped onto the amino-acid alignments with a Perl script to generate a codon-based alignment of the nucleotide sequences (available upon request to AZ). The nucleotide and amino-acid multiple sequence alignments (MSAs) were visually inspected, and ambiguously aligned or gapped areas were excluded from downstream analyses (i.e., 194 of 3485 sites in the MSAs). All nucleotide sequences were queried against GenBank (NCBI [39]) using the software BLAST+ [40] to check for potential contaminations (e.g., gut content, fungi). We also inferred a neighbour-joining tree (PAUP*4.0b10, Linux, Sinauer Associates, MA, USA; [41]) from the nucleotide sequence of each gene fragment to check for potential cross-contaminations and sample swapping (the results are not shown because these were carried out on a more inclusive data set (Tomaszewska et al., in preparation)). The five MSAs of nucleotides (CADMC: 693 bp, CADXM: 735 bp, TOPO: 678 bp, WGL: 420 bp, COI: 765 bp) were concatenated to form a supermatrix (five-fragment MSA, Additional file 3: Supermatrix S1, 52 sequences, 3291 columns and 1571 informative sites) with a custom Perl script that also generates character sets corresponding to the concatenated gene boundaries (available on request from AZ). To explore potential conflicting phylogenetic signal between the individual gene fragments in the concatenation, each one was excluded in turn from the MSAs and the resulting four-fragment MSAs were analysed using ML as implemented in RAxML (v8.0.26; [42]) (Additional file 4: Fig. S1a-e). The best ML topology and support values from 100 rapid bootstrap pseudo-replicates were compared to the analysis results of the five-fragment MSA, not showing conflict among well-supported nodes (bootstrap values >85%) between topologies. We inferred the optimal substitution models and partitioning scheme with PartitionFinder (version 1.1.1; [43]) using data blocks by gene fragment (CADMC, CADXM, TOPO, WGL, COI) and codon position as input, applying a greedy search approach with branch lengths linked across partitions and the Bayesian Information Criterion (BIC). The best partitioning scheme with corresponding substitution models (Additional file 5: Table S2) was then used to infer phylogenetic trees under the ML optimality criterion, as implemented in GARLI (version 2.01; [44]) (Additional file 6: Fig. S4).
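The codon-based alignment step described above (back-translating the MAFFT amino-acid alignment onto the original nucleotide sequences) was done with a custom Perl script that is not reproduced here; the following short Python sketch illustrates the same back-translation idea on toy data (the sequences shown are hypothetical, not from the study).

```python
def backtranslate(aa_alignment, nt_seqs):
    """Map unaligned coding nucleotide sequences onto an amino-acid alignment,
    producing a codon-based nucleotide alignment (alignment gaps become '---')."""
    codon_alignment = {}
    for name, aa_aln in aa_alignment.items():
        nt = nt_seqs[name]
        codons = [nt[i:i + 3] for i in range(0, len(nt), 3)]
        out, k = [], 0
        for residue in aa_aln:
            if residue == "-":
                out.append("---")      # keep the alignment gap as a codon gap
            else:
                out.append(codons[k])  # next codon corresponds to this residue
                k += 1
        codon_alignment[name] = "".join(out)
    return codon_alignment

# Toy example with two short coding sequences (hypothetical data).
aa_aln = {"sp1": "MK-L", "sp2": "MKQL"}
nts = {"sp1": "ATGAAACTG", "sp2": "ATGAAACAACTT"}
print(backtranslate(aa_aln, nts))
```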
A total of 1080 heuristic tree searches were carried out on the CSIRO compute cluster system Pearcey (Dell PowerEdge M630), and the tree with the highest likelihood score was selected. Bootstrap support values were obtained from 500 non-parametric bootstrap replicates with 10 heuristic tree search replicates each. Bootstrap values were mapped onto the ML tree using SumTrees (DendroPy version 3.12.2; [45]) and visualised with FigTree (version 1.4.2; https://github.com/rambaut/figtree, accessed May 8, 2015). The data were also analysed using a Bayesian method, as implemented in MrBayes (version 3.2.6; [46]) and the BEAGLE library (version 2.1.2; [47]). All model parameters, except branch lengths, remained unlinked, and two independent phylogenetic analyses were run with four chains each, sampling for 10 million generations every 1000th generation. The standard deviation of split frequencies was found to be <0.01, and convergence of the two runs was assessed using Tracer (version 1.6.0; [48]). The first 25% of the sampled trees were discarded as burn-in and the remaining sampled trees from the two runs were pooled. A 50% majority rule consensus tree with clade frequencies (posterior probabilities) was calculated with SumTrees and printed with FigTree (Additional file 7: Fig. S2). To check for a potentially detrimental influence of synonymous substitutions, the five-fragment MSA was fully degenerated with the respective genetic codes, using the software Degeneracy Coding (version 1.4; [49]). The resulting degenerated MSA was analysed using RAxML with the same settings as for the four-fragment MSA data sets (which refer to the five-fragment MSA less one gene fragment) (Additional file 8: Fig. S3).

Ancestral character state reconstruction
The ancestral character states of six discrete morphological characters and of one behavioural character (Additional file 1: Table S3) were inferred using the maximum parsimony (MP) and ML methods as implemented in Mesquite (version 3.1; [50]) and using the ML tree (Fig. 3) as backbone. The Mk1 model, also implemented in Mesquite, was used to calculate the ML probabilities of the ancestral states.

Phylogenetic analyses
The ML and Bayesian phylogenetic analyses of the five-fragment MSA resulted in identical topologies (Fig. 3, ML topology, log likelihood −55,518.302879, and Additional file 7: Fig. S2, the result from the Bayesian analysis). In both cases, the topology is mostly well supported, with 30 of 49 edges having a bootstrap support value of at least 75% and 35 of 49 edges having a posterior probability of at least 0.95 (both here subjectively regarded as "at least moderately supported"). The ML analysis of the degeneracy-coded five-fragment MSA resulted in a similar topology with 21 of 49 edges at least moderately supported (Additional file 8: Fig. S3). These 21 edges were also all present in the above topology generated from the non-degenerated data (Fig. 3). Except for the sister-group relationship between Coccinellini and Chilocorini (bootstrap values of 76% and 60% with the degenerated and non-degenerated data, respectively), support values from the degenerated data are not much higher than those from analysing the non-degenerated data set. The higher bootstrap support values of the analysis with non-degenerated data and the topological congruence between results based on non-degenerated (Fig. 3) and degenerated data (Additional file 8: Fig.
S3), for at least 21 edges with moderate to very strong support, are both indicative of the utility of the synonymous changes for the estimation of the Coccinellini phylogeny. The subsequent discussion, therefore, focuses on analyses of the non-degenerated data set (Fig. 3).

Coccinellini - Monophyly and sister relationship
To assess the support for the monophyly of Coccinellini, we used a comprehensive taxon sampling that represents all previously recognized tribes of Coccinellinae: Coccinellini, Discotomini, Halyziini, Singhikaliini, and Tytthaspidini (incl. Bulaeini). The outgroup includes twelve species of ladybirds classified as Microweiseinae (two species) and the Coccinellinae tribes Chilocorini (two species), Epilachnini (two species), Aspidimerini (two species), Noviini (one species), Sticholotidini (one species) and Coccidulini (two species). In addition, we included two species of fungivorous Corylophidae as more distant outgroup taxa within the superfamily Coccinelloidea [2] (Additional file 9: Table S4). The monophyly of Coccinellini sensu lato [13] was strongly supported with a bootstrap value of 100% and a posterior probability of 1.0. Despite recent attempts to establish phylogenetic relationships within Coccinellidae, there is still no satisfactory resolution within the broadly defined subfamily Coccinellinae that would lead to a stable tribal classification [17]. In previous studies, Chilocorini and Coccinellini were repeatedly recovered as monophyletic groups and as sister taxa of each other [2,15,17]. Our results from the ML and Bayesian analyses are consistent with these findings, but the support is weak (Bootstrap Support (BS) 60%, Posterior Probability (PP) 0.57; Fig. 3). Only moderate support was obtained when analysing the degenerated data set (BS 76%; Additional file 8: Fig. S3). Other previously suggested phylogenetic positions of Coccinellini [14,16] were not supported in our analyses.

Major clades within the tribe
Our analyses recovered three strongly supported clades within Coccinellini (Fig. 3), but relationships between these clades remain unresolved, as they are connected by short edges with low support values (i.e., BS ≤ 38 and PP ≤ 0.57). Clade 1 consists of species of the widespread genus Adalia and of the three New World genera Olla, Cycloneda, and Eriopis that are speciose in Central and South America [51]. Clades 2 and 3 comprise large radiations of primarily Old World species. Clade 2 is composed of species of the Holarctic genus Coccinella, of species of several genera formerly included in Tytthaspidini, of species of the Holarctic genera Coleomegilla and Paranaemia (sometimes classified as Hippodamiini), and of species of the genera Oenopia, Cheilomenes, Aiolocaria and Synona. Within Clade 2, the genus Coccinella forms the sister group to the other species included in this clade. Clade 3 includes many genera. The Holarctic genera Aphidecta and Hippodamia and the diverse Old World genus Harmonia constitute a well-supported group (BS 99%, PP 1.0). The Neotropical genus Seladia (formerly Discotomini) forms a moderately supported (BS 72%, PP 0.99) sister group to a phylogenetically unresolved complex of genera (BS 99%, PP 1.0) that includes the Old World Cleobora, Coelophora, Propylea, all genera of the former Halyziini, the Chinese species Singhikalia duodecimguttata (former Singhikaliini) (BS 99%, PP 1.0), and the widely distributed Indo-Australian species Phrynocaria gratiosa.
Interestingly, the very large species of Coccinellini feeding on Hemiptera, Anatis ocellata (aphids) and Megalocaria (heteropterans), form a sister, albeit weakly supported (BS 52%, PP ≤ 0.50), group to the powdery mildew feeding taxa of the former Halyziini (Halyzia, Illeis and Psyllobora).

Ancestral state reconstruction
The results from the ancestral state reconstruction of adult pubescence, mandible type, female colleterial glands, larval dorsal gland openings, larval wax secretions, and pupal gin traps are presented in Additional files 10, 11, 12, 13, 14 and 15: Figs. S5-S10. Both the ML and MP approaches to ancestral state reconstruction are congruent and revealed that the female colleterial glands (Additional file 10: Fig. S5) and pupal gin traps (Additional file 11: Fig. S6) were most likely present in the most recent common ancestor of Coccinellini, strongly supporting the monophyletic origin of this clade. The common ancestor of Chilocorini and Coccinellini lacked adult dorsal pubescence (Additional file 14: Fig. S9), but it was regained in Singhikalia, the only known genus of Coccinellini with dorsal pubescence. The highly agile larvae of Coccinellini lack both defensive gland openings and protective waxes (Additional files 12 and 13: Figs. S7 and S8), and the ancestral state reconstruction analyses indicate that these features were lost in the most recent common ancestor of Chilocorini and Coccinellini. In our data set, larval and pupal waxes are present in only a few genera of Coccidulini (Rodolia, Sasajiscymnus, Rhyzobius) and appear to have evolved convergently (Additional file 13: Fig. S8). With respect to the food preferences and associated structural modifications of the adult mouth parts (Additional file 15: Fig. S10), the ancestral state reconstruction analyses suggest that preying upon aphids is the ancestral state of Coccinellini, and that feeding on other Hemiptera, beetle larvae, mildew, spores, pollen and plant tissue has occurred multiple times independently.

Phylogenetic analyses
In agreement with previously published molecular phylogenetic studies [2,[14][15][16][17]], the monophyly of Coccinellini was resolved with high confidence in our analyses. Our studies are also consistent with the research based on nuclear and mitochondrial markers [2,15,17] recovering Chilocorini as the sister taxon of Coccinellini. The traditional idea of Coccinellini and Epilachnini being sister groups [9,11], derived from studying morphological characters, remained unsupported by our analyses, as it was in other molecular analyses, which recovered Epilachnini at the base of the tree of Coccinellidae [15], within the taxa classified in Coccidulini (incl. Scymnini; [14,16,17]) or as sister to Coccidulini [52]. Our results (Fig. 4) suggest that the relatively large and aposematically coloured adults of aphid-feeding Coccinellini and herbivorous Epilachnini, both living on exposed surfaces and capable of strong reflex bleeding, are independently derived from smaller, scale-feeding ancestors. Epilachnini, which nest in our inferences within the "Coccidulinae" clade, retained densely pubescent bodies, while the last common ancestor of Coccinellini and Chilocorini lost this character (Additional file 14: Fig. S9), with the exception of the genus Singhikalia (former Singhikaliini), the only known pubescent Coccinellini.
The genus Singhikalia is deeply nested within the tribe Coccinellini and represents an interesting case of convergence, possibly because it is mimicking local members of Epilachnini. Singhikalia ornata Kapur (India, Vietnam, Taiwan) and S. duodecimguttata Xiao (China) are reddish with black colour markings, while S. latemarginata (Bielawski) (Papua New Guinea) is almost entirely black. In this respect, all Singhikalia species match local members of Epilachnini very closely, to the extent that they are often found in the same series in museum collections, suggesting that they may co-occur in the same areas and host plants. The former tribe Halyziini forms a strongly supported monophyletic group placed within Clade 3, in a complex that comprises the speciose but very poorly defined Old World genera Coelophora, Calvia, Phrynocaria, and Propylea, the Asian Singhikalia, the Old World Megalocaria, and the species-poor Holarctic genera Myzia and Anatis. In spite of taxon sampling differences, the relationships between some of these taxa are in agreement with previous studies [2,15,16]. The second branch of Clade 3 consists of the Holarctic Aphidecta as a sister taxon to Harmonia and Hippodamia. The relationship between the last two genera has been recovered before [2,16], but the placement of Aphidecta in this clade is a new position. The exclusively Meso- and South American former tribe Discotomini is a poorly known group of five genera diagnosed by their strongly serrate or pectinate antennae.

Fig. 4 Ancestral state reconstruction of food preferences for the Coccinellini based on the maximum likelihood method in Mesquite.

Their placement within traditional Coccinellinae has been uncertain, and the molecular studies [2,14] published so far placed Discotomini as a sister group to the remaining Coccinellini. The combined molecular and morphological analysis of Seago et al. [17] recovered Seladia as a sister group to the former Halyziini. We have recovered Seladia deeply embedded within Clade 3 at the base of a large, primarily Old World group of taxa, including the former Halyziini. The placement of the small Old World genera Tytthaspis and Bulaea in Coccinellini varied considerably in the past, but their close affinity was recognised by Iablokoff-Khnzorian [53], who pointed out similarities in the male and female genitalia of several genera, later recognized as Tytthaspidini (= Bulaeini) by Kovář [9]. Most of the genera of the former Tytthaspidini form a strongly supported monophyletic group within Clade 2, with Oenopia as the sister group, which is in agreement with Magro et al. [15]. The close relationships between Olla, Adalia and Cycloneda recovered in Clade 1 have been suggested before [2,15], but not the inclusion of the Neotropical Eriopis in this clade. Such an arrangement suggests that the endemic New World genera and the almost cosmopolitan Adalia have had a long evolutionary history independent from the much more diverse and speciose Coccinellini of the Old World. As none of the previously recognised tribes, Discotomini, Halyziini, Singhikaliini, and Tytthaspidini, correspond with major clades within Coccinellini, re-granting any of them subtribal status would render Coccinellini paraphyletic. Our results indicate that the newly discovered clades, Clades 1-3 (Fig. 3), should receive recognition as formal taxonomic units, but this requires corroboration by analysis of a larger data set (Tomaszewska et al., in preparation).
Ancestral state reconstruction
Morphological characters
Coccinellini are generally recognised by their relatively large adults having glabrous and convex dorsal surfaces, often with aposematic colouration, rather long and feebly clavate antennae inserted in front of large eyes, strongly expanded "securiform" terminal maxillary palpomeres, and 'handle and blade' female coxal plates [9,11]. Most adults of Coccinellini can be distinguished by a combination of the above-listed characters, but these are known to occur in taxa classified in other coccinellid groups. Ślipiński [13] expanded the list of diagnostic characters of Coccinellini, arguing that the presence of large paired reservoirs associated with the female terminalia, called "colleterial glands" (Fig. 2i), and the development of "gin traps" between the abdominal tergites of the pupa (Fig. 2j) were unique developments within Coccinellidae and may constitute autapomorphies of Coccinellini. The functions of both these structures are not well understood, but the secretion of the colleterial glands has been linked to mating behaviour or egg deposition in batches [54], while the "gin traps" contribute significantly to pupal defence [55], facilitating quick body flicking and creating sharp edges between segments that pinch the legs or entire bodies of predatory and parasitic invertebrates to discourage oviposition or predation (Figs. 1c, 2e). To test the hypotheses of Ślipiński [13], we traced the evolution of pupal gin traps and female colleterial glands along our main ML tree with Mesquite. Using MP and ML methods applied to character evolution, we found that the above-mentioned characters originated in the common ancestor of Coccinellini (Additional files 10 and 11: Figs. S5 and S6) and consequently regard these characters as autapomorphies of the tribe. In addition to the above traits, we investigated the development of larval dorsal abdominal glands (Additional file 12: Fig. S7) and protective larval waxes (Additional file 13: Fig. S8), present in many groups of ladybirds but absent in Coccinellini. The function and homology of the dorsal glands in larvae of Coccinelloidea have not been thoroughly investigated, but paired openings on the abdominal tergites are present in larvae of most Corylophidae, some Endomychidae and Coccinellidae [13]. They are absent in several ladybird groups, including Coccinellini. Adults of Coccinellidae are known to reflex-bleed, excreting droplets of alkaloid-loaded hemolymph to deter or entangle potential predators. This process is less studied in ladybird larvae, but "bleeding" from the dorsal glands has been observed in Hyperaspis maindroni Sicard (J. Poorani, ICAR-NRCB, India, personal communication) and Orcus bilunnulatus (Chilocorini; A. Ślipiński, personal observation). Larvae of several species of Coccinellini have not been observed to excrete hemolymph when disturbed (A. Ślipiński, personal observation). However, this process has been documented in larvae of Harmonia axyridis [56,57], with droplets originating from the intersegmental membranes of most abdominal segments, and more recently in larvae of Hippodamia variegata (O. Nedvěd, personal observation). It is unclear whether this is a species-specific behaviour or whether it has been overlooked in other Coccinellini.
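The reconstructions described above were run in Mesquite. As a hedged illustration of the same general idea, the sketch below shows a maximum likelihood reconstruction of a binary character (for example, pupal gin traps present/absent) in R with the ape package; the toy tree topology and the trait scores are invented placeholders, not the study's data or its actual workflow.

```r
## Minimal sketch of ML ancestral state reconstruction with ape (not the
## authors' Mesquite workflow); tree and trait values are placeholders.
library(ape)

# Toy rooted tree with branch lengths
tree <- read.tree(text = "((Coccinella:1,Harmonia:1):1,(Chilocorus:1,Rodolia:1):1);")

# Hypothetical binary character: 1 = gin traps present, 0 = absent
gin_traps <- c(Coccinella = 1, Harmonia = 1, Chilocorus = 0, Rodolia = 0)

# Marginal ML reconstruction under an equal-rates (ER) Markov model
asr <- ace(gin_traps[tree$tip.label], tree, type = "discrete", model = "ER")
print(asr$lik.anc)   # scaled likelihoods of each state at the internal nodes

# Plot tip states and pie charts of ancestral likelihoods at the nodes
plot(tree, label.offset = 0.05)
tiplabels(pch = 21, bg = ifelse(gin_traps[tree$tip.label] == 1, "black", "white"))
nodelabels(pie = asr$lik.anc, cex = 0.6)
```

A maximum parsimony reconstruction, as used alongside ML in the study, could be obtained analogously with parsimony-based tools, but the principle of mapping character states onto the fixed ML topology is the same.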
The generalised carnivorous type of adult mandible, with a bifid apex and a molar part bearing two unequal teeth [9], originated in the ancestor of Coccinellidae and has been carried over with very little modification into all predatory lineages of ladybirds, with several independent origins of a single sharp apical incisor in specialized scale predators (Chilocorini, Microweiseinae). All known Coccinellini have apically bifid mandibles, used to pierce their prey, suck body fluids or masticate the entire prey [26]. The mildew-feeding or microphagous taxa (former Halyziini and Tytthaspidini) have the same type of mandible with additional serration along the incisor edge (Halyziini) or a relatively stiff, comb-like prostheca used to scoop spores and pollen (Tytthaspidini). Interestingly, the mandible of the sometimes phytophagous Bulaea does not differ from the microphagous type found in Tytthaspis, but differs markedly from the strongly modified mandibles of the phytophagous Epilachnini.
Food preferences
The evolution of food preferences in Coccinellidae is a very complex issue that has received much attention due to the importance of ladybirds as biological control agents [14,58] and, more recently, due to the recognition of the environmental impact of introduced or invading ladybirds [59] on populations of native species. Some groups of ladybirds (e.g., Noviini, Stethorini, Telsimiini and most Chilocorini) show remarkably stable food preferences, feeding mostly on taxonomically narrow groups of invertebrates [22]. Coccidophagy, preying upon scale insects, which are gregarious organisms of limited mobility, evolved as the ancestral food preference in Coccinellidae [14,17]. Coccidophagous coccinellids are often morphologically and physiologically adapted to a given prey [27,60]. But even very specialized ladybirds occasionally feed and develop on a very different host (e.g., Stethorini, which are specialised on spider mites (Tetranychidae), can develop on whiteflies [61]). Most species of Coccinellini are "general predators", feeding principally on aphids. Character-state reconstruction indicates that the transition from feeding on coccids to aphidophagy was acquired in the ancestor of Coccinellini (Fig. 4), but this feeding preference has arisen independently a few times in Coccinellinae (e.g., in Aspidimerini), in some genera of Coccidulini (e.g., Coccidula, Sasajiscymnus), Scymnini (Scymnus), and Platynaspidini (Platynaspis). Coccinellini have diverse food preferences. While being primarily aphidophagous, they consume a broad spectrum of food that also includes other invertebrates, pollen, nectar, and often spores [14]. These opportunistic predators regularly cannibalize eggs and larvae of ladybirds, including those of their own species, and change their diet depending on the season and availability of prey. The gut contents of many species of Coccinellini examined during this study often consisted predominantly of sternorrhynchan Hemiptera mixed with pollen and, sometimes, fungal spores. Within Coccinellini, our results revealed repeated and phylogenetically independent food preference transitions from aphidophagy to other food sources (Fig. 4 and Additional file 16: Fig. S11):
(a) to specialized and obligate mycophagy in the taxa classified in the former tribe Halyziini (Halyzia, Illeis, Psyllobora), feeding on the hymenium and conidia of powdery mildew fungi (Erysiphales); (b) to a mixed diet in Bulaea, Coccinula and Tytthaspis (Tytthaspidini), with their known diet including spores of various ascomycete fungi [32,62], but also plant tissue (Bulaea), pollen (mainly of Asteraceae in Coccinula), and acari and Thysanoptera (Tytthaspis); (c) to specialised predation on nymphs of plataspid bugs in various phylogenetically independent lineages of some Megalocaria [63] and of Synona [64]; (d) to predation on larvae of Chrysomelidae in at least the Asian Aiolocaria [65], the Australian Cleobora mellyi [66] and the New World Neoharmonia sp. [67] (N. Vandenberg, USDA-Smithsonian, USA, personal communication; not included in our data); and (e) to psyllids (Psylloidea) as the essential food of Harmonia conformis, at least in some geographic areas [68].
Conclusions
This study represents the first molecular phylogenetic analysis of the tribe Coccinellini with world-wide taxonomic sampling. Our phylogenetic analyses revealed strong support for Coccinellini sensu lato [13] being monophyletic and a sister group to Chilocorini. Three major clades were identified within Coccinellini, suggesting that Old and New World taxa, especially South American Coccinellini, have probably evolved separately. None of the major clades corresponds to the previously recognised tribes Discotomini, Halyziini, Singhikaliini, or Tytthaspidini. Consequently, we suggest that these taxonomic units should no longer be used. Further testing with more taxa, especially from South America, is required to corroborate the constitution of, and relationships between, the three major clades of Coccinellini proposed in this study. Our study also provides an understanding of the diversification of Coccinellini and of character evolution within this tribe, particularly the evolution of food preferences. The switch from obligate coccidophagy to aphidophagy in ancestral Coccinellini was accompanied by larval changes for increased agility (loss of the dorsal defensive glands and of strong dorsal ornamentation) and by pupae shedding the larval skin completely, exposing the dorsal gin traps.
Does modern medicine increase life-expectancy: Quest for the Moon Rabbit?
The search for an elixir of immortality has yielded mixed results. Some interventions, like percutaneous coronary interventions and coronary artery bypass grafting, have been a huge disappointment, at least as far as prolongation of life is concerned; their absolute benefit is meager, and that only in very sick patients. Cardiac-specific drugs like statins and aspirin have fared slightly better, being useful in patients with manifest coronary artery disease, particularly in sicker populations, although even their usefulness in primary prevention is rather low. The only strategies of proven benefit in primary/primordial prevention are pursuing a healthy life-style and modifying it when appropriate: cessation of smoking, weight reduction, increasing physical activity, eating a healthy diet, and bringing blood pressure, serum cholesterol, and blood glucose under control.
Introduction
Mortality has tormented human consciousness since time immemorial, and humankind has perpetually searched for a therapy that extends life, the so-called Philosopher's Stone. In this quest, the human race has been only partially successful; life-expectancy has certainly increased, but only up to a certain point. "Nobody has yet achieved even modest life extension beyond the apparent upper limit of about 120 years." Thus, along this road, there have been some successes but mostly disappointments. Typically, when a "new therapy" is introduced there is a lot of hope, but as its use increases its side-effects also become apparent, which starts a whole new drive toward a next generation of the therapy that is safer and more effective; then ever newer side-effects come up again, and this cycle goes on and on, something like the "Carrot and the Horse." Further, the effects of a new therapy are more remarkable when disease has already occurred (secondary prevention) and has already reduced life-expectancy; the more severe or serious the disease, the greater the possible benefit of the therapy. However, although effective therapy may reduce the mortality arising from the disease, it practically never brings it back to normal: "Zeno's Paradox." Recently, advanced technology has provided us with the two highest-profile treatments for coronary artery disease (CAD): coronary artery bypass grafting (CABG) and percutaneous coronary interventions (PCI). Each intervention in itself promised lifesaving relief and consequently was embraced enthusiastically by physicians and even the lay public. Both techniques indeed often provide rapid, dramatic reduction of the alarming pain/angina associated with the disease. Yet, when it comes to prolonging life, their track record is near dismal, providing little or no improvement in survival rates over standard medical and lifestyle therapies except in the sickest of patients. Further, these procedures are also associated with significant side effects. "Doctors generate better knowledge of efficacy than of risk, and this skews decision making," says David Jones, Ackerman Professor of the Culture of Medicine. 1 But why blame only physicians; even "patients are wildly enthusiastic about these treatments," he says. "There've been focus groups with prospective patients who have stunningly exaggerated expectations of efficacy.
Some believed that angioplasty would extend their life."
While early improvement in life-expectancy was a result of the control of infectious diseases, subsequent improvement occurred as a consequence of a focus on life-style diseases. From 1991 to 2004, life-expectancy in the US improved by 2.33 years, mostly through medical innovation (discovery and availability of new drugs) but also by addressing problems like smoking and obesity. 3 In the context of cardiovascular (CVS) diseases, mortality from heart disease in the US fell by more than half between 1950 and 1995, with a resultant increase in life-expectancy of approximately 3½ years, half to two-thirds of which has been attributed to coronary care units, treatment of hypertension, and medical and surgical treatment of CAD. 4,5
3. Approaches to improving life-expectancy
Improvement of life-expectancy with any maneuver essentially depends on:
Severity of disease - baseline mortality is the most important factor operating on the lifespan gain from any procedure. Diseases with a higher baseline annual mortality rate demonstrate more lifespan gained. Thus, therapeutic maneuvers provide more survival benefit in secondary prevention than in primary or primordial prevention.
Duration for which the intervention is applied - the age of the patient.
Caloric restriction
Caloric restriction (CR) is the only consistently reproducible experimental means of extending lifespan. Laboratory experiments show markedly decreased morbidity in laboratory mammals that are fed to only 80% full. 6 Indirect human proof comes from Okinawa, a region in Japan which boasts one of the longest life expectancies for its population in the world, as well as a significantly large population of centenarians (living within the region), despite being one of the poorest regions in the country (bottom ranked in socioeconomic indicators for Japan). This is attributed to diet, high levels of physical activity, and strong cultural values that include good stress-coping abilities. Among its cultural peculiarities, Okinawa embraces Hara Hachi Bu, which means to eat only until 80% full. 7 Further, studies on the oldest living natural population in the world, the Seventh Day Adventists living in California, support these findings. 8 Long-term human trials of CR are now being done. More recent work reveals that the effects long attributed to caloric restriction may be obtained by restriction of protein alone, and specifically of just the sulfur-containing amino acids cysteine and methionine. 9,10
Increased physical activity
Undertaking regular exercise (jogging) increases the life-expectancy of men by 6.2 years and of women by 5.6 years, as per data from the Copenhagen City Heart Study presented at the EuroPRevent2012 meeting. It showed that between one and two-and-a-half hours of jogging per week at a "slow or average" pace delivered optimal benefits for longevity. 11
Metformin
A study by Bannister and co-workers revealed that patients with type 2 diabetes mellitus (DM) initiated on metformin monotherapy not only had 38% better survival than those with DM treated with a sulphonylurea (0.62, 0.58-0.66), but unexpectedly also survived 15% longer than even matched, non-diabetic controls (0.85, 95% CI 0.81-0.90). This brings out an interesting prospect of metformin as first-line therapy and may imply that metformin confers benefit even in non-DM individuals. 12
Geroprotectors
Experimental proof of this class of drugs comes from sirolimus.
It is an immune-modulator (also the drug used in drug-eluting stents) which was found to lengthen mice's lives by up to 14%. Likewise, everolimus was found to partially reverse the immune deterioration that normally occurs with age in a pilot trial in people over 65 years. The drug acts by inhibiting a protein called mTOR (interestingly, mTOR also seems to be affected by calorie restriction), which is involved in sensing the level of nutrients available within cells and shifts cells into an energy-conserving mode; this has anti-aging effects, including on the immune system, and in the trial it improved participants' immune response. 13 In addition to rapamycin analogs, resveratrol, found in grapes, and pterostilbene, a bio-available substance found in blueberries, have also shown a favorable response. 14 Scientists estimate that these drugs could increase life-expectancy by 10 years.
Senolytics
Investigators from The Scripps Research Institute, Mayo Clinic and other institutions have identified a new class of drugs that in animal models dramatically slows the aging process, alleviating symptoms of frailty, improving cardiac function, and extending a healthy lifespan. The two drugs are dasatinib (an anti-cancer drug) and quercetin (a natural compound found in many fruits, vegetables, leaves, and grains; an antihistamine and anti-inflammatory), which can kill senescent cells. Senescent cells are cells which have stopped dividing and accumulate with age, are a non-productive burden on the total cell population, and accelerate the aging process. 15
4.6. Genome sequencing
Geneticist Craig Venter announced that he is pursuing a goal of extending and enhancing the healthy and high-performance life-span by employing the power of human genomics, informatics, next-generation DNA sequencing technologies, and stem cell advances.
Maintaining ideal cardiovascular health
In the middle ages of the human life-span, the major diseases limiting life-expectancy are cerebro-vascular diseases and cancer. Thus, it is not surprising that attempts to prevent the occurrence of CVS diseases (primordial prevention) would have an impact on increasing life-expectancy. The best way to do that seems to be to remain at a level of health which does not permit risk factors to appear (as defined by the American Heart Association [AHA]). It has been suggested that community-based primordial prevention is capable of reducing cardiac deaths by 90% and prolonging life-expectancy by 10 years. 16
5. Primary prevention of CAD
Risk factor modifications
The mere presence of risk factors leads to a reduction in life-expectancy (Table 1). Thus, logically, correction of risk factors would be expected to lead to at least partial restoration of life-expectancy (Table 2). Measures used in primary prevention customarily include smoking cessation, diet modification, physical activity, weight management, and correction of high blood pressure. Since the reduction of life-span is greatest with smoking, smoking cessation is likely to benefit most; it has been estimated that the risk attributable to smoking returns to baseline (a nearly 14-year gain in life-expectancy) after 5 years of smoking cessation. 19 Likewise, a 10 mmHg drop in systolic blood pressure may reduce cardiovascular mortality by up to 40%. 20
Another study noted that, on average, male smokers would gain 2.3 years from quitting smoking; males with hypertension would gain 1.1-5.3 years from reducing their diastolic blood pressure to 88 mmHg; men with serum cholesterol levels exceeding 200 mg/dl would gain 0.5-4.2 years from lowering their serum cholesterol level to 200 mg/dl; and overweight men would gain an average of 0.7-1.7 years from achieving ideal body weight. Corresponding projected gains for at-risk women are 2.8 years from quitting smoking, 0.9-5.7 years from lowering blood pressure, 0.4-6.3 years from decreasing serum cholesterol, and 0.5-1.1 years from losing weight. 21 Eliminating coronary heart disease mortality is estimated to extend the average life-expectancy of a 35-year-old man by 3.1 years and of a 35-year-old woman by 3.3 years. 22
Statins
Statins have been hailed by many as "wonder drugs", with some physicians suggesting mass treatment of the population. Dr John Reckless, chairman of Heart UK and a consultant endocrinologist at Bath University, went as far as suggesting they should be added to the water supply. Some advocate putting them in table salt, like iodine. The question is whether statins are really the wonder drugs they have been made out to be. Particularly in the context of primary prevention (let alone primordial prevention), their role is controversial. While early trials predicted a modest reduction in mortality and a meta-analysis (14 randomized controlled trials (RCTs); 34,272 participants) demonstrated an all-cause mortality reduction of 16% (RR 0.84, 95% CI 0.73-0.96), the analysis was criticized because many of the trials included diabetics and patients with micro-albuminuria (now considered CAD equivalents), so these trials were not purely of primary prevention. 21 On the other hand, another meta-analysis of 11 RCTs involving 65,229 individuals completely free from CVD at baseline demonstrated that use of statins in this high-risk primary prevention setting was not associated with a statistically significant reduction (risk ratio, 0.91; 95% confidence interval, 0.83-1.01) in the risk of all-cause mortality. 23 Likewise, an NNT review of statin drugs given for 5 years for heart disease prevention (without known heart disease) concluded that no life was saved consequent to their use. 24
Aspirin
The role of aspirin (ASA) in primary prevention of CAD is also controversial. While ASA is definitely of use in prevention of CAD, the balance between vascular events avoided and major bleeds caused by ASA is substantially uncertain. A recent meta-analysis shows that, for individuals without pre-existing vascular disease, the reduction in cardiovascular events after adding long-term ASA is likely to be of a similar magnitude to the hazards (Table 3). 25,26
6. Stable CAD
Statins
There is little doubt that statins are effective in reducing mortality and heart attacks in patients with manifest CAD. Several large controlled trials, including 4S, CARE, LIPID, HPS, TNT, MIRACL, PROVE-IT, and A to Z, have shown relative risk reductions between 7% on the low end (in MIRACL) and 32% on the high end (in 4S), with an average relative risk reduction of about 20%. However, the sobering aspect is that absolute risk reductions are much more modest.
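To make the relative-versus-absolute distinction concrete, the short sketch below works through the arithmetic with an assumed baseline risk; the 2% five-year mortality figure is a hypothetical illustration, not a number taken from the trials cited above.

```r
## Illustrative arithmetic only: converting a relative risk reduction into an
## absolute risk reduction (ARR) and number needed to treat (NNT).
## The baseline risk is an assumed value, not trial data.
baseline_risk <- 0.02   # assumed 5-year mortality without treatment (2%)
rrr           <- 0.20   # ~20% average relative risk reduction cited above

arr <- baseline_risk * rrr   # absolute risk reduction
nnt <- 1 / arr               # patients treated for one death prevented

cat(sprintf("ARR = %.1f per 1000; NNT = %.0f\n", 1000 * arr, nnt))
# With a 2% baseline risk, a 20% relative reduction is only 4 fewer deaths
# per 1000 treated (NNT = 250), which is why relative and absolute figures
# can read so differently.
```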
6.3. Renin angiotensin system
Nishino and co-workers investigated the effect of angiotensin-converting enzyme inhibitors (ACE-I)/angiotensin receptor blockers (ARB) on survival in patients with stable CAD (CAD but without MI). They found that all-cause (5.2% vs. 5.6%, p = 0.56) and cardiovascular (3.2% vs. 3.0%, p = 0.23) mortality were similar regardless of whether ACEI/ARB were used or not. 30 On the other hand, the HOPE study showed that ACEI therapy may reduce SCD mortality in those with CAD, stroke, peripheral vascular disease, or diabetes and at least one other cardiovascular risk factor. Over a mean follow-up period of five years, the relative risk of SCD was reduced by approximately 40%, although the absolute risk was low in both treatment and control groups (0.8% vs. 1.3%, respectively). 31
Beta blockers
A post-hoc analysis of the CHARISMA trial revealed that in known CAD but without MI, β-blocker use was not associated with lower ischemic outcomes, but rather with a trend toward a higher stroke risk (3.5% versus 1.5%; hazard ratio, 2.13; 95% confidence interval, 0.92-4.92; p = 0.079). 32
CABG
The first successful CABG procedure was performed by Rene Favaloro of the Cleveland Clinic in 1968. Favaloro's report fired the imagination of many surgeons, initially operating on stable patients but, as skill was acquired, on ever-sicker patients and even during MI. Within the next decade cardiac surgeons were performing 100,000 bypass procedures per year, based only on case reports, with not a single trial available to justify its usefulness. "Surgeons said trials were totally unnecessary, as the logic of the procedure was self-evident: you have a plugged vessel, you bypass the plug, you fix the problem, end of story." But there was a fly in the ointment. The first RCT of CABG, from Veterans Administration hospitals, published in 1977, revealed that there was no survival benefit in most patients who had undergone CABG versus those receiving standard medication. During this time, there were two other separate multicenter RCTs, the European Coronary Surgery Study and the Coronary Artery Surgery Study, which showed, however, that in some high-risk subsets of patients with CAD (significant obstruction of the left main coronary artery; triple-vessel CAD and left ventricular (LV) systolic dysfunction; and two-vessel CAD plus proximal left anterior descending artery disease) there could be a benefit. [33][34][35] However, even this survival advantage vanished on longer-term follow-up (12 years or more). 36 On the other hand, a recent network analysis evaluating 95 trials and 93,553 patients did reveal that CABG reduced all-cause mortality by 20% (rate ratio 0.80, 95% credibility interval 0.70-0.91). Thus, the current evidence shows that CABG may improve survival for a few patients with the most severe forms of CAD, but for most others, while it relieves symptoms, it may not improve life-expectancy.
Table 3 - Risk-benefit analysis of ASA in primary prevention (per 1000 patients per year): men at low-to-high cardiovascular risk - 1-3 major vascular events avoided versus 1-2 major GI bleeding events caused; essential hypertension - 2 major vascular events avoided versus 1-2 major GI bleeding events caused.
Percutaneous coronary angioplasty
The issue with PCI is even more contentious.
Like CABG, PCI rates went from zero to 100,000 procedures in no time, with no clinical trial to assess long-term outcomes, based just on the logic of the procedure and patients' reports of how much better they felt. Yet the first clinical trials, which appeared around the early 1990s, showed no survival benefit of elective angioplasty as compared with medication. Here, however, the physicians took a different approach: by the time the (negative) trial results came, the interventionists claimed that they had moved on to next-generation devices, so the procedure that had been evaluated was already out-dated and the trial therefore meaningless. The matter of fact is that there are several small trials in stable CAD patients comparing PCI with medical therapy (with both single and multi-vessel disease). While most have reported only limited follow-up data, they do show that PCI significantly improved angina relief and short-term exercise tolerance, but did not significantly reduce death, MI, or the need for subsequent revascularization. [37][38][39] In fact, a meta-analysis of six RCTs comprising 1904 patients revealed that the only outcome measure that favored PCI (compared with medical therapy) was angina relief (OR 0.70; 95% CI 0.50-0.98). However, for death, MI, and the need for repeat revascularization, the ORs trended strongly in favor of medical therapy (29-42%) versus PCI. Further, the need for subsequent CABG was nearly 60% higher with PCI, although the situation may be slightly different when newer generations of drug-eluting stents are used. 40,41 On the positive side, as with CABG, there are certain subsets of patients in whom there may be a survival advantage with PCI, particularly primary PCI. A comparative-effectiveness study of CABG surgery in a population of real-world patients (105,156 propensity score-matched Medicare patients) has shown that CABG surgery may be associated with an approximately 19-day increase in life-expectancy versus PCI. 42 On the other hand, in a study by Berger and co-workers, in those high-risk anatomic subsets in which survival is prolonged by CABG (versus medical therapy), revascularization whether by PCI or CABG yielded equivalent survival over seven years. 43
Statins
The RIKS-HIA study demonstrated that early statin therapy (started before or at the time of hospital discharge) could lead to a 25% reduction in 1-year mortality (relative risk, 0.75; 95% CI, 0.63-0.89; p = .001) in hospital survivors of AMI. 44 Even in individuals with elevated CRP (a marker of inflammation/ACS), statin therapy could lead to a gain in life-expectancy of 6.6 months in males and 6.4 months in females. 45
ASA
In the ISIS-2 study, the use of ASA (162 mg chewed) in AMI was associated with nearly a one-quarter reduction in vascular mortality. 46 In other ACS (non-MI), ASA use has been associated with a reduction in fatal or nonfatal MI of 50-70% during the acute phase and of 50-60% at 3 months to 3 years. 47,48
Beta blockers
Several prospective RCTs of beta-receptor blockade therapy after AMI have demonstrated an improvement in survival, primarily due to a decreased incidence of SCD. [49][50][51] The benefit was notable right from the beginning (in the first few months) and persisted on long-term follow-up (even up to 6 years). At follow-up beyond a year, these studies show a 30-45% relative reduction in SCD, with an absolute sudden death incidence reduction of 1.3-6.2%.
On the other hand, the CHARISMA trial showed that β-blocker use in patients with prior MI but no heart failure was associated with a lower composite cardiovascular outcome end-point but no reduction in mortality. 32 The ACC/AHA committee on chronic stable angina recommends beta-blockers as first-line therapy in post-MI patients based on evidence of improved mortality. 52
7.4. Renin angiotensin system
The CREDO-Kyoto PCI/CABG registry cohort-2 investigators studied nearly 12,000 patients undergoing first PCI and demonstrated that patients with MI treated with ACEI/ARB had a survival advantage: 3-year all-cause mortality 6.6% vs. 11.7%, p < 0.0001. However, this benefit was not manifest in non-MI patients. 53
Thrombolysis
The Fibrinolytic Therapy Trialists' Collaborative Group evaluated 9 trials including 58,600 patients and demonstrated highly significant absolute mortality reductions of about 30 per 1000 for those presenting within 0-6 h and of about 20 per 1000 for those presenting 7-12 h from onset, but a (statistically) uncertain benefit of about 10 per 1000 for those presenting at 13-18 h. The benefit was observed among patients presenting with either ST elevation or bundle-branch block, irrespective of age, sex, BP, heart rate, or previous history of MI or diabetes, and was greater the earlier the treatment began. 54 The temporal effect on survival was demonstrated in other studies as well; a retrospective subgroup analysis of patients in the GISSI-1 trial showed that in patients randomized to streptokinase (or control treatment) within 1 hour of symptom onset, there was a 51% reduction in mortality (studied at 21 days). 55
Congestive heart failure
As life-span decreases as a consequence of the severity of disease, several therapeutic interventions may aid in bringing down the mortality.
Drugs
Several drugs may be effective in this situation, the mechanism involving either prevention of the development of lethal heart rhythms or limitation of the on-going damage to heart muscle (Table 4): 57
1. ACE-I.
2. ARBs.
3. Beta-blockers.
4. Aldosterone receptor antagonists (but not other diuretics, which can improve symptoms but do not improve survival).
5. Hydralazine/Nitrates.
The beta-blockers bisoprolol, metoprolol, and carvedilol have been shown to reduce total mortality in several studies. [58][59][60] The effect seems to be predominantly due to a reduction in mortality from SCD (42% with bisoprolol in CIBIS II, an absolute risk reduction of 2.7% over a mean follow-up period of 1.3 years), but the effect may also be due to a reduction in ischemia. 61 The mechanism of mortality reduction with ACE-I is under scrutiny. The CONSENSUS trial showed a 31% reduction in total mortality at 1 year with enalapril (vs. the placebo group) but no reduction in sudden death. 62 On the other hand, in the TRACE study, trandolapril significantly reduced the risk of SCD in post-MI patients with LV dysfunction, a 22% relative decrease and a 3.2% absolute decrease in SCD over a 4-year period. 63 Even aldosterone antagonists seem to significantly reduce mortality in patients with severe heart failure by reducing arrhythmic deaths. In the RALES study, over a 2-year period, the relative risk of SCD was reduced by 29%, and the absolute risk by 3%. 64
Devices
The COMPANION trial was an RCT comparing standard heart failure drug therapy alone, or in combination with either cardiac resynchronization therapy (CRT) or CRT plus an implantable cardioverter-defibrillator (ICD), in heart failure patients in NYHA class III-IV with LVEF ≤35% and QRS width ≥120 ms. The investigators found that while CRT alone helped, mortality was reduced equally in both device arms, with no significant additional improvement in mortality with the combined CRT/ICD (combo) device. Thus, use of a combo device in this situation should be based on the indications for ICD therapy. 65
Surgery
Heart transplantation is the therapy of choice for the treatment of end-stage heart failure and has been shown to improve not only life-span but also exercise capacity and quality of life. 66 In patients with dilated cardiomyopathy, heart failure and significant mitral regurgitation, there are some data suggesting that mitral valve surgery may be associated with a reduction in mortality as well as improvements in quality of life. 67
9. Life-sustaining therapies
Life-sustaining therapy is any intervention, technology, or treatment that forestalls the moment of death, or simply any therapeutic maneuver whose withholding or withdrawal would lead to termination of life. Thus, by definition, these interventions have the effect of increasing the life span of the patient. Many "therapies" may qualify for this category: mechanical ventilation, cardio-pulmonary resuscitation, vasoactive agents, dialysis, artificial nutrition, hydration, antibiotics, and blood replacement products, as well as those specific to cardiac conditions such as ICDs (for secondary prevention of SCD), pacemakers (for bradyarrhythmias), and cardiac mechanical assist devices (for advanced decompensated heart failure). 68
Drugs or life-style modification
The efficacy of either strategy depends on the stage at which medical science intervenes (Table 5). Since life-style diseases now account for nearly two-thirds of all serious diseases worldwide, a strategy targeted toward these diseases is likely to yield the most results. 69 Drugs are powerful, indispensable weapons against CVD once it develops. However, their value in prolongation of life may not be that impressive in stable conditions: in stable CAD, the absolute reduction of mortality with drugs is in the range of 1%. The benefit of therapeutic interventions (drugs and devices) increases with severity of disease, being in the range of 5-10% absolute risk reduction in ACS and around 10% in CHF. However, because these strategies are expensive, and they certainly have at least some side effects, they alone may not be sufficient. In contrast, a healthy lifestyle is inexpensive, safe, and effective. In primary prevention, risk factor modification can be a very effective strategy, contributing an absolute mortality reduction in the range of 5% when all these strategies are combined. On the other hand, the role of drugs in this subset, if any, is controversial and a matter of on-going debate. In a perfectly healthy individual (primordial prevention), the only maneuvers which seem to help are those adhering to a level of health which does not permit risk factors to appear (an ideal life style), a strategy capable of reducing cardiac deaths by 90% and prolonging life-expectancy by 10 years. However, while life-style modifications are effective, they are not simple to implement. They require change and persistence (adherence to change).
Thus, going beyond mere medical care, psychological and nutritional counseling and social and family support may also be required to bring about a life-time behavior modification.
Conclusions
The inevitability of death has been instrumental in the search for a therapy that extends life, the "elixir of life." Over the course of eons, several interventions have been discovered which help in prolonging life, but only in special circumstances. In general, the more severe the disease and the longer the time for which a life-saving intervention is applied, the greater the benefit. PCI and CABG are more useful in sicker patients with CAD, while statins, ASA, and ACE-inhibitors are clearly beneficial in any CAD, although the magnitude of benefit is still small, if any, when used in primary prevention.
Conflicts of interest
The author has none to declare.
Phenotypic Diversity of Almond-Leaved Pear (Pyrus spinosa Forssk.) along the Eastern Adriatic Coast
Almond-leaved pear (Pyrus spinosa Forssk., Rosaceae) is a scientifically poorly researched and often overlooked Mediterranean species. It is an insect-pollinated and animal-dispersed spiny, deciduous shrub or small tree, with high-quality wood and edible fruits. The aim of the study was to assess the phenotypic diversity of almond-leaved pear in the eastern Adriatic region. The examination of phenotypic diversity was based on a morphometric analysis of 17 populations using ten phenotypic traits of leaves. A variety of multivariate statistical analyses were conducted to evaluate the within- and among-population diversity. In addition, Mantel tests were used to test the correlations between geographic, environmental, and phenotypic differences among populations. High phenotypic variability was determined both among and within the studied populations. Leaf-size-related traits proved to be the most variable ones, in contrast to more uniform leaf shape traits. Furthermore, three groups of populations were detected using multivariate statistical analyses. The first group included trees from the northern- and southernmost populations, characterized by high annual precipitation. However, the trees from the second and third groups overlapped strongly, without a clear geographical pattern. In addition, we revealed that both environmental and geographical interactions proved to be responsible for the patterns of phenotypic variation between almond-leaved pear populations, indicating significant isolation by environment (IBE) and isolation by distance (IBD) patterns. Overall, our results provide useful information about the phenotypic diversity of almond-leaved pear populations for further conservation, breeding, and afforestation programs.
Introduction
The genus Pyrus L., family Rosaceae, belongs to the subtribe Pyrinae, which corresponds to the long-recognized subfamily Maloideae, in which the fruit type is generally a pome [1,2]. The genus Pyrus is believed to have originated in the mountainous regions of Central Asia. To our knowledge, there has been no research on almond-leaved pear population variability to date. Scientific research on almond-leaved pear is rare [43] and mostly oriented towards its usage as a rootstock in the Mediterranean area [19,30]. However, more attention should be given to this pear species because of its valuable wood and edible fruits, as well as its great role in ecosystems, since many species of mammals, such as foxes, badgers, and martens, feed on its fruit [44,45]. Furthermore, the possibility of its application in the afforestation of burned and at-risk areas in the Mediterranean should be examined, as it is one of the few species that tolerate repeated passages of fire (personal observations). In the present study, the diversity of almond-leaved pear populations along the eastern Adriatic coast, one of the Mediterranean hotspots of biodiversity, was examined on leaf material from 17 populations. Our main objectives were (1) to determine the intra- and interpopulation phenotypic diversity of almond-leaved pear and (2) to test whether the patterns of phenotypic divergence across the studied populations are better explained by geographical distances (isolation by distance, IBD) or by environmental differentiation (isolation by environment, IBE).
We hypothesized that (1) phenotypic variability is higher within populations than among them and (2) the influence of geographical and environmental factors has a positive relationship with the phenotypic divergence of populations; therefore, we expected significant isolation by distance (IBD) and by environment (IBE).
Plant Material and Studied Leaf Traits
The material for this research was collected from 17 natural populations of almond-leaved pear along the eastern Adriatic coast (Figure 1, Table S1). This area is geomorphologically unique in the world and mostly karstic, with tectonically uplifted, rocky, and steep coastlines and a pronounced anthropogenic influence throughout history [46]. Belonging to the Mediterranean region, the covered area is characterized by hot, dry summer periods and a cool, wet winter period [47]. In each of the studied populations, leaf morphometric material was collected from ten trees/shrubs. Leaves were collected only from short shoots on the external, sunlit part of the tree's crown, as these are generally considered to be the most uniform ones [48]. From each tree, approximately 30 to 50 fully developed and undamaged leaves were collected during the vegetation season in 2020. The leaves were transported to the laboratory, and then pressed and fully dried for further morphometric analysis. Finally, 20 leaf samples were randomly selected per tree/shrub and subjected to morphometric analysis. Vouchers of the studied populations were deposited in the herbarium at the Faculty of Forestry and Wood Technology of the University of Zagreb (DEND). A total of 3400 leaves were scanned and measured using the WinFolia program [49] with a measurement accuracy of 0.1 mm. Ten phenotypic traits were measured on each leaf: leaf area (LA); perimeter (P); form coefficient (FC); leaf length (LL); maximum leaf width (MLW); leaf length measured from the leaf base to the point of maximum leaf width (PMLW); leaf blade width at 90% of leaf blade length (LWT); the angle closed by the main leaf vein and the line defined by the leaf blade base and a point on the leaf margin at 10% (LA1) and at 25% (LA2) of leaf blade length; and petiole length (PL). Finally, 34,000 simple data values were obtained.
Environmental Data
Climate data were obtained from the WorldClim 2 database with a spatial resolution close to a square km [50,51]. Prior to the correlation analysis between morphometric, geographic, and environmental data, the correlations among all 19 WorldClim bioclimatic variables for all studied populations were calculated to exclude the highly correlated ones [52]. The following bioclimatic variables were used in the statistical analysis: BIO1 (Annual Mean Temperature); BIO3 (Isothermality, (BIO2/BIO7) × 100); BIO4 (Temperature Seasonality, standard deviation × 100); BIO5 (Max Temperature of Warmest Month); BIO8 (Mean Temperature of Wettest Quarter); BIO9 (Mean Temperature of Driest Quarter); BIO12 (Annual Precipitation); BIO15 (Precipitation Seasonality, Coefficient of Variation); BIO17 (Precipitation of Driest Quarter); and BIO19 (Precipitation of Coldest Quarter). Finally, ten bioclim variables, solar radiation in June (SOLAR6), and altitude were selected to describe the ecological characteristics of the studied populations, for the principal component (PC) analysis and for the calculation of environmental distances (Table S1).
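As a hedged illustration of this screening step, the sketch below flags highly correlated bioclim variables in base R; the data frame, the random values, and the 0.9 cutoff are assumptions for demonstration, since the exact threshold used in the study is not stated here.

```r
## Minimal sketch of screening WorldClim bioclim variables for collinearity.
## 'bioclim' and the 0.9 threshold are hypothetical placeholders.
set.seed(42)
bioclim <- as.data.frame(matrix(rnorm(17 * 19), nrow = 17,
                                dimnames = list(NULL, paste0("BIO", 1:19))))

cor_mat   <- cor(bioclim, method = "pearson")
threshold <- 0.9

# List variable pairs whose absolute correlation exceeds the threshold
idx <- which(abs(cor_mat) > threshold & upper.tri(cor_mat), arr.ind = TRUE)
data.frame(var1 = rownames(cor_mat)[idx[, 1]],
           var2 = colnames(cor_mat)[idx[, 2]],
           r    = round(cor_mat[idx], 2))
```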
Population Diversity and Phenotypic Traits
Descriptive statistics (arithmetic mean, standard deviation, minimum and maximum value, and coefficient of variation) were calculated for each trait and each population in order to determine the range of their variation [53,54]. To detect the level of among- and within-population variability, hierarchical analysis of variance was used. The analyzed factors were populations and trees within populations (the tree factor nested inside the population factor).
Correlations between Geographic, Environmental, and Morphometric Data
Simple Mantel tests were performed in order to evaluate the correlation between multicharacter differences among populations [55,56]. Dissimilarity matrices were calculated to test correlations between geographic (latitude and longitude), environmental (ten bioclim variables, altitude, and solar radiation in June), and phenotypic differentiation (all studied leaf variables). Environmental and morphometric distance matrices were assessed as the Euclidean distances between the population means for the first three factors of the principal component analysis. Geographic distances were calculated as the Euclidean distance between the population sites (latitude and longitude). The significance level was assessed after 10,000 permutations, and the Mantel tests were performed with the R package "Vegan" [57].
Population Structure
Population differentiation was identified using multivariate statistical methods [58]. The K-means method was applied to detect phenotypic structure and define the number of K groups that best explained the phenotypic variation of populations [59][60][61][62]. If the proportion of a specific population was equal to or higher than 0.7, that population was assumed to belong to one cluster, and if it was lower than 0.7, that population was considered to be of mixed origin. A dendrogram based on the closest Euclidean distances, using the unweighted pair-group method with arithmetic means (UPGMA), was constructed to examine the structure between the studied populations. In addition, population structure was assessed using principal component (PC) analysis across all individuals and all studied traits. The input data for the multivariate statistical methods were previously standardized, i.e., standardization of traits to zero mean and unit standard deviation was performed prior to each multivariate analysis. All statistical analyses were performed using the software packages STATISTICA version 13 [63] and R v.3.4.3 [64].
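As a hedged sketch of the clustering step just described, the code below runs K-means and builds a UPGMA (average-linkage) dendrogram on standardized population means in R; the input matrix is a random placeholder, and K = 3 is used here only because that is the number of groups ultimately retained in the study.

```r
## Minimal sketch of the population-structure clustering: K-means and UPGMA.
## 'pop_means' (17 populations x 10 leaf traits) is a hypothetical placeholder.
set.seed(1)
pop_means <- matrix(rnorm(17 * 10), nrow = 17,
                    dimnames = list(paste0("Pop", 1:17), NULL))
pop_std <- scale(pop_means)                 # zero mean, unit SD, as in the paper

# K-means with the three groups retained in the study
km <- kmeans(pop_std, centers = 3, nstart = 25)
print(km$cluster)

# UPGMA = average-linkage hierarchical clustering on Euclidean distances
upgma <- hclust(dist(pop_std), method = "average")
plot(upgma, main = "UPGMA dendrogram of population means")
```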
Population Diversity and Phenotypic Traits
The results of the descriptive statistical analysis are shown for each studied population (Table S2) and for all populations together (Table 1). Of all measured leaf traits, leaf area (LA) and petiole length (PL) proved to be the most variable ones, with coefficients of variation above 40%. Conversely, the leaf trait with the lowest variability was the form coefficient (FC), with a coefficient of variation under 20%. In general, leaf traits related to form (FC, LA1, and LA2) showed less variability than those related to leaf size. The most variable was the southernmost population, Konavle, while no population stood out as the least variable one. The population Žminj was distinguished by having the highest values for four out of the ten measured leaf traits (LA, FC, MPW, and LWT). On the other hand, the population Hvar had the smallest values for the majority of measured variables (LA, P, LL, MLW, PMLW, LWT, and PL). The highest and smallest values for the traits concerning leaf angles (LA1 and LA2) were noted for the populations Krka and Muć, respectively. The already mentioned population Hvar, along with other southern island and coastal hinterland populations, was found to have generally smaller leaves. On the other hand, the northern populations located in Istria and the far southern coastal populations tend to have rather larger leaves. Statistically significant differences among and within populations were confirmed for all studied leaf traits at a significance level of p < 0.001. The percentage of variation was significantly higher among the trees within populations than among populations (Figure 2). However, the error component accounted for the greatest part of the total variation for the majority of measured leaf traits.
Figure 2. Partitioning of total variance by hierarchical level for the studied leaf phenotypic traits. Acronyms for leaf morphometric traits as in Table 1.
Climate Differences among Sampling Sites
The ten bioclim variables, solar radiation in June, and altitude varied across the sampling sites (Figure 1B). The principal component analysis revealed distinct climates based on the annual mean temperature (BIO1), seasonality of precipitation (BIO15), precipitation of the driest quarter (BIO17), solar radiation in June (SOLAR6), and mean temperature of the driest quarter (BIO9) along the first principal component. The second principal component was negatively correlated with the annual precipitation (BIO12) and the precipitation of the coldest quarter (BIO19). The first two components accounted for 64.15% of the total variance (Table S3). These two principal components clearly distinguished the two northernmost and the two southernmost populations, which share high annual precipitation values, from the other studied populations.
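A minimal sketch of the principal component analysis used here (for the climate variables above, and likewise for the leaf traits) is given below; variables are standardized to zero mean and unit variance before rotation, and the input matrix is a random placeholder rather than the study's data.

```r
## Minimal sketch of the PC analysis on standardized variables; 'dat' is a
## hypothetical placeholder (rows = populations or trees, columns = variables).
set.seed(1)
dat <- matrix(rnorm(17 * 12), nrow = 17,
              dimnames = list(paste0("Pop", 1:17), paste0("V", 1:12)))

pca <- prcomp(dat, center = TRUE, scale. = TRUE)   # standardize, then rotate
summary(pca)                    # proportion of variance per component
round(pca$rotation[, 1:2], 2)   # loadings: which variables drive PC1 and PC2
head(pca$x[, 1:3])              # scores on the first three components
```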
Correlations between Geographic, Environmental, and Morphometric Data
In order to determine whether the observed phenotypic variability was driven by geographical (IBD) or environmental distances (IBE) between the studied populations, Mantel tests were performed. The results identified significant correlations between the phenotypic, geographical, and environmental distance matrices (Figure 3). Correlations were higher between the phenotypic and environmental distance matrices (r = 0.4134, p = 0.002), and slightly smaller but still statistically significant between the phenotypic and geographical distance matrices (r = 0.2516, p = 0.029). Therefore, leaf phenotypic traits in our populations show greater dependence on environmental than on geographic factors, which explains why some geographically distant populations, such as Pula in the north and the southernmost Konavle, share great similarities despite being more than 440 km apart.
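The IBD/IBE Mantel tests reported above can be sketched in R with the vegan package as follows, using Euclidean distances between population scores on the first three principal components, as described in the Methods; the geographic, environmental, and morphometric tables are random placeholders standing in for the population means.

```r
## Minimal sketch of the IBD/IBE Mantel tests with vegan; 'geo', 'env' and
## 'morph' are hypothetical population-level tables (17 rows each).
library(vegan)

set.seed(1)
geo   <- data.frame(lat = runif(17, 42.5, 45.5), lon = runif(17, 13.5, 18.5))
env   <- as.data.frame(scale(matrix(rnorm(17 * 12), nrow = 17)))  # bioclim, altitude, SOLAR6
morph <- as.data.frame(scale(matrix(rnorm(17 * 10), nrow = 17)))  # ten leaf traits

# Euclidean distances between population scores on the first three PCs
d_env   <- dist(prcomp(env)$x[, 1:3])
d_morph <- dist(prcomp(morph)$x[, 1:3])
d_geo   <- dist(geo)                      # Euclidean distance between sites

mantel(d_morph, d_env, permutations = 10000)   # IBE: phenotype vs. environment
mantel(d_morph, d_geo, permutations = 10000)   # IBD: phenotype vs. geography
```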
Population Structure
The structure of the 17 almond-leaved pear populations was illustrated by the K-means clustering method, which revealed the most probable division of the populations into three groups (Figure 1A). To additionally analyze the relationships between the studied populations, a dendrogram was constructed using the UPGMA method of cluster analysis on the closest Euclidean distances between populations (Figure 4). The results are compliant with the results of the K-means clustering method, and the studied populations were divided into three major clusters. The first cluster (A), which stands out the most, was formed by the northernmost populations Škropeti and Žminj. The second, extensive cluster (B) was formed by the populations Blato na Cetini, Nin, Konavle, Pula, Slano, Sinj, Vir, and Muć. The populations Biokovo, Brač, Pelješac, Hvar, Drniš, Krka, and Obrovac formed the third cluster (C). The populations Biokovo and Brač, with almost identical altitudes, were shown to be the most similar ones, followed by the populations Škropeti and Žminj, which share similar altitudes and almost identical precipitation values. Interestingly, the populations Pula and Konavle, which are located at the opposite ends of the study area, showed great similarity, corresponding to similar environmental conditions. PC analysis showed that 90.0% of the total variation is explained by the first two principal components (Table 2). The first principal component accounts for 54.75% of the overall variance and is highly negatively correlated with the leaf size variables. In other words, individuals with larger leaves and longer petioles are grouped on the left side of the diagram, while those with smaller leaves and shorter petioles are grouped on the right side of the diagram (Figure 5). The second principal component accounts for 35.25% of the overall variance and is negatively correlated with the leaf shape variables, i.e., the leaf angles (LA1 and LA2) and the form coefficient (FC). A significant overlap between the studied populations was observed in the PC diagram.
Population Diversity and Phenotypic Traits
According to the available literature, almond-leaved pear is described as having 2.5-7 cm long and 1-3 cm wide leaves with 1-2 cm long petioles [15,17,21]. Although closer to the lower limit, our results are in accordance with these descriptions, with leaf length and width ranging between 1.3-7.3 cm (average 3.4 cm) and 0.4-3.3 cm (average 1.3 cm), respectively, and petiole length between 0.1-3.6 cm (average 1.1 cm).
Population Diversity and Phenotypic Traits

According to the available literature, almond-leaved pear is described as having leaves 2.5-7 cm long and 1-3 cm wide, with 1-2 cm long petioles [15,17,21]. Although closer to the lower limit, our results are in accordance with these descriptions, with leaf length and width ranging between 1.3-7.3 cm (average 3.4 cm) and 0.4-3.3 cm (average 1.3 cm), respectively, and petiole length between 0.1-3.6 cm (average 1.1 cm). Comparing the morphometric leaf characteristics between P. spinosa and other South-East European and Western Asian narrow-leaved pear species (P. elaeagnifolia, P. nivalis and P. syriaca), almond-leaved pear stands out as the one with the smallest and narrowest leaves [9,15,21,65]. However, all of the above-mentioned species have overlapping phenotypes with regard to leaf dimensions and shape, and the similarity between them is indisputable, especially considering that all of these species are described as highly variable. This was also confirmed in the study conducted by Korotkova et al. [26], where P. spinosa samples were divided into two separate clades. The same authors stated that the taxon concept of P. spinosa as used in Flora Europaea [66] might not define a natural entity. Although our study was not aimed at resolving the phylogenetic relationships between narrow-leaved pear species from southern Europe and western Asia, it is quite evident that those species are not easily differentiated because of high phenotypic plasticity and gradual trait transitions.

Our research showed leaf area and petiole length to be the most variable leaf traits. Such results are in agreement with previously conducted research on other Pyrus [67], Prunus L. [68][69][70][71], and Malus Mill. species [72]. Considering that leaf area plays a major role in effective light capture, water balance, and temperature regulation [73], and that the petiole has an important role in the adjustment of foliage and its inclination angles for optimal light capture [74], such results are expected. In addition, our results confirmed that leaf shape is largely genetically fixed [75,76]. The least variable leaf traits in this study were those related to the shape of the leaf lamina (FC) and the leaf blade base (leaf angles LA1 and LA2). Although almond-leaved pear forms small and isolated populations, they were highly diverse. Based on the analysis of variance, statistically significant differences were confirmed for all measured leaf traits at the inter- and intrapopulation levels. In general, intrapopulation variability was higher than interpopulation variability for all of the measured leaf traits, which indicates high gene flow between populations. Compared to studies conducted on other fruit species, such as Prunus avium (L.) L. (CV 10.6-38.2%) [77], Sorbus domestica L. (CV 7.2-30.7%) [78], and S. torminalis (L.) Crantz (CV 12.7-29.3%) [48], our results exhibit a significantly higher variability of leaf traits. This holds even in comparison with other Pyrus species, such as P. mamorensis Trab. (CV 15.3-30.2%) [79], P. communis L. (CV 22.3-26.5%), P. pyrifolia (Burm.f.) Nakai (CV 3.8-15.8%), and P. syriaca (CV 11.3-41.5%) [80]. The high phenotypic variability of almond-leaved pear leaves probably results from a combination of factors related to the diversity of environmental conditions and genotypic variability [81].
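The variability summaries discussed above (coefficients of variation per trait and the split between inter- and intrapopulation variability) can be illustrated with a short computation. This is a rough sketch on simulated placeholder data, not the analysis of variance actually used in the study, and the population names and trait columns are assumed for illustration only.

```python
# Rough sketch: coefficient of variation (CV) per leaf trait and a crude
# split of variance into inter- and intrapopulation components.
# The leaf-level data are simulated placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
pops = np.repeat(["Biokovo", "Brac", "Zminj"], 40)        # 40 leaves per population
leaves = pd.DataFrame({
    "population": pops,
    "leaf_area": rng.normal(np.repeat([3.0, 3.2, 4.0], 40), 0.8),
    "petiole_len": rng.normal(np.repeat([0.9, 1.0, 1.3], 40), 0.3),
})

traits = ["leaf_area", "petiole_len"]

# Coefficient of variation (%) per trait over all measured leaves.
cv = leaves[traits].std() / leaves[traits].mean() * 100
print("CV (%):")
print(cv.round(1))

# Crude split: variance of population means (interpopulation) vs. mean
# within-population variance (intrapopulation).
for t in traits:
    grouped = leaves.groupby("population")[t]
    inter = grouped.mean().var()
    intra = grouped.var().mean()
    print(f"{t}: intrapopulation share = {100 * intra / (inter + intra):.0f}%")
```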
The ability of a particular genotype to exhibit different phenotypic characteristics under diverse environmental conditions is usually described as phenotypic plasticity [82,83]. Phenotypic plasticity enables species to survive in a wide range of environmental conditions and reduces the risk of species loss due to climate change [84]. Such adaptation to changing environmental conditions is crucial for a species' persistence [85] and could have a decisive role in the future global warming context [84]. Since heterogeneous species, such as pears, respond to climate change within their natural range, they are directly influenced by phenomena such as local adaptation, intra-specific diversity, and phenotypic plasticity [84], which contribute to genetic variation between populations [86]. This might explain the existing variability between our populations, as they grow under different bioclimatic conditions and must adapt in order to fully utilize the available resources.

Correlations between Geographic, Environmental, and Morphometric Data

A significant relationship was found both between the geographical and phenotypic distance matrices (isolation by distance, IBD) and between the environmental and phenotypic distance matrices (isolation by environment, IBE). The isolation by distance (IBD) model indicates that genetic differentiation between populations increases with geographical distance [87]. On the other hand, isolation by environment (IBE) explains genetic differentiation through environmental differences between populations, where phenotypic and environmental distances are positively correlated, independent of geographical distance [88]. As a consequence, populations often diverge in ways that are crucial to their interaction with the landscape, including dispersal patterns, habitat preference, and adaptation to different environmental conditions, all of which may influence patterns of gene flow. Today, ecologically driven differentiation is progressively being acknowledged as the main driver of population diversification [89,90]. Our results show that the predominant pattern of phenotypic differentiation among P. spinosa populations is isolation by environment (IBE); that is, environmental effects play an important role in shaping population diversity and in producing phenotypic differences between environmentally heterogeneous populations. Overall, our results suggest that a reduction in leaf size is a likely outcome of almond-leaved pear growth in an environment with low precipitation during the vegetation season accompanied by higher temperatures and solar radiation. However, our results also confirmed that phenotypic differentiation is correlated with both geographical distances and environmental differences between the studied populations, that these adaptive and neutral processes are not mutually exclusive, and that natural populations are expected to experience them in combination [91,92].
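Relationships of this kind (IBD and IBE) are commonly quantified by correlating distance matrices with a Mantel-type permutation test. The sketch below is a schematic illustration only: the 4x4 matrices are invented placeholders, and the hand-rolled function is a bare-bones Mantel test rather than the exact procedure or software used in the study.

```python
# Bare-bones Mantel test: correlate the upper triangles of two distance
# matrices and assess significance by permuting one matrix. All matrices
# below are invented placeholders (phenotypic, geographic, environmental).
import numpy as np

def mantel(d1, d2, n_perm=999, seed=0):
    iu = np.triu_indices_from(d1, k=1)
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    rng = np.random.default_rng(seed)
    n, count = d1.shape[0], 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        r = np.corrcoef(d1[p][:, p][iu], d2[iu])[0, 1]
        if abs(r) >= abs(r_obs):
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

pheno = np.array([[0, 1, 4, 5], [1, 0, 3, 4], [4, 3, 0, 2], [5, 4, 2, 0.0]])
geo   = np.array([[0, 2, 5, 6], [2, 0, 4, 5], [5, 4, 0, 1], [6, 5, 1, 0.0]])
env   = np.array([[0, 1, 3, 5], [1, 0, 2, 4], [3, 2, 0, 2], [5, 4, 2, 0.0]])

print("IBD (phenotypic vs. geographic):    r = %.2f, p = %.3f" % mantel(pheno, geo))
print("IBE (phenotypic vs. environmental): r = %.2f, p = %.3f" % mantel(pheno, env))
```

With only four populations the permutation p-values are not meaningful; the example is meant to show the mechanics, not to reproduce the study's statistics.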
Population Structure

The structure of the researched populations followed the IBD and IBE patterns to some extent. We obtained three clusters of populations. Interestingly, the first cluster consisted of the two northernmost and, in part, two of the southernmost populations, which share high annual precipitation values. The other two clusters of populations were highly intermixed and did not appear to follow any clear geographical or environmental pattern. Almond-leaved pear is an insect-pollinated and animal-dispersed species, which supports the assumption that its seeds are dispersed in all possible directions [93]. As there are no major geographical barriers between these populations, animals and birds that feed on almond-leaved pear fruit spread genetic material into other populations, even those on islands, resulting in the mixed ancestry of populations. In addition, we cannot exclude the possibility that, despite the heterogeneous Mediterranean landscape and the low-density populations of P. spinosa in the study area, most of the adult trees formed part of an extensive network of pollen flow spanning discontinuous, widely spaced, open bush associations. This could also explain the similarity between geographically close populations. In any case, we assume that the selective pressure of the environment is strong for almond-leaved pear and likely explains the differences between the studied populations along the precipitation gradient. We suggest that the adaptability of P. spinosa to the local climate shapes its phenotypic traits.

Conclusions and Practical Implications

Almond-leaved pear showed great phenotypic diversity within and among its natural populations distributed along the eastern Adriatic coast. The greatest diversity was found among phenotypic traits related to leaf size and water use efficiency, such as leaf area and petiole length, while the mostly genetically driven leaf shape indicators were the least variable. In addition, we revealed that both geographical and environmental factors play an important role in the patterns of phenotypic variation between almond-leaved pear populations. The predominant pattern of phenotypic differentiation among P. spinosa populations was isolation by environment (IBE), indicating that populations from similar environmental conditions exhibit similar phenotypes. This was reflected in the population structure, where environmentally similar, as well as geographically close, populations were classified into three clusters with relatively heterogeneous ancestry. The problematic taxonomy of this species requires comprehensive molecular and morphological research throughout its natural range. For now, this question remains open, which leaves room for future research. The almond-leaved pear is of great importance for biodiversity in the xerophytic and thermophilic habitats of the Mediterranean area. Like other pears, the almond-leaved pear has high-quality wood that can be used for carving small objects; on more favorable sites, it can reach larger dimensions, so the wood can also be used for furniture or as high-quality firewood. It has edible fruits that can be dried and used for tea or processed as desired. Various birds and mammals, such as martens, badgers, and foxes, feed on its fruit, which contributes to the normal functioning of the ecosystem. In addition, its currently most important use is as a rootstock for grafting various pear cultivars, especially in Mediterranean areas with calcareous soils, where drought resistance is a highly desirable trait. While collecting samples for this study, we noticed that the species is highly fire-resistant and acts as a passive pyrophyte, indicating its potential use in the afforestation of burned areas and other areas exposed to such risks. Consequently, the almond-leaved pear can be a useful species for forestry in the Mediterranean area, especially taking into account the increasingly pronounced climate change to which the area is highly exposed.
Decreasing amounts of precipitation and increasing temperatures will force foresters to resort to new solutions in afforestation and forest management. In addition, through selection and crossbreeding, cultivars with desirable fruit traits could be obtained and grown commercially in such areas. Its ornamental value is also worth mentioning, especially in autumn when the fruits are ripening.
Women Working in Nonstandard Forms of Employment: Meeting Employee Interests

Purpose: The main aim of this article was to present characteristics of the employment of women working in nonstandard forms of employment in Poland, with respect to employee interests.

Design/Methodology/Approach: Based on a narrative literature review, hypotheses and research questions were formulated and the meaning of key categories was clarified. Data for the analysis were derived from quantitative research conducted using the CAWI technique. The survey was conducted on a representative sample of 1,000 working Poles. The analysis of the results was performed using the methods of descriptive statistics and statistical tests to check the correlations between variables.

Findings: The research results did not confirm the beneficial effect of nonstandard forms of employment on the labor activation of women. Women do not work in these forms more often than men, but they are more frequently employed in them involuntarily. The possibility of choosing the form of employment is linked to a better meeting of the interests of working women. In the standard forms, women's interests that are satisfied include employment stability, protection and access to social benefits, health services at the employer's expense, good occupational safety and health, training at the employer's expense, and assistance during redundancy. In nonstandard forms, these mainly include a good atmosphere and the possibility to influence working time organization.

Practical Implications: The results of the research can be used in making policy decisions concerning the applicability and share of particular forms of employment in the labor market. From the perspective of usefulness to employers, important results were obtained concerning the possibilities of choosing the form of employment and meeting specific interests of women.

Originality/Value: Most previous analyses have been based on secondary data from labor markets; in this project, a survey questionnaire was used. Original and representative research results were obtained on women's work in nonstandard forms of employment.

Introduction

Nonstandard forms of employment have been the subject of scientific explorations and research for many years. Their presence in European labor markets has become established, and the economic slowdown associated with the COVID-19 pandemic may further induce employers to use nonstandard forms of employment to provide greater flexibility (Bąk-Grabowska and Piwowar-Sulej, 2020). Furthermore, the benefits of nonstandard forms of employment for employees are not presented unequivocally in the literature. One barrier here is the differences in how nonstandard forms of employment are defined and the diversity of these forms across individual countries. An important factor considered during the analysis of the benefits of nonstandard forms of employment is the gender of employees. The gender gap indicates that women work in nonstandard forms of employment more often than men. In addition, these forms are considered to have the characteristics of precarious employment, indicating a low degree of meeting employee interests (Jacquemond and Breau, 2015).
Furthermore, the benefits of the use of nonstandard forms of employment for women are highlighted: the possibility of combining work with family life more flexibly, the creation of conditions for the labor activation of women, and, in some nonstandard forms of employment, such as self-employment, the possibility of meeting employees' business ambitions and earning higher incomes (Tonkikh et al., 2019; Buribayev and Khamzina, 2019; Farné and Vergara, 2015). Patterns of the behavior of men and women in the labor market are not just a reflection of the economic situation, the cultural context, and individual beliefs, but also a reflection of the existing institutional arrangements (Dobrotić, 2015). This justifies conducting research in a national context. Poland is among the European Union countries (next to Spain and Portugal) with the highest percentage of nonstandard forms used in employment (Eurostat, 2021). The range of these forms is also very wide, including contracts based on civil law (e.g., contracts of mandate and contracts for specific work), employment through temporary work agencies, self-employment, and unregistered employment. These conditions make Poland an appropriate setting for research on the use of nonstandard forms of employment. The aim of this article is to show characteristics of the employment of women who are working in nonstandard forms in Poland and to answer the questions of whether women work in nonstandard forms of employment more often than men and whether they work in these forms involuntarily more often. To characterize employment in nonstandard forms, reference was made to the category of employee interests. Nineteen employee interests were selected, and the degree of realization of these interests was evaluated by the respondents. The survey was conducted on a representative sample of 1,000 respondents (working Poles), including 441 women. Empirical data were used to test hypotheses, with their content determined using a narrative literature review.

A Narrative Literature Review: Assumptions

A narrative literature review was used in this study. This method was chosen to show the current state of knowledge on the use of different forms of employment considering the gender criterion. The choice of this method was dictated mainly by the complexity of the phenomenon studied and the methodological diversity in the approach to its exploration. A narrative literature review seeks to combine different studies to reinterpret them and to establish mutual relationships, allowing for the definition of the context of the research problem (Baumeister and Leary, 1997). It should be emphasized that this method contains a subjective element that gives the researcher greater latitude in identifying the studies to be reviewed and in discussing the results. The researcher decides which studies should be included and which should be excluded from further analysis. The selection procedure and the choices made, however, must be established and explained (Green et al., 2006). This means identifying the subject of the study, the database, and the time frame adopted in the literature review. In the present study, forms of employment with respect to the gender criterion were adopted as the subject of the study. The publications used in the analysis were searched for in the ISI Web of Science database. In the main stage, the literature review was limited to scientific publications published between 2010 and 2020.
To identify potentially relevant research articles, the database was searched using the following keywords: forms of employment, employment forms, gender, and women. This was followed by content analysis and verification of the usefulness of the results obtained. The next step was to extend the scope of the analyzed publications with scientific papers published before 2010. Publications of significant importance, to which the authors of the post-2010 publications referred when analyzing the research problems and the components of the research hypotheses, were selected. At its core, a narrative literature review, referring to both empirical and theoretical studies, is a technique for building theory and generating hypotheses (van Knippenberg, 2012). In this paper, the review was primarily used to formulate research hypotheses.

Working in Nonstandard Forms of Employment According to Gender: Results of a Literature Review

The vast majority of the analyses on the use of nonstandard forms of employment have indicated the importance of the gender criterion. Women work in nonstandard forms of employment more often than men, and these forms take on a precarious nature more often for women than for men (Benach et al., 2016; Acker, 1999; Tapia et al., 2017; Williamson and Baird, 2014; Bondy, 2018; Klimenko et al., 2017). This phenomenon is part of the gender gap problem. It is considered that the gender gap favors men if we consider the rate of participation in the labor market, wage discrepancies, or the presence in senior positions on company boards; conversely, the gender gap favors women when we address atypical forms of employment (part-time work or fixed-term contracts) and precarious jobs (Niţă, 2019; Bieszk-Stolorz, 2020). The association of precarious employment with gender is also confirmed by a study of the spatial dimensions of precarious employment (Jacquemond and Breau, 2015). The overrepresentation of women in nonstandard forms of employment is accentuated when additional factors are present, such as being a migrant. Research in Canada showed a higher concentration of precarious forms among migrants (Ali and Newbold, 2020). Furthermore, research in Australia found that within a group disadvantaged in the labor market (migrants), women are more likely to be relegated to insecure forms of employment, and they also more often give up work to care for their families (Ressia et al., 2017). Among those employed in agriculture, women also work significantly more often in precarious forms of employment, including unregistered employment (Gormus, 2019). Research conducted during the economic crisis produced interesting conclusions. Research in Cuba showed that during the crisis, highly skilled women accepted jobs in nonstandard forms, relatively often performing work that was not in line with their qualifications, including paid work performed at home in unregistered employment (Jerónimo Kersh, 2018). In the United States and Canada, however, the nonstandard employment of women increasingly manifests itself, although not on a mass scale, in the shift to craft and artistic work performed at home, which is sometimes associated with blogging (craft blogging, especially domestic arts and crafts) (Black et al., 2019). There is also a noticeable trend towards an increasing share of women in self-employment (Nina-Pazarzi and Giannacourou, 2005). International studies have found that nowadays women are also more often self-employed than men (Bögenhold, 2019).
Therefore, it can be concluded that women work more often than men in nonstandard forms and that research on the relationship between work in these forms and meeting employee interests considering the gender criterion should be conducted. One of the basic issues of women's working in nonstandard forms of employment is whether making the labor market more flexible by introducing such forms is beneficial for women. The literature analysis indicates that the answer to this question depends on the context, including national determinants. In some countries, the labor activation of women is still at such a low level that the use of nonstandard forms of employment is seen primarily as an opportunity for women to attain gainful employment. In Tunisia, which, compared to other Arab countries, is characterized by a relatively high level of respect for women's rights, the majority of women are still outside the labor market, unemployed, or work in precarious forms of employment. The need for gender-based policies with respect to women's economic participation and rights is strongly emphasized (Moghadam, 2019). Kazakhstan is described as a country at the beginning of the development of a legislative and organizational framework for gender equality in the workplace. Attention is also drawn to the paradox that currently, the social expectations in Kazakhstan that women would work are basically the same as those for men. At the same time, however, a stereotype of male privilege persists in family and domestic relationships. In this context, expanding the use of nonstandard forms of employment is seen by the authors of the analysis as potentially beneficial for the situation of women in the labor market (Buribayev and Khamzina, 2019). Similar conclusions were drawn from the research conducted in Russia on women's employment in remote forms of employment. It was concluded that remote working can be seen as a factor in improving women's quality of life in sociocultural, familial, parental and reproductive terms. An assessment of women's attitudes towards remote working revealed that the majority of respondents perceived their conditions as positive. The high willingness of female job seekers to find remote employment was empirically confirmed. The authors found this form of employment to be beneficial in promoting female employment (Tonkikh et al., 2019). Conclusions regarding the positive role of flexible forms of employment in the activation of women in the Russian labor market were also presented by other researchers who indicated that such solutions are particularly beneficial for activating rural female residents and women raising children (Blinova et al., 2014). Additionally, in Hungary, the promotion of female employment occurs through increasing the use of flexible forms of employment. The research stated that these solutions should be family-friendly, particularly for women raising children. The authors further argue that the introduction of such solutions increases the adaptability of companies and that there is a need for more knowledge of managers on this topic (Essosy and Vinkoczi, 2018). Therefore, in countries where the degree of labor activation of women is relatively low and gender equality at the workplace still needs to be significantly improved, nonstandard forms of employment tend to be perceived as beneficial for women. Furthermore, this perception is largely related to the need for women to combine their work with family life, including childcare. 
But the acceptance of women's working roles is increasing (Motiejūnaitė, 2010). A number of recent research results, however, show that the gender perspective presented above is not relevant for countries where the presence of women in the labor market is well established and the gender gap is not so pronounced. Results show that work-life conflict is experienced by both women and men. Both women and men feel they have to adapt their family lives to their professional needs. According to research, this transformation mainly concerns middle-class families (Rincón and Martínez, 2020). In-depth research on preferences for combining onsite and home-based work revealed that, to a relatively large extent, these preferences are independent of gender and the caregiving factor. These conclusions were drawn, among others, from a study involving 187 participants. A preference for approximately two remote working days per week was revealed. No significant differences were observed between men, women, parents, nonparents, fathers, and mothers in their choice of the number of days of discretionary remote working (Sherman, 2020). Therefore, creating conditions that improve work-life balance should become the standard regardless of the employee's gender. Furthermore, research on the employment of women in the IT sector showed that women are highly career-oriented, involved in their careers, and experience fewer conflicts related to work-life balance than expected. According to the authors, to overcome the problem of the low percentage of women working in the IT sector, it is necessary not to offer them flexible forms of employment but to take measures to combat gender and age stereotypes in the workplace (Lamolla and González Ramos, 2020). Similarly, research in higher education in the UK showed a rather stigmatizing role of nonstandard forms of employment. It was shown that the use of such forms of employment, especially when protracted, can suppress women's leadership aspirations due to the lack of career development opportunities and can lead to a sense of alienation from the professional community and even personal difficulties such as feelings of isolation and poor self-esteem. It was concluded that to develop academic careers and leadership, women usually need to first gain permanence in the organization, guaranteed by a long-term form of employment (Vicary and Jones, 2017). In the context of these research results, presenting nonstandard forms of employment as a solution for women that facilitates their caretaking roles may be perceived as unfavorable and stigmatizing for women, especially those who plan their careers within industries with attractive jobs that offer development opportunities and relatively high salaries. The perception of the advantages of nonstandard forms in the context of the gender criterion is currently ambiguous and evolving. It is stated in the literature that the gender gap is a phenomenon that has a territorial, geographical, historical, economic, and cultural context, so there are differences from country to country (Niţă, 2019). Perceptions of the benefits of nonstandard forms of employment are also contextual. An important component of this context is the role of women in individual cultures, the relationship between a woman's role in family and professional life, and how this relationship compares to that observed for men.
In countries with greater gender equality, women's work is seen more as providing career development and not just as having a job in general. Further results of the review address the question of what motives, more precisely, induce women to accept nonstandard forms of employment and which of their interests such work may serve. It is recognized that, in relation to work, women manifest certain expectations and needs that may change over the life course as women develop in professional and personal terms. The results of research conducted on the criteria for choosing the form of employment showed that the most significant criteria are the following: (1) free time, (2) career opportunities, (3) monthly salary, (4) income stability, (5) a reasonable workday, and (6) the ability to perform work outside the home. The results also showed the high importance of work in women's lives. It was concluded that women do not choose between working and not working but between alternative forms of employment. It is worth noting, however, that the study assumed a priori that women's careers should be balanced with family life and that women are relatively free to choose their form of employment (Busygina et al., 2019). Some research results refer to more precisely defined forms of employment, e.g., self-employment. A study in Austria, where one-person enterprises already account for more than 50% of all enterprises, identified primary motives for self-employment such as self-realization and working without a hierarchy, and, in part, the lack of opportunities for standard forms of employment (Bögenhold and Klinglmair, 2017). Examining macrolevel patterns of self-employment shows the diversity of this form of employment. Self-employment is a heterogeneous category that can expose people to the risk of precariousness and poverty, but it can also be a form of satisfying the interests of individuals, including economic interests, and contribute to job creation and economic growth for society (Bögenhold, 2019). A study conducted in Colombia showed that self-employed women reported improvements in the quality of employment evaluated by parameters such as: (1) social security coverage, (2) earnings, (3) social dialogue, (4) job satisfaction, and (5) reconciling work and family life. Earnings were identified as the factor most relevant to improving the quality of employment (Farné and Vergara, 2015). Many studies, however, reveal shortcomings in the realization of workers' interests in the case of nonstandard forms of employment. One such problem is the paucity of information, which, as shown by research conducted in Germany, has a negative impact on the social climate and the possibility of employee development. The research highlighted this problem especially in relation to the phenomenon of performing several jobs (Kottwitz et al., 2017). Workers employed in nonstandard forms located in peripheral spheres of employment may not be covered by activities aimed at developing their competencies and careers. This relationship was illustrated by the example of agency workers (temporary work agency workers). The limited job security accompanying this form of employment means that both companies and agencies do not invest significantly in the development of employees' competences (Håkansson and Isidorsson, 2015). One of the deferred effects of working in nonstandard forms may be lower retirement benefits.
Since these forms have been present in labor markets for a long time, this finding has been confirmed empirically. Research in Germany showed that two factors are particularly important in reducing the level of future pensions: late entry into the labor market and working in diversified and unstable employment (Tophoven and Tisch, 2016). Commonly cited disadvantages associated with nonstandard forms of employment include the following: high levels of job insecurity, unpredictable working hours, low pay, and limited opportunities for career progression (Kalleberg et al., 2000;McGovern et al., 2004). From this perspective, employees are considered to accept nonstandard forms of employment because their choices are limited (Buddelmeyer, McVicar, and Wooden 2015). Therefore, when analyzing the degree to which the interests of employees working in nonstandard forms are met, involuntary nonstandard employment (INE) seems to be an important category. When examining this category in an international context, the INE index was assumed to reflect poor working conditions in nonstandard forms of employment. The results of the analyses showed significant differences between countries in the prevalence of INE, which is the highest in Spain, Portugal, and Poland. INE is typically lower in countries with Anglo-Saxon and Nordic models of a welfare state. Econometric analyses have further shown that women are more exposed to work in INE (Green and Livanos, 2017). Some authors stressed the need for greater protection of the interests of the employees working in nonstandard forms and the need to study the effect of these forms on employee well-being. From this perspective, nonstandard forms of employment are perceived as the cause of a lack of job security and a barrier to ensuring social justice due to the decreasing role of trade unions. A study demonstrated the correlation of low job security associated with the presence of nonstandard forms of employment with the level of unionization (Essers, 2017). The high percentage of nonstandard forms of employment in the labor market is sometimes even indicated as a determinant of a crisis in employment (Klimenko et al., 2017). It is recommended that solutions be developed through social dialogue to ensure the protection of all labor market participants and reduce gender inequalities (Novikova et al., 2019;Benach et al., 2016). Recommendations for further research in this area emphasize the importance of the gender factor, especially in conjunction with other variables (Benach et al., 2016). Basic Categories: A Definitional Approach Nonstandard forms of employment: In many studies, nonstandard forms of employment are not precisely defined, and the characteristics of precarious employment are assigned to them a priori. The definitions of precarious employment accentuate the negative features of this employment, such as uncertainty and instability, low wages and economic deprivation, limited workplace rights and social protection, and powerlessness to exercise legally granted workplace rights (Standing, 2011;Benach et al., 2014). The assumptions of this project included a departure from assuming a negative perception of nonstandard forms of employment. Therefore, it was necessary to adopt objective criteria for dividing forms of employment into standard and nonstandard. This was based on the criterion of the type of contract/agreement concluded with the worker. 
If it is an employment contract based on labor law regulations concluded directly with the worker, the form is considered standard. Nonstandard forms include self-employment, contracts based on civil law such as contracts of mandate or contracts for specific work (used in some countries), agency employment and some types of outsourcing (agency or outsourced workers), and unregistered employment (Leighton et al., 2007; Cappeli and Keller, 2013). This approach to nonstandard forms of employment does not force the assumption that every form of nonstandard employment must be precarious. This seems to be an important assumption contributing to the objectivity of the research.

Employee interests: The term employee interests is not clearly defined in the literature. Interests are most often described as the manifestation of the advantages (Cambridge Dictionary) and expectations (Lotko et al., 2016) of employees in relation to the performed work. It is noted that employees who have expectations strive to fulfill them, and if this does not happen, they eventually seek work with another employer. Explorations of employee interests are conducted in a rather scattered way; they concern, for example, economic or professional aspects (Carter, 1940). The problem of interests viewed as employee expectations has been addressed in reference to research on motivation to work (Lobanova, 2015), the possibility of activating this motivation by meeting employee interests, and the relationships between interests and future job search, choice of education, or the currently performed job (Harackiewicz and Hulleman, 2010; Hidi and Renninger, 2006). Consideration is also given to job expectations and satisfaction, or interests are combined with motivation theory, stakeholder theory, and competence theory (Haidong and Yu-jun, 2006). There are also views that the advantages and expectations of employees are not static or stable because they depend on many variables, such as personality, experience, culture, and peer influence (Oginni et al., 2018). Taken together, this means that there is no clearly defined list of employee interests that could serve as the basis for research. Individual authors have created such lists mainly for their own research, e.g., the Strong Interest Inventory in the context of career development (Staggs, 2004; Katz et al., 1999; Prince, 1998). In addition, there are statements in the literature that it is critical to be aware of the expectations of current employees; Maxwell and Knox (2009) state that this key issue is too often overlooked. There are no clear indications as to the rationale and means of taking measures to make it easier for employers to meet employees halfway and respect their interests related to performing the work. Therefore, one aim of the present study is to provide arguments in this regard. Bearing in mind the abovementioned difficulties in creating a closed list of employee interests and the indications of authors who state that such a list is subject to constant change (Baker, 1996), the list used for the purposes of the research presented in this study was created by referencing the studies available in the literature.
References were made to the statements that the essential expectations of employees include adequate remuneration, training and development opportunities, promotion opportunities, safe and healthy working conditions, welfare, and the quality of professional life (Oginni et al., 2018;Gableta and Bodak, 2012;Zhong et al., 2017). Therefore, the authors' questionnaire for the recognition of employee interests in different forms of employment was developed, using, among others, the suggestions indicated by the abovementioned authors, taking into account the changes in the economic reality. Meeting the Interests of Women Working in Nonstandard Forms: Research Hypotheses The dominant findings in the analyses are that women work in nonstandard forms of employment more often than men. This position, however, should be verified by considering the current national circumstances. H1: Women now work in nonstandard forms of employment more often than men. Importantly, it is emphasized that women work in involuntary nonstandard forms of employment more often than men. The verification of this position seems to be particularly important due to the assumption formulated in the related literature that the lack of choice of the form of employment coexists with lower quality jobs and a lower level of meeting employee interests. H2: Women work involuntarily in nonstandard forms of employment more often than men. H3: The perception of the degree of meeting employee interests by women working in nonstandard forms of employment depends on the opportunity to choose a specific form. The analysis of previous research results further indicates that meeting employee interests may depend on the form of nonstandard employment. Differences in the perception of motivations and meeting employee interests were identified, among others, within forms of nonstandard employment such as self-employment or working for a temporary employment agency. H4: The perception of the degree of meeting employee interests by women working in nonstandard forms of employment depends on the type of nonstandard employment. The analysis shows that the phenomenon of meeting the interests of women working in nonstandard forms is not perceived unequivocally and is evolving. Increasingly, it is not so much a question of providing women with opportunities to accept work that can be combined with family life and caring responsibilities but of ensuring that women have attractive jobs and career opportunities. This is the rationale for the explorations of what interests of women can currently be met by working in nonstandard forms of employment. Such research should be conducted without assuming a priori a positive or a negative role of nonstandard forms of employment. It is also worth noting that a significant part of the analyses to date have been based on data from available databases and labor market surveys (international or national). Most studies have failed to include in-depth job characterization. In this context, research designed to collect data directly from employees, conducted on large representative samples and identifying the degree to which individual employee interests are met seems valuable. Material and Methods The paper presents the results of quantitative research conducted using the CAWI technique on a sample of 1,000 working Poles. The survey was conducted in late 2019 and early 2020. 
A ratio developed based on BAEL (Polish: Badanie Aktywności Ekonomicznej Ludności, meaning Labor Force Survey) data was used to present the distribution of the sample (see Table 1). The aim of the BAEL is to obtain information on the size and structure of labor resources. The survey was conducted by Statistics Poland (in accordance with the methodology of the International Labour Organization). The study was based on stratified random sampling, which allowed for inference within individual categories (strata). The proportional stratified sampling method was used. Table 2 shows the distribution of the research sample together with the ratio used for its preparation. The adopted methodology, including the design of the research sample, allowed us to conduct representative research with respect to the gender and age of working Poles. The study was conducted on a sample of n=1,000, with the sample size calculated using the following parameters: population of 15,828,000, fraction size 0.1, and confidence level 0.95. The analysis of the results was performed using the methods of descriptive statistics (mean values and distribution of responses) and statistical tests to check the correlations between variables. A chi-squared test was calculated for the categorical variables. Nonparametric tests were used to examine the correlations between the assessment of employee interests and respondent characteristics because the assumption of group equality was not met. Depending on the number of groups, the U statistic of the Mann-Whitney test (two groups) was calculated or the Kruskal-Wallis test (three groups) was used. For hypothesis testing, a two-sided asymptotic significance level was set at p<0.05. The calculations were performed using the PQ Stat 18.4 software.

H1: Women now work in nonstandard forms of employment more often than men. The first hypothesis was verified for the entire study sample (n=1,000). As part of the first research hypothesis, working in nonstandard forms of employment was compared between men and women (Table 3). Statistical analysis of the frequency of employment in nonstandard forms and of the frequency of this type of employment according to gender revealed no significant differences (p>0.05): χ²(1) = 0.01, p = 0.97. This means that current work (at the time of the survey) in nonstandard forms of employment does not depend on gender. The first hypothesis was rejected.

H2: Women work involuntarily in nonstandard forms of employment more often than men. The second hypothesis was tested among men and women working in nonstandard forms of employment (n=191). Considering the possibility of choosing the form of employment, a distinction was made between people who voluntarily accepted employment in nonstandard forms and those who did not have such an opportunity (see Table 4). Statistical analysis of the voluntary choice of employment in nonstandard forms and of the frequency of this type of employment according to gender revealed a statistically significant relationship at the adopted significance level: χ²(1) = 3.73, p = 0.05. This means that the voluntary choice of working in nonstandard forms of employment is related to gender. The second hypothesis was confirmed, indicating that women accepting employment in nonstandard forms have less influence on the choice of this form of employment than men.
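The two chi-squared tests above can be illustrated with a short computation. In the sketch below, the H1 contingency table is reconstructed from the sample sizes reported in the text (1,000 respondents, 441 women, 191 people in nonstandard forms, 84 of them women), while the voluntary/involuntary split behind H2 is not reported here, so those counts are invented placeholders; the printed statistics therefore only approximate the reported χ²(1) = 0.01 (H1) and will not match χ²(1) = 3.73 (H2) exactly.

```python
# Sketch of the chi-squared tests for H1 and H2 (Pearson chi-square,
# no continuity correction assumed).
from scipy.stats import chi2_contingency

# H1 -- rows: women, men; columns: standard, nonstandard form of employment.
# Counts derived from the sample sizes reported in the text.
h1_table = [[357, 84],    # women: 441 total, 84 in nonstandard forms
            [452, 107]]   # men:   559 total, 107 in nonstandard forms
chi2, p, dof, _ = chi2_contingency(h1_table, correction=False)
print(f"H1: chi2({dof}) = {chi2:.2f}, p = {p:.2f}")

# H2 -- rows: women, men; columns: voluntary, involuntary nonstandard employment.
# These counts are placeholders (the paper's Table 4 is not reproduced here).
h2_table = [[48, 36],
            [75, 32]]
chi2, p, dof, _ = chi2_contingency(h2_table, correction=False)
print(f"H2: chi2({dof}) = {chi2:.2f}, p = {p:.2f}")
```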
H3: The perception of the degree of meeting employee interests by women working in nonstandard forms of employment depends on the opportunity to choose a specific form. The third hypothesis was tested in a group of women working in nonstandard forms of employment (n=84). The statistical analysis of the relationship between the assessment of the possibility of meeting the interests of employees working in nonstandard forms of employment and the possibility of choosing a particular form revealed no significant relationships (p>0.05). The medians in both groups were Me=3. The result of the Mann-Whitney test (two groups) indicated that the observed differences in average scores were not statistically significant, with U = 581.5 and p = 0.29. This means that the assessment of the possibility of meeting employee interests does not depend on the voluntary choice of nonstandard forms of employment. The third hypothesis was rejected based on the statistical analysis. The possibility of choosing the form of nonstandard employment does not translate into the assessment of the degree of meeting employee interests, as reflected in the mean score (see Table 5). The mean assessment of the possibility of meeting employee interests in nonstandard forms of employment according to the voluntary choice of employment is 2.9. Respondents employed voluntarily in nonstandard forms rate the opportunity to meet the following interests significantly higher on average (Table 6): assistance from trade unions/workers' councils (p<0.01), influence on the choice of colleagues (p<0.01), influence on the choice of remuneration components (p<0.05), participation in management (codeciding) (p<0.05), and participation in management (consultation) (p=0.05).

H4: The perception of the degree of meeting employee interests by women working in nonstandard forms of employment depends on the type of nonstandard employment. The fourth hypothesis was tested in a group of women working in nonstandard forms of employment (n=84). To test this hypothesis, nonstandard forms of employment were divided into three groups: CM - contract of mandate or another form of contract based on the civil law code, SE - self-employed, and OT - other (which mainly includes work through temporary employment agencies and unregistered employment). The statistical analysis of the relationship between the assessment of the possibility of meeting the interests of employees working in nonstandard forms of employment and the form of employment (divided into three groups) revealed no significant relationships (p>0.05). The results of the Kruskal-Wallis test indicated no differences between the groups, with H(2) = 0.71 and p = 0.70. This means that the assessment of the possibility of meeting employee interests does not depend on the form of nonstandard employment. The fourth hypothesis was rejected based on the statistical analysis. This leads to the conclusion that the type of nonstandard form of employment does not affect the assessment of meeting employee interests among the women surveyed, which is reflected in the mean assessment (Table 7). The mean assessment of the possibility of meeting employee interests in nonstandard forms of employment according to the form of nonstandard employment was at a similar level of 2.9. When individual employee interests were considered, differences in the average assessment of the possibility of meeting them depending on the form of employment were observed for two aspects (Table 8).
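The H3 and H4 tests above are standard nonparametric comparisons and can be sketched as follows. The ratings, the voluntary/involuntary split, and the CM/SE/OT assignments below are simulated placeholders, so the output will not reproduce the reported U = 581.5 (p = 0.29) and H(2) = 0.71 (p = 0.70).

```python
# Sketch of the Mann-Whitney U test (H3) and Kruskal-Wallis test (H4) on
# simulated 1-5 ratings for 84 women employed in nonstandard forms.
import numpy as np
from scipy.stats import mannwhitneyu, kruskal

rng = np.random.default_rng(1)
ratings = rng.integers(1, 6, size=84)            # overall assessment of meeting interests
voluntary = rng.random(84) < 0.6                 # True = form of employment chosen voluntarily
form = rng.choice(["CM", "SE", "OT"], size=84)   # type of nonstandard employment

# H3: does the assessment differ between voluntary and involuntary employment?
u, p = mannwhitneyu(ratings[voluntary], ratings[~voluntary], alternative="two-sided")
print(f"H3: U = {u:.1f}, p = {p:.2f}")

# H4: does the assessment differ between the CM, SE, and OT groups?
h, p = kruskal(*(ratings[form == g] for g in ["CM", "SE", "OT"]))
print(f"H4: H(2) = {h:.2f}, p = {p:.2f}")
```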
The group of women employed based on the contract of mandate assessed the possibility of formal procedures for the expression of opinions and transparent rules of promotion significantly higher (p<0.05) compared to the self-employed women and those employed in other forms than based on the civil law code. The results of the survey also showed how individual interests (the degree of meeting them) are perceived by women working in nonstandard forms of employment compared to those working in standard forms. Table 9 summarizes the responses concerning the possibility of realizing the interests of employees in the opinion of the female respondents surveyed (n=441). The response distributions presented in the table are sorted from the highest rated by the group of female respondents working in standard forms of employment. Group differences were calculated in the analysis using the Mann-Whitney U test. On average, women working in standard forms of employment rated employment stability (p<0.001), protection and access to social benefits (p<0.001), health services at the employer's expense (p<0.001), good occupational safety and health (p<0.001), training at the employer's expense (p<0.05), and assistance during redundancy (p<0.05) higher compared to women employed in nonstandard forms. Furthermore, women working in nonstandard forms of employment rated the influence on working time organization, good atmosphere in the workplace, adequate information flow, formal procedures of expressing opinions, and salaries adequate for duties higher, but the differences in the mean assessments were small (0.2 and 0.1 points) and statistically insignificant. In addition, no differences were found in the following employee interests: clear criteria for the evaluation of task performance, influence on the choice of remuneration components, opportunity for professional development, and participation in management (consultation). Discussion Research conducted on a representative sample of working Poles showed that working in nonstandard forms of employment does not depend on gender. These results are in opposition to the traditional approach to the gender gap, which finds that women work in nonstandard forms more often than men (Benach et al., 2016;Acker, 1999;Tapia et al., 2017;Williamson and Baird, 2014;Bondy, 2018;Klimenko et al., 2017;Niลฃฤƒ, 2019). It is worth noting that such results were obtained based on current empirical data from a country where the use of nonstandard forms in the labor market remains one of the highest in the European Union (Eurostat, 2021) while the degree of labor activation of women remains significantly lower than that of men (difference of 14.4 percentage points) (European Commission, 2020). Despite the frequent use of nonstandard forms of employment, women do not prefer this form of employment to a greater extent than men. Furthermore, a significant and higher than EU average percentage of women in Poland remain economically inactive (6.7% of all women aged 15-64, i.e., 2% more than the EU average) (European Commission, 2020). Therefore, the high range of the use of nonstandard forms of employment and their relative ubiquity in the labor market do not coincide with women's preference for these forms of employment and do not contribute significantly to increasing their professional activity. The study demonstrated that the voluntary choice of working in nonstandard forms of employment is related to gender. 
This confirms previous results indicating that women accepting working in nonstandard forms of employment have fewer choices when choosing these forms than men (Green and Livanos, 2017). More often than in the case of men, working in nonstandard forms of employment is associated with the lack of alternatives rather than with the perception of the advantages of using these forms. In addition, according to women, the perception of the degree of meeting employee interests in nonstandard forms of employment does not depend on the possibility of its choice and remains at an average level (assessment: 2.9 with a maximum of 5). The detailed analysis, however, showed that within the individual interests of working women, the possibility of choice increases the assessment of meeting these interests. Surprisingly, the respondents clearly indicated employee interests related to direct and indirect employee participation including trade union assistance and participation through codecision and consultation. Therefore, the possibility of choosing a nonstandard form of employment increases the assessment of employee interests related to employee participation. These findings are significant in light of the problems identified in previous literature related to employment in nonstandard forms such as a sense of lack of influence on the professional situation, limited access to information and even a sense of isolation for women working in such forms (Vicary and Jones, 2017;Kottwitz et al., 2017). In Poland, there is a large variety of nonstandard forms of employment, including self-employment, forms based on civil law, and employment through temporary work agencies. The analysis of the research results, however, did not confirm that the type of nonstandard form of employment has a statistically significant effect on the perception of meeting the employee interests of women working in these forms of employment. This resulted from the analysis of an overall indicator for assessing the degree of meeting employee interests. Detailed analysis revealed that women working based on a contract of mandate rated formal procedures for expressing opinions and transparent rules for promotion higher than others. Such results do not support the views presented in some studies that self-employment offers particularly high opportunities for working women to pursue their interests (Farnรฉ and Vergara, 2015). Comparison of meeting the employee interests of women working in nonstandard forms to those employed in standard forms was considered particularly important. The results lead to conclusions regarding the positive characteristics of the jobs of women working in standard forms of employment. These include employment stability, protection and access to social benefits, health services at the employer's expense, good occupational safety and health, training at the employer's expense and assistance during redundancy. Meeting these interests of women working in standard forms of employment was assessed higher than those of women employed in nonstandard forms. Similarly, the positive characteristics of the jobs of women working in nonstandard employment were identified. These include influence on working time organization, a good atmosphere in the workplace, an adequate information flow, formal procedures for expressing opinions, and salaries adequate to duties. 
Based on the hierarchy established, it can be found that the highest assessment for women employed in nonstandard forms was given to meeting employee interests such as a good atmosphere in the workplace, good occupational safety and health, clear criteria for the evaluation of task performance, an adequate information flow, and influence on working time organization. These results are consistent with some of the previous findings, including the advantages of standard forms of employment in terms of employment stability, social protection, and greater employer commitment to investing in employee training (Hรฅkansson and Isidorsson, 2015). Studies, however, have failed to demonstrate that women working in standard forms of employment rate their opportunities for professional development higher. Consequently, nonstandard forms are not stigmatizing in this respect. Nonstandard forms have also not been shown to be associated with impaired access to information. Therefore, previous findings such as (1) the relevance of the importance of forms of employment that guarantee long-term stability for women's career development (Vicary and Jones, 2017) and (2) the coexistence of the paucity of information with employment in nonstandard forms (Kottwitz et al., 2017) were not confirmed. It is worth noting, however, that the presented discussion of the research results is not in all cases based on comparisons of quantitative research results. Some of the previous findings were formulated based on qualitative research through analytical generalization. As a general rule, case studies capture a broader spectrum of factors, including the specificity of individual sectors. Conclusions The positive effect of nonstandard forms of employment on the labor activation of women has been traditionally emphasized in the literature. The results obtained in this study do not confirm such an effect. In Poland, despite the wide range and diversity of nonstandard forms of employment, women do not prefer these forms, do not work in these forms more often than men and, moreover, work in nonstandard forms more often involuntarily. The research failed to show that the type of nonstandard form of employment has a significant effect on women's perception of meeting their interests. It was shown that women who voluntarily accept employment in nonstandard forms rate the possibility of direct and indirect employee participation higher. This may indicate the individual character of the organizations in which these women are employed: the possibility of choosing the form of employment at the beginning coexists with the possibility of participation and codeciding already during the work. This seems to be an important recommendation for management practice: women value the choice and codeciding in the workplace. The prevalence of employment in standard forms observed in the research is related to the traditional perception of the workplace, with important employee interests including stability and employee protection by the employer. Women for whom it is important to pursue these interests are unlikely to work in nonstandard forms. Furthermore, the positive aspects of employment of women in nonstandard forms revealed in the research include mainly a good atmosphere, the possibility to influence working time organization, a good information flow, clear evaluation criteria and linking the remuneration with the performed tasks. 
The advantage of nonstandard forms of employment is perceived primarily by women who prefer to pursue these particular employment interests. The results presented may be useful in attempts to predict the consequences of the spread of nonstandard forms of employment, including the economic activation of women, in countries whose labor markets are characterized by low levels of use of these forms. Confronting the revealed tendencies with other factors, such as the respondents' age, whether they perform caring functions, and the specific sector in which they work, is an important objective for further research.
Website Morphing
Virtual advisors often increase sales for those customers who find such online advice to be convenient and helpful. However, other customers take a more active role in their purchase decisions and prefer more detailed data. In general, we expect that websites are preferred more, and increase sales more, when their characteristics (e.g., more detailed data) match customers' cognitive styles (e.g., more analytic). "Morphing" involves automatically matching the basic "look and feel" of a website, not just the content, to cognitive styles. We infer cognitive styles from clickstream data with Bayesian updating. We then balance exploration (learning how morphing affects purchase probabilities) with exploitation (maximizing short-term sales) by solving a dynamic program (partially observable Markov decision process). The solution is made feasible in real time with expected Gittins indices. We apply the Bayesian updating and dynamic programming to an experimental BT Group (formerly British Telecom) website using data from 835 priming respondents. If we had perfect information on cognitive styles, the optimal "morph" assignments would increase purchase intentions by 21%. When cognitive styles are partially observable, dynamic programming does almost as well: purchase intentions can increase by almost 20%. If implemented system-wide, such increases represent approximately $80 million in additional revenue.
Introduction and Motivation
Website design has become a major driver of profit. Websites that match the preferences and information needs of visitors are efficient; those that do not forgo potential profit and may be driven from the market. For example, when Intel redesigned its website by adding a verbal advisor to help customers find the best software to download for their digital cameras, successful downloads increased by 27%. (Although downloads are free, the benefits to Intel are substantial in terms of enhanced customer satisfaction, increased sales of hardware, and cost savings because of fewer telephone-support calls. The cost savings alone exceeded $1 million for Intel's camera products, with an estimated $30 million in savings across all product categories (Rhoads et al. 2004). Figure 1 illustrates one virtual advisor; see Urban and Hauser (2004) for other examples.) But verbal advisors are not for every customer. Less verbal and more analytic customers found the verbal advisor annoying and preferred a more graphic list of downloadable software. If customers vary in the way they process information (that is, vary in their cognitive styles), Intel might increase downloads even more with a website that automatically changes its characteristics to match those cognitive styles. Intel is not alone. Banks, cell phone providers, broadband providers, content providers, and many retailers might serve their customers better and sell more products and services if their websites matched the cognitive styles of their visitors. One solution is personalized self-selection, in which a customer is given many options and allowed to select how to navigate and interact with the site. As the customer's options grow, this strategy leads to sites that are complex, confusing, and difficult to use. Another option, popular in the adaptive-learning literature, is to require visitors to complete a set of cognitive-style tasks and then select a website from a predetermined set of websites.
However, retail website visitors are likely to find such intensive measurement cumbersome and intrusive. They may leave the website before completing such tasks. We propose another approach: "morphing" the website automatically by matching website characteristics to customers' cognitive styles. Our practical goal is to morph the website's basic structure (site backbone) and other functional characteristics in real time. Website morphing complements self-selected branching (as in http://www.Dell.com), recommendations (as in http://www.Amazon.com), factorial experiments (Google's Website Optimizer), and customized content (Ansari and Mela 2003, Montgomery et al. 2004). Website morphing is an example of targeting optimal marketing communications to customer segments (Tybout and Hauser 1981, Wernerfelt 1996). Example dimensions on which cognitive styles are measured might include impulsive (makes decisions quickly) versus deliberative (explores options in depth before making a decision), visual (prefers images) versus verbal (prefers text and numbers), or analytic (wants all details) versus holistic (just the bottom line). (We provide greater detail and citations in §7.) A website might morph by changing the ratio of graphs and pictures to text, by reducing a display to just a few options (broadband service plans), or by carefully selecting the amount of information presented about each plan. A website might also morph by adding or deleting functional characteristics such as column headings, links, tools, persona, and dialogue boxes. Website morphing presents at least four technical challenges. (1) For first-time visitors, a website must morph based on relatively few clicks; otherwise, the customer sees little benefit. (2) Even if we knew a customer's cognitive style, the website must learn which characteristics are best for which customers (in terms of sales or profit). (3) To be practical, a system needs prior distributions on parameters. (4) Implementation requires a real-time working system (and the inherently difficult Web programming). We use a Bayesian learning system to address the rapid assessment of cognitive styles and a dynamic program to optimally manage the tension between exploitation (serving the morph most likely to be best for a customer) and exploration (serving alternative morphs to learn which morph is best). Uncertainty in customer styles implies a partially observable Markov decision process (POMDP), which we address with fast heuristics that are close to optimal. Surveys, using both conjoint analysis and experimentation, provide priors and "prime" the Bayesian and dynamic programming engines. We demonstrate feasibility and potential profit increases with an experimental website developed for the BT Group to sell broadband service in Great Britain.
An Adaptive System to Infer Cognitive Styles and Identify Optimal Morphs
A cognitive style is "a person's preferred way of gathering, processing, and evaluating information" (Hayes and Allinson 1998, p. 850) and can be identified as "individual differences in how we perceive, think, solve problems, learn and relate to others" (Witkin et al. 1977, p. 15). "A person's cognitive style is fixed early on in life and is thought to be deeply pervasive [and is] a relatively fixed aspect of learning performance" (Riding and Rayner 1998, p. 7). Cognitive styles tend to be forced-choice (ipsative) constructs, such as analytic versus holistic, and are usually measured by question banks or cognitive tasks (Frias-Martinez et al.
2007, Santally and Alain 2006, Riding and Rayner 1998). The literature is wide and varied. We derive a flexible system that works with any reasonable set of cognitive-style dimensions. We illustrate the system with commonly used cognitive-style constructs found in the literature (§7, BT application). Figure 1 illustrates two of the eight versions ("morphs") of broadband advisors from the BT application. Figure 1(a) uses an analytic virtual advisor (a technology magazine editor willing to provide data) who compares plans on 10 characteristics (a large information load), presents a bar chart to compare prices (graphical), and provides technical information about plans (focused content). In contrast, Figure 1(b) uses a holistic virtual advisor (typical user) to whom the website visitor can listen (verbal). This advisor avoids details, compares plans on only four characteristics (small information load), and gives an easy-to-comprehend overall comparison of three plans (general content). We expect different morphs to appeal differentially depending on visitors' cognitive styles. For example, impulsive visitors might prefer less-detailed information, whereas deliberative visitors might prefer more information. Similarly, the more focused of the two morphs might appeal to visitors who are holistic, while the ability to compare many plans in a table might appeal to analytic visitors. If preferences match behavior (an empirical question), then, by matching a website's characteristics to cognitive styles, the morphing website should sell broadband service more effectively and lead to greater profits for BT. We defer to §7 the selection, definition, and measurement of cognitive styles, the definition and implementation of website characteristics (morphs), and the market research that provides prior beliefs (purchase probabilities) on the relationships between cognitive styles and morph characteristics. For BT we use four binary cognitive-style constructs yielding 2^4 = 16 cognitive-style segments, indexed by r_n for the nth website visitor (customer). We attempt to morph the BT website to match the cognitive styles of each segment by using three binary website characteristics yielding 2^3 = 8 possible morphs, indexed by m.
[Figure 2 (flowchart). The adaptive system: a cognitive-style inference loop (dashed box, potentially after each click) nested inside a dynamic programming loop (after each respondent). Assign an initial morph, m_o, based on prior beliefs about cognitive styles; the visitor sees morph m and clicks on one of J_k alternatives; Bayesian update of r_n based on the clickstream, f(r_n | y_n, c_kjn's, Ω); assign a morph based on the current morph-assignment rule and the updated cognitive-state probabilities; the visitor goes to the purchase opportunity and either purchases or not; update beliefs (the α_n's and β_n's) about purchase probabilities; compute a new morph-assignment rule from the posterior purchase-probability distribution.]
If we had perfect information on cognitive-style segments and perfect knowledge of segment × morph purchase probabilities, we could map an optimal morph to each cognitive-style segment. There are 16 × 8 = 128 such segment × morph probabilities.
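For concreteness, a minimal sketch (hypothetical variable names, not the authors' implementation) of the bookkeeping implied by this setup: the system carries beta parameters for all 16 × 8 = 128 segment-by-morph purchase probabilities plus a prior over segments, and with perfect information the assignment problem would reduce to a row-wise argmax.

```python
import numpy as np

N_SEGMENTS, N_MORPHS = 16, 8          # 2**4 cognitive-style segments, 2**3 morphs

# Beta parameters for the 16 x 8 = 128 segment-by-morph purchase probabilities
# p_rm; weak, equally likely priors to start (stronger priors are discussed in s.9).
alpha = np.ones((N_SEGMENTS, N_MORPHS))
beta = np.ones((N_SEGMENTS, N_MORPHS))

# Prior probability that a new visitor belongs to each cognitive-style segment.
q0 = np.full(N_SEGMENTS, 1.0 / N_SEGMENTS)

def optimal_morph_per_segment(p):
    """With (hypothetical) perfect knowledge of the 16 x 8 matrix p of purchase
    probabilities, the optimal assignment maps each segment to its best morph."""
    return p.argmax(axis=1)
```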
In the absence of perfect information, our challenge is to infer the cognitive-style segment to which each visitor belongs while simultaneously learning how to maximize profit by assigning morphs to cognitive-style segments. In real systems, we must infer visitors' cognitive-style segment from their clickstreams. We can do this because each visitor's click is a decision point that reveals the visitor's cognitive-style preferences. If we observe a large number of clicks, we should be able to identify a visitor's cognitive-style segment well. However, in any real application, the number of clicks we observe before morphing will be relatively small, yielding at best a noisy indicator of segment membership. The website begins with morph m_o (to be determined). We observe some number of clicks (say, 10), infer probabilities for the visitor's cognitive-style segment, then morph the website based on our inference of the visitor's segment. The visitor continues until he or she purchases (a broadband service) or exits the website without purchasing. In our application, maximizing purchases is a good surrogate for maximizing profit through the Web channel. (In §11 we indicate how to extend our framework to address the size of the purchase.) We begin with the Bayesian inference loop (grey dashed line in Figure 2) through which we infer the visitor's cognitive-style segment. Denote by J_kn the number of potential click-alternatives that the nth visitor faces on the kth click. Let y_kjn be 1 if the nth visitor chooses the jth alternative on the kth click, and 0 otherwise. Let y_kn be the vector of the y_kjn's, and let y_n be the matrix of the y_kn's. Each click-alternative is described by a set of characteristics, c_kjn. In our application, there are 11 characteristics: three macro characteristics (e.g., visual versus verbal), four detailed function characteristics (e.g., a link that plays audio), and four topical website areas (e.g., virtual advisor). All notation is summarized in Appendix 1 for easy reference. A visitor in a particular cognitive-style segment will prefer some combinations of characteristics to other combinations. Let ω_{r_n} be a vector of preference weights that maps click-alternative characteristics, the c_kjn's, to preferences for a visitor in cognitive-style segment r_n. Define Ω as the matrix of the ω_r's. If we know (1) preferences for morph characteristics for each cognitive-style segment, (2) morph characteristics for click-alternatives (the various links on which the visitor can click when he or she makes a decision to click), and (3) the clicks that were made, we can infer the visitor's cognitive-style segment using Bayes' theorem. Specifically, we update the posterior distribution, f(r_n | y_n, the c_kjn's, Ω), that the visitor is in the r_n-th segment based on the observed data. The second inference loop (the outer loop denoted by a black dotted line in Figure 2) identifies the optimal morph conditioned on f(r_n | y_n, the c_kjn's, Ω). This inference loop must learn and optimize simultaneously. In theory, we might allow the website to morph many times for each visitor, potentially after every click. However, in our application we observe only one purchase decision per visitor. To avoid unnecessary assumptions in assigning this purchase to website characteristics, our initial application morphs only once per visitor. (We address alternative strategies in §5.) Any results we report are conservative and might be improved with future websites that morph more often (potentially taking switching costs, if any, into account).
Let p_rm be the probability that a visitor in cognitive-style segment r_n = r will purchase BT's broadband plan after visiting a website that has the characteristics of morph m. Let p be the matrix of the p_rm's. Clearly, if we knew r_n and p perfectly, then we would assign the morph that maximizes p_rm. However, we do not know either r_n or p perfectly; we have only posterior probabilistic beliefs about r_n and p. Without perfect information, maximizing long-term expected profit (sales) requires that we solve a much more difficult problem. For example, suppose we knew r_n but had only posterior beliefs about p_rm. A naïve myopic strategy might choose the morph m that has the largest (posterior) mean for p_rm. But the naïve strategy does not maximize long-term profits. There might be another morph, m′, with a lower (posterior) mean but with a higher variance in (posterior) beliefs. We might choose m′ to sacrifice current profits but learn more about the distribution of p_rm′. The knowledge gained might help us make better decisions in the future. We are more likely to choose m′ when we value future decisions and when we benefit greatly from reducing the uncertainty in p_rm′. The optimal morph-assignment problem is even more difficult when we face uncertainty about the cognitive-style segment, r_n. We must also take into account "false negatives" when we assign a morph that is not right for the true cognitive-style segment. This is an explicit opportunity cost to BT for which we must account when we assign morphs to maximize profit. To maximize profit, taking both exploration and potential false negatives into account, we formulate a dynamic program. When r is known, the solution is based on a well-studied structure ("multiarmed bandits"). The optimal morph-assignment rule can be computed between clicks to automatically balance exploration and exploitation. When r is unknown, the partial-information optimal solution is not feasible between clicks. Instead, we use a fast heuristic that obtains 99% of long-term profits (sales) when all uncertainty is taken into account. (We test both dynamic programming solutions on our data.) Before we formulate these dynamic programs, we briefly review prior attempts to adapt content to latent characteristics of the users of that content.
Related Prior Literature
Cognitive styles (also learning styles or knowledge levels) have been used to adapt material for distance learning, Web-based learning, digital libraries, and hypermedia navigation. In most cases, cognitive styles are measured with an intensive inventory of psychometric scales or inferred from predefined tasks (Frias-Martinez et al. 2007, Magoulas et al. 2001, Mainemelis et al. 2002, Santally and Alain 2006, Tarpin-Bernard and Habieb-Mammar 2005). Methods include direct classification, neuro-fuzzy logic, decision trees, multilayer perceptrons, Bayesian networks, and judgment. Most authors match the learning or search environment based on judgment by an expert pedagogue or based on predefined distance measures. In contrast, we infer cognitive styles from relatively few clicks and automatically balance exploration and exploitation to select the best morph. Automatic assignment is common in statistical machine learning. For example, Chickering and Paek (2007) use reinforcement learning to infer a user's commands from spoken language. After training the system with 20,000 synthetic voices, they demonstrate that the system becomes highly accurate after 1,000 spoken commands.
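To make the contrast concrete, here is a minimal sketch (hypothetical names) of the naïve myopic rule described above; it ranks morphs only by their posterior-mean purchase probabilities and ignores the option value of reducing uncertainty, which is exactly what the Gittins formulation adds.

```python
import numpy as np

def myopic_morph(alpha_r, beta_r):
    """Naive rule for a known segment r: pick the morph with the highest
    posterior-mean purchase probability (alpha_r, beta_r are length-8 arrays)."""
    return int(np.argmax(alpha_r / (alpha_r + beta_r)))
```

A morph with a slightly lower posterior mean but a wider posterior may be the better choice for early visitors, because showing it reduces uncertainty that pays off over the remaining stream of visitors.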
Like us, they formulate their problem as a multiarmed bandit, but their focus and data require an entirely different solution strategy. When latent customer states are transient, hidden Markov models (HMMs) have proven useful. Conati et al. (2002) identify students' mastery of Newton's laws by predefining a Bayesian network and updating hidden-state probabilities by observing students' answers. Conditional probabilities are set by judgment. Their intelligent tutoring system (ITS) provides hints for "rules" when it infers that a student has not yet mastered the lesson. Yudelson et al. (2008) extend this ITS with more hidden states and estimate the parameters of the Bayesian network with an expectation-maximization algorithm. In other HMM models, Bidel et al. (2003) identify navigation strategies for hypermedia, Liechty et al. (2003) identify visual attention levels for advertising, and related work identifies customer attitudes for alumni gift giving. Estimation methods include machine learning and hierarchical Bayes Markov chain Monte Carlo methods. Montoya et al. (2008) estimate an HMM and optimize sampling and detailing with dynamic programming. HMMs have proven accurate in these situations, and policy simulations suggest significant profit increases. However, HMMs are computationally intensive, often requiring more than a day of computer time to estimate parameters and almost as long to optimize policies. In contrast, we compute strategies in real time between clicks (Bayesian inference loop) and update strategies between online visitors (dynamic programming loop). Because we expect cognitive styles to be enduring characteristics of website visitors (e.g., Riding and Rayner 1998), we avoid the computational demands necessary to model transient latent states. In our application we use priming data and ipsative scales to identify cognitive-style segments (see §7 and the Technical Appendix, available at http://mktsci.pubs.informs.org, on morphing taxonomies). Alternatively, one might consider latent-class analyses to uncover enduring cognitive-style segments. We now present a working system in which we combine and adapt known methods to website morphing.
Finding the Optimal Morph with Gittins Indices
We present the dynamic programming solution in steps. In this section we temporarily assume that the visitor sees morph m for the entire visit and that we know the visitor's cognitive segment, r. In the next section we relax these assumptions to solve a partially observable Markov decision process in which we infer r and in which the visitor may not see morph m for the entire visit. Let δ_mn = 1 if the nth visitor purchases a BT broadband plan after seeing morph m, and let δ_mn = 0 otherwise. For clarity of exposition when r is known, we write δ_mn as δ_rmn to make the dependence on r explicit. Under the temporary assumption that r is known, we model the observed broadband subscriptions, δ_rmn, as outcomes of a Bernoulli process with probability p_rm. Based on these purchase observations and prior beliefs, we infer a posterior distribution on purchase probabilities, f(p_rm | the δ_rmn's and parameters based on previous visitors). To represent our prior beliefs, we choose a flexible family of probability distributions that is naturally conjugate to the Bernoulli process. The conjugate prior is a beta distribution with morph-and-segment-specific parameters α_rm0 and β_rm0. Specifically,
f(p_rm | α_rm0, β_rm0) ∝ p_rm^(α_rm0 − 1) (1 − p_rm)^(β_rm0 − 1).
With beta priors and Bernoulli observations, it is easy to show that the posterior is also a beta distribution with α_rm,n+1 = α_rmn + δ_rmn and β_rm,n+1 = β_rmn + (1 − δ_rmn).
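A minimal sketch of this conjugate update (hypothetical names; it assumes, as the text temporarily does, that the segment r is known and that the visitor saw morph m for the entire visit):

```python
def update_purchase_beliefs(alpha, beta, r, m, purchased):
    """Beta-Bernoulli update of the belief about p_rm after one purchase
    opportunity: alpha and beta are 16 x 8 arrays, purchased is 1 or 0."""
    alpha[r, m] += purchased
    beta[r, m] += 1 - purchased
    return alpha[r, m] / (alpha[r, m] + beta[r, m])   # new posterior mean of p_rm
```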
If a visitor in segment r receives morph m, we expect an immediate expected reward equal to the mean of the beta distribution, α_rmn/(α_rmn + β_rmn), times the profit BT earns if the nth visitor purchases a broadband plan. We also earn an expected reward for acting optimally in the future, which we discount by a. The solution to the dynamic program is the morph, m*_r, which maximizes the sum of the expectation of the immediate reward and the discounted future reward. In general, such multiarm bandit dynamic programs are difficult to solve. In fact, "during the Second World War [this problem was] recognized as so difficult that it quickly became a by-word for intransigence" (Whittle 1989, p. ix). However, in a now-classic paper, Gittins (1979) proposed a simple and practical solution that decomposed the problem into indices. In the Gittins solution a candidate "arm," in our case a morph, is compared to an arm for which the payoff probability is known with certainty. Gittins formulates the Bellman equation (given below) and solves for this known payoff probability, which we denote by G_rmn. G_rmn depends only on α_rmn, β_rmn, and a and is independent of the parameters of the other arms. This known payoff probability has become known as the Gittins index. Gittins proved that these indices contain all of the information necessary to select the optimal strategy at any point in time, automatically balancing exploitation and exploration. Gittins' solution is simply to choose the arm with the largest index. Future morph assignments might change when we update α_rm,n+1, β_rm,n+1, and G_rm,n+1 with new information. However, the strategy of choosing the highest-index morph is always optimal. Gittins' (1979) proof of indexability is beyond the scope of this paper. However, it is instructive to formulate the Bellman equation from which we obtain G_rmn as a function of α_rmn, β_rmn, and a. The solution is best understood as a two-armed bandit (Gittins 1989). Consider first an arm with known payoff probability, G_rmn. If we always select this arm, the expected reward in each and every period is G_rmn times the reward for success. Without loss of generality, normalize the reward for success to 1.0. If we discount future periods by a factor of a per period, the net present value is computed with the closed form of a geometric series: G_rmn/(1 − a). The reward for selecting an uncertain arm is more complicated to derive because each success or failure updates our beliefs about the probability of success. Following standard dynamic programming notation, we let R(α_rmn, β_rmn, a) be the value of acting optimally. To act optimally, we must choose one of two actions, the known arm or the uncertain arm. When we select the uncertain arm, we get a success (with probability α_rmn/(α_rmn + β_rmn)) or a failure (with probability β_rmn/(α_rmn + β_rmn)). If we observe a success, we get the payoff of 1.0 plus the discounted payoff we will receive for acting optimally in the future. The success also updates our beliefs about the future. Specifically, α_rm,n+1 = α_rmn + 1 and β_rm,n+1 = β_rmn. Thus, we expect a discounted reward of 1 + aR(α_rmn + 1, β_rmn, a) when we observe a success. By similar reasoning, we expect a discounted reward of aR(α_rmn, β_rmn + 1, a) when we observe a failure. Putting these rewards together, we calculate the expected reward of an uncertain arm as [α_rmn/(α_rmn + β_rmn)][1 + aR(α_rmn + 1, β_rmn, a)] + [β_rmn/(α_rmn + β_rmn)][aR(α_rmn, β_rmn + 1, a)].
Our strategy is to choose the arm with the highest expected discounted profit; hence the Bellman equation becomes
R(α_rmn, β_rmn, a) = max{ G_rmn/(1 − a), [α_rmn/(α_rmn + β_rmn)][1 + aR(α_rmn + 1, β_rmn, a)] + [β_rmn/(α_rmn + β_rmn)][aR(α_rmn, β_rmn + 1, a)] }.   (1)
Equation (1) has no analytic solution, but we can readily compute Gittins indices with a simple iterative numeric algorithm. We illustrate G_rmn as a function of n in Appendix 3. As expected, the indices behave in an intuitive manner. If uncertainty is high (n small), exploration is valuable and G_rmn exceeds α_rmn/(α_rmn + β_rmn) substantially. As we observe more website visitors, G_rmn decreases as a function of n. As n → ∞, the expected rewards become known and G_rmn converges to α_rmn/(α_rmn + β_rmn). The discount rate, a, is constant for our application, but if a were to increase, we would value the future more, and G_rmn would increase to make exploration more attractive. Given a, we precompute a table of indices for the values of α_rmn and β_rmn that we expect to observe in the BT application, using interpolation if necessary. The table is made manageable by recognizing that G_rmn converges to α_rmn/(α_rmn + β_rmn) as the number of visitors gets large.
Is Gittins' Solution Reasonable for BT's Website?
It is not uncommon for a retail website to have 100,000 visitors per annum. With so many visitors it is likely to be valuable to explore different morphs for early visitors so that BT can profit by providing the correct morph to later visitors. Suppose BT values future capital with a 10% discount per annum and suppose 100,000 visitors are spread evenly throughout the year. Then the effective discount from one visitor to the next is 1/100,000th of 10%, suggesting an implied discount factor of a = 0.999999. Even if visitors are spread among 16 cognitive-style segments, the effective discount factor is much closer to 1.0 than the discount factors used in typical Gittins applications (e.g., clinical trials, optimal experiments, job search, oil exploration, technology choice, and research and development; Jun 2004). With a so close to 1.0, we expect a Gittins strategy to entail a good deal of exploration. It is a valid fear that such exploration might lead to costly false morph assignments, more so than a null strategy of one website for everyone. (The Gittins strategy is optimal if we allow morphing. The question here is whether morphing per se is reasonable in the face of issues outside our model; that is, is there a noticeable improvement relative to a no-morph strategy?) To address this practical implementation question, we use an a appropriate to BT's experimental website and we generate synthetic visitors who behave as we expect real visitors to behave. Our simulations are grounded empirically based on an experimental website. Full-scale implementation is planned, but production results are likely a year or more away. We estimate real behavior by exposing a sample of 835 website visitors to one of eight randomly chosen morphs and observing their stated purchase probabilities. We measure cognitive styles with an intrusive question bank and estimate p_rm for each segment × morph combination. (Details are in §§7-9.) For example, to simulate one cognitive-style segment we used the empirically derived probabilities 0.2996, 0.2945, 0.4023, 0.3901, 0.2624, 0.2606, 0.3658, and 0.3580 for morphs m = 0 to 7. For each synthetic visitor we generate a purchase using the probability that matches the morph assigned by the Gittins strategy. We generate 5,000 visitors in each of 16 cognitive-style segments (80,000 in total). This is well within the number of visitors to BT's website.
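The sketch below illustrates one such iterative numeric routine (hypothetical function names, not the authors' code): it finds the index by bisection on the known-arm payoff probability, valuing the uncertain arm by backward induction on the success/failure lattice and retiring to the known arm at a finite horizon. A production system would precompute a table of indices over the (α, β) values it expects, as described above; for a discount factor as close to 1 as 0.999999, the horizon would have to be far larger than the illustrative default here.

```python
import numpy as np

def value_of_uncertain_arm(alpha, beta, a, g, horizon):
    """Value of pulling the uncertain Bernoulli arm once and then acting
    optimally, when a known arm paying g per period is also available."""
    retire = g / (1.0 - a)                          # value of switching to the known arm
    V = np.full((horizon + 2, horizon + 2), retire)  # terminal approximation
    for n in range(horizon, 0, -1):                 # lattice levels, deepest first
        for s in range(n + 1):
            f = n - s
            p = (alpha + s) / (alpha + s + beta + f)
            keep = p * (1.0 + a * V[s + 1, f]) + (1.0 - p) * a * V[s, f + 1]
            V[s, f] = max(retire, keep)
    p0 = alpha / (alpha + beta)
    return p0 * (1.0 + a * V[1, 0]) + (1.0 - p0) * a * V[0, 1]

def gittins_index(alpha, beta, a, horizon=500, tol=1e-4):
    """Gittins index G(alpha, beta, a): the known-arm probability g at which
    the decision maker is indifferent between retiring and exploring."""
    lo, hi = alpha / (alpha + beta), 1.0            # the index lies in this interval
    while hi - lo > tol:
        g = 0.5 * (lo + hi)
        if value_of_uncertain_arm(alpha, beta, a, g, horizon) > g / (1.0 - a):
            lo = g                                  # exploring still beats retiring
        else:
            hi = g
    return 0.5 * (lo + hi)

# Example: the index for a weak Beta(1, 1) prior with a modest discount factor.
print(round(gittins_index(1.0, 1.0, a=0.9), 3))
```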
We seek a conservative test. As a lower bound, we start the system with equally likely prior probabilities that do not vary by morph, and we begin with low-precision beta priors. To avoid ties in the first morph assignment, we perturb the prior means randomly. Figure 3 illustrates website morphing for a sample cognitive-style segment. The first panel plots the evolution of the Gittins indices; the second panel plots the morph chosen by the system. The Gittins indices for each of the eight morphs all start close to 0.7, which is significantly higher than the best-morph probability (approximately 0.4). The larger values of the indices reflect the option value of our uncertainty about the true probabilities. For the first few hundred visitors, the system experiments with various morphs before more or less settling on Morph 2 (red line). However, the system still experiments until about the 1,200th visitor. Around the 2,500th visitor the system flirts with Morph 3 (cyan line) before settling down again on Morph 2. This blip around the 2,500th visitor stems from random variation: a run of luck in which visitors purchased after seeing Morph 3. Morph 3's probability of buying is 0.3901. It is close to, but not better than, Morph 2's value of 0.4023. The system settles down after this run of luck, illustrating that the long-term behavior of the Gittins strategy is robust to such random perturbations. Because the Gittins strategy is optimal in the presence of uncertainty, we can calculate the cost of uncertainty for this cognitive-style segment. The best morph for this segment is Morph 2, with an expected reward of 0.4023 times BT's profit per sale. If we had perfect information, we would always choose Morph 2 for this segment and achieve this expected reward. Because the Gittins strategy does not have perfect information, it explores other morphs before settling down on Morph 2. Despite the cost of exploration, the Gittins strategy achieves an expected reward of 0.3913, which is 97.2% of what we could have attained had perfect information been available. This is typical. When we average across cognitive-style segments we achieve an expected reward of 97.3% of that obtainable with perfect information. We can also estimate the value of morphing. A website that is not designed with cognitive styles in mind is equivalent to one for which BT chooses one of the morphs randomly. In that case, the expected reward is 0.3292 times BT's profit per sale. The Gittins strategy improves profits by 18.9%. Even if we had perfect information on purchase probabilities, we would only do 22.2% better. Strong priors (see §9) improve the Gittins strategy slightly: a 19.7% improvement relative to no morphing. These results illustrate the potential improvements that are possible by using the Gittins strategy to identify the best morph for a segment (assuming we knew to which segment the visitor belonged). We now extend our framework to deal with uncertainty in cognitive-style-segment membership.
Dynamic Programming When Cognitive Styles Are Inferred (POMDP)
It is not feasible for BT to use an intrusive cognitive-style assessment on its production website. However, it is feasible to infer cognitive styles from visitors' clickstreams with the Bayesian inference loop. We demonstrate in §6 how the clickstream provides a posterior probability, q_rn = f(r_n | y_n, the c_kjn's), that visitor n is in cognitive-style segment r_n.
Because the state space of cognitive styles is only partially observable, the resulting optimization problem is a POMDP. The state space is Markov because the full history of the multiple-visitor process is summarized by r_n, the α_rmn's, and the β_rmn's. The POMDP cannot be solved optimally in real time, but good heuristics achieve near-optimal morph-assignment strategies. To incorporate uncertainty on cognitive styles, we make three modifications. First, the Gittins strategy defines a unique morph per visitor and assumes the visitor makes a purchase decision after having experienced that morph. The outcome of the purchase-decision Bernoulli process is an independent random variable conditioned on the morph seen by a visitor. Although we do not know with certainty to which cognitive-style segment to assign this observation, we do know the probability, q_rn, with which the observation, δ_mn, updates the rth cognitive-style segment's parameters. (Because r_n is now partially observable, we have returned to the δ_mn notation, dropping the r subscript. To simplify exposition we continue to assume temporarily that the visitor experienced the mth morph for the entire visit.) Because the beta and binomial distributions are conjugate, Bayes' theorem provides a means to use q_rn and δ_mn to update the beta distributions:
α_rmn = α_rm,n−1 + δ_mn q_rn,  β_rmn = β_rm,n−1 + (1 − δ_mn) q_rn.   (2)
Second, following Krishnamurthy and Mickova (1999; hereafter referred to as KM), we compute an expected reward over the distribution of cognitive-style segments (the vector of probabilities q_rn) as well as over the posterior beta distribution with parameters α_rmn and β_rmn. KM demonstrate that while the full POMDP can be solved with a complex index strategy, a simple heuristic solution, called an Expected Gittins Index (EGI) strategy, achieves close to 99% of optimality. KM's EGI algorithm replaces G_rmn with EG_mn and chooses the morph with the largest EG_mn, where
EG_mn = Σ_r q_rn G_rmn.   (3)
For BT's experimental websites we cannot guarantee that KM's EGI solution will be within 99% of optimality (as in their problems). Instead, we bound the EGI's performance with comparisons to the expected rewards that would be obtained if BT were able to have perfect information on cognitive styles. The EGI solution does quite well (details are in §6). Third, even if the website morphs once per visitor, the visitor sees the best initial morph, m_o, for part of the visit and the EGI-assigned POMDP morph, m*, for the remainder of the visit. To update the EGI we must assign the visitor's purchase (or lack thereof) to a morph. The appropriate purchase-assignment rule is an empirical issue. If the number of clicks on m* is sufficiently large relative to the number of clicks on m_o, then we assign the purchase to m* and update only the indices for morph m*. (We use the same rule if the last morph, m*, has the strongest effect on purchase probabilities.) Alternatively, we can assign the purchase-or-not observation to m_o and m* probabilistically, based on the number of clicks on each morph. Other rules are possible. For example, we might weight later (or earlier) morphs more heavily, or we might condition p_{r,m1,m2,m3} on a sequence of morphs, m1, m2, m3. For our data we obtain good results by assigning the observation to m*. Fortunately, for the BT experimental website, simulations with proportional purchase-assignment rules suggest that the performance of the system is robust with respect to such assignment rules.
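A minimal sketch of the EGI assignment and the fractional update of Equation (2) (hypothetical names; the summation form of the expected index follows the reconstruction in Equation (3) above):

```python
import numpy as np

def egi_morph(q, G):
    """Expected Gittins Index rule: EG_m = sum_r q_r * G[r, m]; serve the argmax.

    q : length-16 posterior segment probabilities for the current visitor
    G : 16 x 8 array of Gittins indices evaluated at the current (alpha, beta)
    """
    return int(np.argmax(q @ G))

def fractional_update(alpha, beta, q, m, purchased):
    """Equation (2): credit the purchase outcome of the morph that was shown
    to every segment in proportion to the posterior probabilities q_r."""
    alpha[:, m] += purchased * q
    beta[:, m] += (1 - purchased) * q
```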
We leave further investigation of purchase-assignment rules to future research. (For example, with a last-morph assignment rule we obtain a mean posterior probability, q_rn, of 0.815 and a median posterior probability of 0.995. With a proportional-morph assignment rule, the mean is higher (0.877) but the median lower (0.970). The resulting rewards are quite close. To explore this issue empirically, we might seek data in which we assign both m_o and m* randomly rather than endogenously using the EGI solution to the POMDP.)
Inferring Cognitive Styles: A Bayesian Loop
BT's website is designed to provide information about and sell broadband service. Asking respondents to complete a lengthy questionnaire to identify their cognitive styles prior to exploring BT's website is onerous to visitors and might lower, rather than raise, sales of broadband service. Thus, rather than asking website visitors to directly describe their cognitive styles, the Bayesian loop infers cognitive styles. Specifically, after observing the clickstream, y_n, and the click-alternative characteristics, the c_kjn's, we update the probabilities that the nth visitor belongs to each of the cognitive-style segments (the q_rn's). (Although the c_kjn's depend on the initial morph, m_o, seen by the nth visitor, we continue to suppress the m_o subscript to keep the notation simple.) We assume the nth visitor has unobserved preferences, ũ_kjn, for click-alternatives based on the click-alternative characteristics, the c_kjn's, and based on his or her preference weights, ω_{r_n}, for those characteristics. We assume that preference weights vary by cognitive-style segment. (Recall that Ω is the matrix of the ω_{r_n}'s. Temporarily assume it is known.) We express these unobserved preferences as ũ_kjn = c_kjn ω_{r_n} + ε̃_kjn, where ε̃_kjn has an extreme-value distribution. Conditioned on a cognitive-style segment, r_n, the probability that we observe y_kn for the kth click by the nth visitor is
Pr(y_kn | r_n, the c_kjn's, Ω) = Π_j [ exp(c_kjn ω_{r_n}) / Σ_{j′} exp(c_kj′n ω_{r_n}) ]^{y_kjn}.   (4)
After we observe K_n clicks, the posterior distribution for cognitive-style segments is given by Bayes' theorem:
q_rn = f(r_n | y_n, the c_kjn's, Ω) = q⁰_rn Π_{k=1..K_n} Pr(y_kn | r_n, the c_kjn's, Ω) / Σ_{r′} q⁰_{r′n} Π_{k=1..K_n} Pr(y_kn | r′_n, the c_kjn's, Ω),   (5)
where the q⁰_rn are the prior probabilities that the nth visitor belongs to cognitive-style segment r_n. Computing the q_rn's and the corresponding EG_mn's is sufficiently fast (∼0.4 seconds; dual processor, 3 GHz, 4 GB RAM); visitors notice no delays on BT's experimental website. Equations (4) and (5) require prior probabilities, q⁰_r, and estimates of the preference matrix, Ω. The click-alternative characteristics, the c_kjn's, are data. We obtain q⁰_rn and Ω from a priming study, as described in §7. Because we use Bayesian methods to estimate Ω, it is theoretically consistent to update the q_rn's using the full posterior. Unfortunately, this is not yet practical because computation time is roughly linear in the number of samples from Ω's posterior distribution. For example, with only 15 samples from the posterior it took 6.5 seconds to compute the EG_mn's, which is too long between clicks in a production setting. Furthermore, 15 samples is far too few to integrate effectively over the 50-element posterior distribution of Ω. This practical barrier might fall with faster computers and faster computational methods. (We tested a 15-sample strategy with synthetic data; the results were virtually indistinguishable from those we obtained using the posterior mean for Ω. Testing with large numbers of samples is not feasible at this time.) In practice, if we identify new types of click-alternative characteristics, or if BT feels that Ω has changed because of unobserved shocks, then selected visitors can be invited to complete the priming-study questionnaire to provide data to update Ω. (This last step adds no new conceptual challenges and incurs a modest, but not trivial, cost; BT has not yet seen a need to collect these additional data for its experimental website. The current implementation assumes that preferences vary by cognitive style but are homogeneous within a cognitive-style segment.)
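A minimal sketch of this inference (hypothetical names and shapes; the choice probabilities follow Equations (4) and (5) as reconstructed above, using a point estimate of Ω rather than its full posterior):

```python
import numpy as np

def segment_posterior(q0, omega, clicks):
    """Posterior segment probabilities q_rn after K_n clicks (Equations (4)-(5)).

    q0     : length-16 prior segment probabilities
    omega  : 16 x C array; row r holds the preference weights omega_r over the
             C click-alternative characteristics
    clicks : list of (C_k, j_k), where C_k is a J_k x C array of characteristics
             of the alternatives offered on click k and j_k is the chosen index
    """
    log_post = np.log(q0)
    for C_k, j_k in clicks:
        u = C_k @ omega.T                      # J_k x 16 deterministic utilities
        u_max = u.max(axis=0)
        # log of the logit probability of the chosen alternative, per segment
        log_post += u[j_k] - (u_max + np.log(np.exp(u - u_max).sum(axis=0)))
    q = np.exp(log_post - log_post.max())      # stabilize before normalizing
    return q / q.sum()
```

The same posterior feeds the EGI rule sketched earlier and, after the purchase opportunity, the fractional update of Equation (2).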
At any time, we can update q⁰_rn by averaging the posterior q_rn over n.
Summary of the Gittins and Bayesian Loops
For each visitor, we update q_rn after each click. EG_mn predicts the best morph based on these q_rn's. After a set of initial clicks we morph the website to that best morph. After observing a purchase occasion we update the α_rmn's and β_rmn's for the next visitor. We use these updated α_rmn's and β_rmn's to update the Gittins indices and continue to the next visitor. As n gets sufficiently large, the system automatically learns the true p_rm's.
The Effect of Imperfect Cognitive-Style Identification
In §5 we found that the cost of uncertainty in segment × morph probabilities reduced the optimal solution to 97.2% of that which we would obtain if we had (hypothetical) perfect information. The EGI solution to the POMDP should achieve close to the optimal morph assignment in the face of uncertainty on both segment × morph probabilities and cognitive styles, but that is an empirical question. To examine this question we compare the performance of the POMDP EGI solution to four benchmarks. (Figure 3 and the corresponding Gittins improvements in §4 are for a representative cognitive-style segment; the benchmarks cited here are based on the results of all 16 cognitive-style segments.) Rewards are scaled such that 1.0000 means that every visitor purchases broadband service. The benchmarks are as follows:
• A website without the Gittins loop and no knowledge of cognitive styles. (Without information on cognitive styles or the Gittins loop, BT must select one of the eight morphs at random.) The expected reward is 0.3205.
• A website with the Gittins loop, but no customization for cognitive-style segments. The expected reward is 0.3625.
• A website with the Gittins loop and (hypothetical) perfect information on cognitive-style segments. The expected reward is 0.3879.
• A website with (hypothetical) perfect knowledge of purchase probabilities and cognitive-style segments. The expected reward is 0.3984.
To compare the EGI solution to these benchmarks, we begin with a scenario that illustrates the potential of the POMDP. We create synthetic Web pages (the c_kjn's) that provide clear choices in click-alternative characteristics both among and within morphs. In the simulations we know each customer's cognitive style, r. We create synthetic clickstreams from representative ω_r's by making multinomial draws from the random-utility model in Equation (4). After 10 clicks, we use the Bayesian loop to update q_rn and choose an optimal morph based on the EGIs. The synthetic customer then purchases a broadband service with probability p_rm, where r is the true cognitive state and m is the morph provided by the EGIs. (The EGIs may or may not have chosen the best morph for that synthetic customer.) Based on the observed purchase (δ_mn), we update the α_rmn's and β_rmn's and go to the next customer.
We simulate 80,000 customers (5,000 customers per cognitive-style segment). As the number of clicks per customer increases, we expect the (Bayesian) posterior q_rn's to converge toward certainty and the rewards to converge toward those based on (hypothetical) perfect cognitive-style-segment information. Thus, for comparison, we include a 50-click simulation even though 50 clicks are more clicks than we observe for the average BT website visitor. This simulation illustrates the potential of the EGI solution. It corresponds to a second-generation website (Gen-2) that is now under development. The first-generation (Gen-1) BT experimental website was, to the best of our knowledge, the first attempt to design a website that morphs based on cognitive-style segments. We learned from our experience with that website. There were sufficient differences among morphs to identify p_rm easily with the Gittins loop; however, the relative similarity between click-alternatives within a morph meant that the Bayesian loop required more click observations than anticipated. We return to the Gen-1 website after we describe fully the empirical priming study (see §§7 and 8). The empirical insights obtained by comparing the Gen-1 and Gen-2 simulations are best understood based on the Ω estimated from the data in the priming study. (The Gen-1 Bayesian-loop improvements in revenue that we report in §10 are less dramatic but not insignificant from BT's perspective.) In Table 1 we compare the Bayesian loop to the four benchmarks with three metrics. "Improvement" is the percent gain relative to the baseline of what would happen if a website were created without any attempt to take cognitive styles into consideration. The 10-click Bayesian/Gittins loop improves sales by 19.9%. "Efficiency" is the percentage of sales relative to that which could be obtained with perfect knowledge. The 10-click Bayesian/Gittins loop attains 96.5% of that benchmark. "Relative efficiency" is the percent gain relative to the difference between the lower and upper benchmarks. The 10-click Bayesian/Gittins loop attains an 82.0% relative efficiency. Based on 10 clicks the Bayesian loop can identify most cognitive states. The median posterior probability (q_rn) is 0.898; the lower and upper quartiles are 0.684 and 0.979, respectively. However, on four of the cognitive states the Bayesian loop does not do as well; posterior probabilities are in the range of 0.387 to 0.593. If we were to allow more clicks (50 clicks) than we observe for the average website visitor, the posterior probabilities converge toward certainty. Based on 50 clicks the median and upper quartile are both 1.00, while the lower quartile is 0.959. The efficiency is 97.0%, very close to what BT would obtain if it had perfect information on cognitive styles (97.4%). We estimate the marginal contribution of the Gen-2 Bayesian loop using revenue projections based on discussions with managers at the BT Group. (Gen-1 results are discussed in §10.) A 20% increase in sales corresponds to an approximately $80 million increase in revenue. The Gittins loop projects a gain of approximately $52.3 million by finding the best morph even without customization. The 10-click Bayesian loop adds another $27.4 million by customizing the look and feel of the website based on posterior cognitive-style-segment probabilities. This is within $2.6 million of what could be obtained with 50 clicks. Perfect information on cognitive-style segments would add yet another $1.8 million, bringing us to $84.1 million.
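The three Table 1 metrics can be reproduced from the benchmark rewards quoted above; in the sketch below, the 10-click reward of roughly 0.384 is implied by the reported 96.5% efficiency rather than stated directly in the text.

```python
no_info     = 0.3205                      # no Gittins loop, no cognitive styles
perfect_all = 0.3984                      # perfect probabilities and segments
ten_click   = 0.965 * perfect_all         # ~0.3845, implied by 96.5% efficiency

improvement  = ten_click / no_info - 1                          # ~0.199 -> 19.9%
efficiency   = ten_click / perfect_all                          # 0.965  -> 96.5%
relative_eff = (ten_click - no_info) / (perfect_all - no_info)  # ~0.82  -> 82.0%
```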
These potential improvements are not insignificant to BT. However, we must caution the reader that BT has not yet implemented a Gen-2 website, and the Gen-1 website is still experimental. Many practical implementation issues remain before these gains are achieved.
Data to Prime the Automated Inference Loops
We now describe the priming study for the experimental BT website. Although the morphing theory of §§2-6 can be applied to a wide range of websites, the priming study is an integral component of the BT application. It provides priors for the α_rm0's, β_rm0's, and q⁰_rn's and data with which to estimate preference weights (Ω) for website characteristics.
7.1. Priming Study: Questionnaires to Potential BT Website Visitors
Using a professional market research company (Applied Marketing Science, Inc.) and a respected British online panel (Research Now), we invited current and potential broadband users to complete an online questionnaire that combined BT's experimental website with a series of preference and cognitive-style questions. This sampling strategy attempts to obtain a representative sample of potential visitors to BT's broadband website. Because these data are used to calibrate key parts of the preference model, it is important that this sample be as representative as is feasible. Within a cognitive-style segment, we seek to assure that any response bias, if it exists, is not correlated with r_n. Fortunately, with sufficient production-website data, the Gittins and Bayesian loops should self-correct for response biases, if any, in segment × morph probabilities and/or cognitive-style segment-membership priors. A total of 835 respondents completed the questionnaire. Because the questionnaire was comprehensive and time consuming, respondents received an incentive of £15. The questionnaire contained the following sequential sections:
• Respondents answer questions to identify whether they are in the target market.
• Respondents identify which of 16 broadband providers they would consider and provide initial purchase-intention probabilities for considered providers.
• Respondents are given a chance to explore one of eight potential morphs for the BT website. The morphs were assigned randomly, and respondents were encouraged to spend at least five minutes on BT's experimental website.
• Respondents provide post-visit consideration and purchase-intention probabilities. (Estimates of the preferences, Ω, should not be affected by any induced demand artifacts; any demand artifacts affect primarily the priors, and the Gittins loop is relatively insensitive to prior probabilities.)
• Respondents are shown eight pairs of websites that vary on three basic characteristics. They are asked to express their preferences between the pairs of websites with a choice-based conjoint analysis-like exercise. These data augment clickstream data when estimating Ω.
• Respondents complete established scales that the academic literature suggests measure cognitive styles. The questionnaire closes with demographic information.
Reaction to the experimental BT websites was positive. Respondents found the websites to be helpful, accurate, relevant, easy to use, enjoyable, and informative (average scores ranging from 3.2 to 3.8 out of 5.0). On average, respondents clicked more than 10 times while exploring the websites, with 10% of the respondents clicking over 30 times. Figure 4 provides 10 of the 13 scales that we used to measure cognitive styles. We chose these scales based on prior literature as the most likely to affect respondents' preferences for website characteristics. We expect these scales to be a good start for website applications.
To encourage further development, the Technical Appendix, available at http://mktsci.pubs.informs.org, provides a taxonomy of potential cognitive styles.
Cognitive-Style Measures
We expected these scales to identify whether the respondent was analytic or holistic, impulsive or deliberative, visual or verbal, and a leader or a follower. The analytic versus holistic dimension is widely studied in psychology and viewed as being a major differentiator of how individuals organize and process information (Riding and Rayner 1998, Allinson and Hayes 1996, Kirton 1987, Riding and Cheema 1991). Researchers in both psychology and marketing suggest that cognitive styles can be further differentiated as either impulsive or deliberative (Kopfstein 1973, Siegelman 1969). With a slight rescaling, three cognitive reflection scales developed by Frederick (2005) differentiate respondents on the impulsive versus deliberative dimension. (For example, "A bat and a ball cost $1.10 in total. The bat costs a dollar more than the ball. How much does the ball cost?" The impulsive answer is 10¢; all other answers are considered to be deliberative.) Other scales measure visual versus verbal styles, a key cognitive concept in psychology (Harvey et al. 1961, Paivio 1971, Riding and Taylor 1976, Riding and Calvey 1981). This dimension is particularly relevant to website design, where the trade-off between pictures and text is an important design element.
[Figure 4. Example Measures of Cognitive Styles.]
Although leadership is not commonly a cognitive-style dimension in psychology, we included leadership scales because thought leadership has proven important in the adoption of new products and new information sources (Rogers 1962, Rogers and Stanfield 1968, von Hippel 1988). To the extent that we included scales that do not distinguish cognitive styles, our empirical analyses will find null effects. Additional scales can be explored in future research. Our results are a conservative indicator of what is feasible with improved scales. Although the scales are well established in the literature, we began with construct tests using our data. We used exploratory factor analysis and confirmatory reliability analyses to reduce the 13 scales (10 scales from Figure 4 plus the 3 impulsive versus deliberative scales) to four cognitive dimensions. (See Braun et al. 2008 for greater detail on scale development and analysis.) For the BT data, impulsive versus deliberative and leader versus follower were measured with sufficient reliability (0.55 and 0.80, respectively); analytic versus holistic and visual versus verbal combined into a single construct (0.56 reliability). The analyses identified a fourth dimension: a single scale, reader versus listener. We suspect that this reader versus listener scale was driven by the nature of broadband service websites, which often give respondents a choice of reading text or tables or listening to an advisor. Although multi-item scales are more common in the literature, recent research recognizes the corresponding advantages of single-item scales (Bergkvist and Rossiter 2007, Drolet and Morrison 2001). Based on this research, we include this single-item scale as a fourth cognitive-style dimension.
Although some of these reliabilities are lower than we would like, this reflects the challenges in measuring cognitive styles and, for our analytic models, adds noise to the estimation of Ω and to the Bayesian loop. Fortunately, the constructs as measured appear to affect purchase probabilities (see Braun et al. 2008). In summary, we identified four empirical constructs to measure respondents' cognitive styles:
• leader versus follower,
• analytic/visual versus holistic/verbal,
• impulsive versus deliberative,
• (active) reader versus (passive) listener.
Click-Alternative Characteristics
There are four sources of variation in click-alternative characteristics. First, the morphs themselves vary on three basic dimensions. Second, click-alternatives within the morphs vary on the same three dimensions. Third, there are functional characteristics of click-alternatives, for example, whether a link provides general information (of potential interest to holistic respondents). Fourth, the home page of the experimental BT website gives the respondent a choice of four content areas. We expect visitors with different cognitive styles to vary in their desire to visit different content areas on their first click.
7.3.1. Basic Characteristics of a Morph.
Based on the literature cited above, we chose three basic click-alternative characteristics that were likely to distinguish morphs and click-alternatives within morphs. These characteristics were used to design the basic structures (backbones) of the BT experimental websites based on initial hypotheses about the variation among cognitive-style segments in preferences for characteristics. The characteristics varied on the following:
• graphical versus verbal (e.g., graphs and pictures versus text and audio),
• small-load versus large-load (e.g., the amount of information presented),
• focused content versus general content (e.g., a few recommended plans versus all plans).
The characteristics of the websites (morphs) that were shown (randomly) to each respondent at the beginning of the questionnaire and the characteristics of the pairs of websites shown in the choice-based conjoint-like exercise were designed to be distinguished on these basic click-alternative dimensions. Hence, we describe each morph by one of eight binary vectors, from (0, 0, 0) to (1, 1, 1). For example, the (1, 1, 1) morph is graphic, focused, and small load. This binary notation is chosen to be consistent with the earlier notation of m = 0 to 7; e.g., m = 0 corresponds to (0, 0, 0). A brief illustrative sketch of this encoding appears at the end of this subsection. We invested considerable effort to design morphs that would match cognitive styles, and to some extent we succeeded. One advantage of the EGI optimization is that asymptotically it will automatically identify the best morph for a cognitive-style segment even if that morph is not the morph that we expect to be best a priori. The system in Figure 2 is robust with respect to errors in website design. In fact, a serendipitous outcome of the priming study was a better understanding of website design and the need for a Gen-2 experimental website. (The Gittins inference/optimization loop is based on discretely many cognitive-style segments r_n; future research might explore more continuous cognitive-style descriptions of website visitors.)
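As promised above, a trivial sketch of the morph encoding (the ordering of the three bits, graphical/focused/small-load from most to least significant, is an assumption for illustration; only the bijection between m = 0..7 and the binary vectors matters):

```python
def morph_characteristics(m):
    """Map the morph index m = 0..7 to its three binary characteristics."""
    return ((m >> 2) & 1, (m >> 1) & 1, m & 1)

assert morph_characteristics(0) == (0, 0, 0)
assert morph_characteristics(7) == (1, 1, 1)
```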
Characteristics of Click Alternatives Within a Morph.
We used five independent judges to rate the basic characteristics of each click-alternative, a methodology that is common in marketing (e.g., Hughes and Garrett 1990, Perreault and Leigh 1989, Wright 1973). The judges were trained in the task but otherwise blind to any hypotheses. The average reliability of these ratings was 0.66 using a robust measure of reliability (proportional reduction in loss; Rust and Cooil 1994). Like cognitive styles, click-alternative characteristics are somewhat noisy but should provide sufficient information for the Bayesian loop and the estimation of the preference weights Ω.
7.3.3. Functional Characteristics of Click Alternatives.
We identified four functional characteristics that were likely to appeal differentially to respondents with different cognitive styles. These functional characteristics were represented with the following binary variables:
• general information about BT (e.g., likely to appeal to holistic visitors),
• an analytic tool that allows visitors to manipulate information (e.g., likely to appeal to analytic visitors),
• a link to read a posting by another consumer (e.g., likely to appeal to followers),
• a link to post a comment (e.g., likely to appeal to deliberative visitors).
Content Areas.
The home page of the experimental BT website offered the visitor four content areas (advisor, community, comparisons, and learning center), each of which could be morphed. Figure 5 illustrates these four content areas. To test whether the content areas would appeal differentially to respondents based on their cognitive-style segments, we coded the content areas as binary variables. (We have three, rather than four, independent dummy variables for the four content areas.) Together, the three types of click-alternative variation give us ten (10) click-alternative characteristics: three basic dimensions, four functional characteristics, and three of four content areas.
Estimation of Click-Alternative Preferences, Ω, from the Priming Data
The Bayesian inference loop uses visitors' clickstreams to compute posterior probabilities for cognitive-style segments r_n. The posterior probabilities (q_rn, Equation (5)) require preference weights, Ω, for the click-alternative characteristics (the c_kjn's). We now address how we obtain a posterior distribution for Ω from the priming data. We can infer a posterior distribution for Ω because, in the priming data, we observe the respondent's cognitive-style segment directly. The inference problem is to infer Ω from the y_n's, c_kjn's, and r_n's. We have two sources of data within the priming study. First, we observe each respondent's clickstream. Second, we augment each respondent's clickstream data with conjoint analysis-like data in which the respondent provides paired-comparison judgments for eight pairs of website pages. Because the latter choices among pairs of websites may not be derived from the same "utility" scale as choices from among click-alternatives, we allow for scale differences. Before we write out the likelihoods for each of the two types of data, we need additional notation.
Cognitive-Style-Segment Vector Notation
In §§2-6 we defined r_n as a scalar. This is a general formulation for the Gittins loop. It allows each cognitive-style segment to be independent of every other segment. In the BT application there are 2^4 = 16 cognitive-style segments based on four binary cognitive-style dimensions, so segments that share dimensions are related. To reflect this interdependence among segments, we rewrite r_n as a 5 × 1 vector whose first element is always equal to 1 and represents the characteristic-specific mean. Each subsequent element of the vector reflects a deviation from that mean based on a cognitive-style dimension of the segment.
For example, a member of cognitive-style segment r_n = 0 ⇔ r⃗_n = (1, −1, −1, −1, −1) is a follower, holistic/verbal, deliberative, and a listener; r_n = 15 ⇔ r⃗_n = (1, 1, 1, 1, 1) is a leader, analytic/visual, impulsive, and a reader. With this notation, we write characteristic preferences compactly as ω_rn = Ω r⃗_n.

8.2. Clickstream Likelihood
Using the vector notation combined with the notation of §§2-6, the clickstream likelihood (CSL) is the standard logit likelihood of the observed clicks implied by ũ_kjn = c_kjn ω_rn + ε̃_kjn (Equation (6)).

8.3. Paired-Comparison Likelihood
Each respondent is presented with eight pairs of website pages that vary on the three basic morph characteristics of graphic versus verbal, focused versus general, and small versus large load. The eight pairs are chosen randomly from a 2^3 experimental design such that no pair is repeated for a respondent, and left and right presentations were rotated randomly. The overall D-efficiency of this design is close to 100%. For each respondent, n, let d_t1n and d_t2n be the descriptions of the left and right website pages, respectively, for the tth pair on the three dimensions, and let s_tn indicate the selection of the left website page, t = 1 to 8. The respondent's preference for the left website page is based on the characteristics of the website pages. If ε̃_tn is an extreme-value measurement error, then the respondent's unobserved preference for the left website page is given by the scaled characteristic difference, (d_t1n − d_t2n) ω_rn, plus ε̃_tn. Note that we allow a differential scale factor to reflect possible differences between the clickstream and paired-comparison choice tasks. With this formulation, the paired-comparison likelihood (PCL) becomes the standard choice-based conjoint likelihood (Equation (7)), which assumes that the unobserved errors are independent across paired-comparison choices. Finally, we use the method of Train (2003) to match the variances in Equations (6) and (7) and to assure that Ω is scaled properly for both likelihoods. 16

8.4. Posterior Distribution for Cognitive-Style Preferences
We combine Equations (6) and (7) with weakly informative priors, g, on the unknown parameters to obtain a posterior distribution for the cognitive-style preferences and the scaling parameter. Equation (8) assumes that the unobserved errors in the clickstream are independent of the measurement errors in the paired-comparison choices:

f(Ω, scale factor | c_kjn's, d_t1n's, d_t2n's, s_tn's, y_n's, r_n's, ∀ k, j, t, n) ∝ PCL × CSL × g.    (8)

From the 835 respondents in the priming study we observe 4,019 relevant clickstream choices and 6,680 paired-comparison choices. Samples from the posterior distribution of Ω and the scaling parameter were generated using WinBUGS. 17 Table 2 provides the posterior means of Ω. Appendix 2 provides the intervals between the 0.05 and 0.95 quantiles for the posterior distribution. Using the mean posterior probabilities alone, we explain 60.3% of the uncertainty in the clickstream choices (U² = 0.603; Hauser 1978).

16 The standard deviations of the error terms, ε_kjn and ε̃_tn, for the logit likelihoods determine the scale or "accuracy" of the parameter estimates. By allowing the scale factor to differ from 1, we automatically allow different standard deviations for the errors. Independence assumes the conjoint design is not endogenous (Hauser and Toubia 2005).
17 WinBUGS code and convergence details are available from the authors. As a check on the WinBUGS code, we also estimated Ω using classical methods (maximum likelihood estimation (MLE)). The Bayesian and MLE estimates were statistically equivalent.
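To make the machinery above concrete, the following Python sketch implements, under simplifying assumptions, the two likelihood components in the spirit of Equations (6) and (7), the compact parameterization ω_rn = Ω r⃗_n, and the real-time Bayesian update of segment probabilities in the spirit of Equation (5). It is illustrative only: the placeholder data structures, the padding of the paired-comparison descriptions to ten characteristics, and all numerical values are assumptions made for the example, and it is not the authors' WinBUGS model or production code.

import numpy as np

def logsumexp(v):
    m = v.max()
    return m + np.log(np.exp(v - m).sum())

def omega_for_segment(Omega, r_vec):
    # Omega: 10 x 5 matrix of preference parameters.
    # r_vec: 5 x 1 segment vector, leading 1 (mean) then +/-1 per cognitive-style dimension.
    return Omega @ r_vec

def clickstream_loglik(clicks, w):
    # clicks: list of (alt_chars, chosen); alt_chars is J x 10 for the J
    # click-alternatives available at that click. Standard multinomial logit.
    return sum((alt_chars @ w)[chosen] - logsumexp(alt_chars @ w)
               for alt_chars, chosen in clicks)

def paired_comparison_loglik(pairs, w, scale):
    # pairs: list of (d_left, d_right, chose_left); the d's describe the two
    # website pages (padded to length 10 here so the same w applies).
    # 'scale' captures the possibly different error scale of the conjoint task.
    ll = 0.0
    for d_left, d_right, chose_left in pairs:
        v = scale * (d_left - d_right) @ w           # binary logit on the difference
        ll += -np.log1p(np.exp(-v)) if chose_left else -np.log1p(np.exp(v))
    return ll

def priming_loglik(respondents, Omega, scale):
    # In the priming study each respondent's segment vector is observed directly,
    # so the joint likelihood simply sums the two components over respondents.
    total = 0.0
    for clicks, pairs, r_vec in respondents:
        w = omega_for_segment(Omega, r_vec)
        total += clickstream_loglik(clicks, w)
        total += paired_comparison_loglik(pairs, w, scale)
    return total

def segment_posterior(clicks, Omega, segment_vectors, q_prior):
    # Real-time Bayesian loop: given an estimated Omega, update the posterior
    # probability of each cognitive-style segment from an arriving visitor's clickstream.
    log_q = np.log(q_prior)
    for alt_chars, chosen in clicks:
        for r, r_vec in enumerate(segment_vectors):
            v = alt_chars @ omega_for_segment(Omega, r_vec)
            log_q[r] += v[chosen] - logsumexp(v)
    log_q -= log_q.max()
    q = np.exp(log_q)
    return q / q.sum()

# Tiny demo with made-up numbers (two of the sixteen segments, four clicks).
rng = np.random.default_rng(0)
Omega = rng.normal(scale=0.5, size=(10, 5))
segments = [np.array([1, -1, -1, -1, -1]), np.array([1, 1, 1, 1, 1])]
clicks = [(rng.integers(0, 2, size=(8, 10)).astype(float), 3) for _ in range(4)]
print(segment_posterior(clicks, Omega, segments, np.array([0.5, 0.5])))

In practice the posterior for Ω would be sampled (as in the paper's WinBUGS estimation) or maximized; the joint log-likelihood above is the quantity such a routine would evaluate.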
We have highlighted in bold those coefficients for which the 0.05 to 0.95 quantile of the posterior distribution is either all positive or all negative. The lack of "significance" for the remaining coefficients might reflect insufficient variation in functional characteristics, the relative sparseness of data for the website areas (first click only), or unobserved variation. 18, 19 We expect improved discrimination on BT's Gen-2 websites. By creating more distinct click-alternative choices, the Gen-2 website will be better able to identify cognitive styles with only a few clicks.

On average, graphical content increases preference, but small loads and focused content decrease preference. Analytic tools, consumer posts, plan comparisons, and virtual advisors are popular click choices by respondents. Respondents prefer to go first to website areas that compare plans and provide virtual advisors. There are also cognitive-style-specific effects: respondents who are holistic/verbal or readers prefer focused content. Although not quite "significant," impulsive respondents prefer small information loads. The tendency to go first to plan comparisons and virtual advisors while avoiding general information appears to be a trait that distinguishes analytic/visual from holistic/verbal respondents. In the spirit of Bayesian inference, we cautiously examine characteristics for which 80% of the posterior is either all positive or all negative. In this case we would find that followers like learning communities and that listeners like to post comments and compare plans. Listeners also prefer verbal and general content, and analytic/visual respondents prefer large information loads. We interpret these results, based on the Gen-1 experimental website, as hypotheses to be tested with Gen-2 websites and the corresponding priming studies.

9. Strong Priors for Gittins and Bayesian Loops
The priming study was based on a representative sample of potential visitors to BT's experimental Gen-1 website. We can use these data to obtain strong priors with which to improve the performances of the Gittins and Bayesian loops. For example, although the Gittins loop works well with equally likely priors on the beta parameters, the analyses of §4 suggest that we can achieve a slight improvement with stronger priors.

18 We use the classical term "significance" as shorthand for the quantiles being either all positive or all negative. We do this for ease of exposition, recognizing the more subtle Bayesian interpretation.
19 Preferences vary across cognitive-style segments, and the model does explain over 60% of the variation in clickstream choices. Future research might test more complex specifications subject to the need to update q_rn in real time. For example, if we specified a normal hyperdistribution over the 50 parameters in Table 2, updating q_rn would require extensive numerical integration (or simulated draws) in real time (e.g., 50 parameters × 16 segments × 10 clicks × 10 alternatives per click).

9.1. Prior Cognitive-Style-Segment Probabilities for the Bayesian Loop
Using the established scales, we observed the cognitive-style segment, r_n, for every respondent in the representative sample. The empirical distribution of cognitive-style segments provides priors, q^o_rn, for the Bayesian loop.

9.2. Prior Purchase Probabilities for the Gittins Loop
In the priming study we observe directly each respondent's purchase intentions.
Thus, because we assigned each respondent randomly to one of the eight morphs and we inferred that respondent's cognitive-style segment from the established scales, we have a direct estimate of the prior purchase probabilities for each segment × morph combination, p̂_rmo. These direct estimates provide information on the prior beta parameters via p̂_rmo = α_rmo/(α_rmo + β_rmo). For the Gittins loop, we want the data to overwhelm the prior, so we select a relatively small effective sample size, N_rmo, for the beta prior. Because N_rmo = α_rmo + β_rmo, and because the variance of the beta distribution is α_rmo β_rmo/[(α_rmo + β_rmo)²(α_rmo + β_rmo + 1)], we choose an approximate N_rmo by managerial judgment informed by matching the variance of the beta distribution to the variance of the observed purchase-intention probabilities. For our data we select N_rmo ≈ 12. (A small numerical sketch of this prior construction appears below.)

9.3. Caveats and Practical Considerations
With sufficiently many website visitors from whom to observe actual purchase decisions, the estimated purchase probabilities, p̄_rmn, will converge to their true values and the priors will have negligible influence. Nonetheless, we sought to use the data more efficiently to obtain strong priors for the Gittins and Bayesian loops. Our first practical consideration was sample size. With 835 respondents, 16 cognitive-style segments, and eight morphs, the average sample size for each segment × morph estimate of p̂_rmo is small. To make more efficient use of the data and smooth these estimates over the r × m cells, we used logistic regression. The explanatory variables were the basic characteristics of the morphs, the cognitive-style dimensions of the segments, and characteristic-dimension matches (e.g., small information loads for impulsive segments). The variance of p̂_rmo is also based on the smoothed estimates. See Braun et al. (2008) for further analyses.

Our second practical consideration in the priming study was the use of purchase intentions rather than observed purchases. In production, website visitors self-select to come to BT's website; we expect such visitors to be closer in time to purchasing broadband service than those recruited for the priming study. Although we were careful in recruiting to obtain a representative sample, we measured purchase intentions rather than observed purchases. 20 Purchase intentions have the benefit of obtaining a more discriminating measure from each respondent than a 0-versus-1 purchase. However, purchase intentions are often subject to demand artifacts (e.g., Morwitz et al. 1993). For example, for nonfrequently purchased items, true probabilities tend to be linear in purchase intentions (Jamieson and Bass 1989, Kalwani and Silk 1982, Morrison 1979). To reduce the impact of potential scale factors, we normalized purchase-intention measures relative to other broadband services and we used the baseline benchmarks in Table 1 as quasi controls. Revenue increases are based on the relative efficiencies of the Gittins and Bayesian loops. Finally, because morphs were assigned randomly and each respondent saw only one morph, the relative differences between morphs are less sensitive to any demand artifacts.

10. Improvements and Further Applications
The development and testing of morphing websites is ongoing. BT is optimistic based on the Gen-1 priming study. Viewed as a feasibility test, the Gen-1 test identified a few website characteristics that could be matched to cognitive-style segments. The Gen-1 test also confirmed that website characteristics can affect purchase probabilities.
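Before turning to the revenue results, here is the small numerical sketch of the §9.2 prior construction promised above. It is a generic illustration in Python: the purchase probability is a made-up placeholder, and only the choice of an effective sample size near 12 mirrors the text.

import numpy as np

def beta_prior(p_hat, n_effective):
    # p_hat = alpha/(alpha + beta) and n_effective = alpha + beta,
    # so the prior is equivalent to n_effective pseudo-observations.
    alpha = p_hat * n_effective
    beta = (1.0 - p_hat) * n_effective
    return alpha, beta

def beta_variance(alpha, beta):
    s = alpha + beta
    return alpha * beta / (s ** 2 * (s + 1.0))

p_hat = 0.30                        # placeholder segment x morph purchase probability
for n in (4, 12, 50):
    a, b = beta_prior(p_hat, n)
    print(n, round(beta_variance(a, b), 4))
# Larger effective sample sizes shrink the prior variance; matching this variance
# to the observed spread of purchase-intention probabilities is one way to settle
# on a value such as N_rmo of about 12.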
Before collecting data we did not know which of the eight morphs would maximize revenue. However, the Gittins loop alone (without morphing) identified the best website characteristics, implying an increase in revenue of $52.3 million (Table 1 and §6). Section 6 also suggests that a Gen-2 website (designed to distinguish among cognitive styles cleanly after 10 clicks) could increase revenues an additional $27.4 million. Based on this "proof of concept," BT plans to implement the customer-advocacy backbone, illustrated in Figures 1 and 5, and add Gen-2 morphing to the site as soon as feasible. 20 In addition, Suruga Bank in Japan is developing and testing a morphing website to sell personal loans. The website morphs based on cognitive styles and cultural preferences such as hierarchical versus egalitarian, individual versus collective, and emotional versus neutral (Hofstede 1983, 1984, 2001; Trompenaars and Hampden-Turner 1997; Steenkamp et al. 1998).

20 As is appropriate ethically and legally, respondents were recruited with promises that we would not attempt to sell them anything in the guise of market research. Because of these guidelines we could not offer respondents the ability to sign up for a BT broadband plan.

10.1. Gen-1 Compared to Gen-2 Experimental Websites
The eight morphs in the Gen-1 experimental website were sufficiently varied in the way they affected purchase probabilities. However, the website characteristics within a morph (from which we identify cognitive-style segments) were not sufficiently varied in Gen-1. For example, the website areas on the Gen-1 home page were effective at distinguishing analytic/visual from holistic/verbal respondents (see Table 2), but less so on the other cognitive-style dimensions. The simulations in Table 1 assumed that website characteristics within a morph were more distinct, leading to larger posterior means (the Gen-2 Ω). (BT feels that such a website is feasible.) To motivate Gen-2 development and to assess the Bayesian-loop gains for Gen-1, we resimulated the Bayesian loop with the Gen-1 Ω. (The Gittins-only-loop results remain unchanged.) With 10 clicks, 80,000 visitors, and the Gen-1 Ω, the expected reward is 0.3646. While the implied revenue increase is not insignificant for BT, the Gen-1 gains (total Gittins + Bayesian gains = $54.9 million) are much smaller than the potential gains with a Gen-2 website (total gains = $79.7 million). Interestingly, even the Gen-1 website could obtain substantially more revenue if it had infinitely many visitors, such that the system learned almost perfectly the segment × morph purchase probabilities, p_rm. Gen-1 with n = ∞ could achieve $75.7 million in additional revenues, close to that which Gen-2 achieves with 80,000 visitors.

Future Research to Improve the Theory and Practice of Morphing
Prior research and industry practice have demonstrated the power of self-selected branching, recommendations, and customized content (Ansari and Mela 2003, Montgomery et al. 2004). In this paper we explore the next step: changing the presentation of information to match each customer's cognitive style. The EGI solution to the POMDP enables us to explore different assignments of morphs to cognitive-style segments. The Bayesian updating enables customers to reveal their cognitive styles through their clickstreams. Together, the Gittins and Bayesian loops automate morphing (after a priming study). Feasibility considerations required empirical tradeoffs.
We used segments of cognitive styles rather than continuously defined cognitive styles because the dynamic program requires finitely many "arms." We morphed once per visit, in part, because we observe a single subscription decision per customer. We estimated homogeneous click-characteristic preference weights so that we could identify cognitive-style segments in real time. We used the posterior mean of Ω rather than sampling from the posterior distribution of Ω because we need to compute the EGI between clicks. Moreover, the priming study was based on a Gen-1 implementation. Each of these issues can be addressed in future applications.

BT was most interested in broadband subscriptions. In other applications, purchase amounts might be important. If purchase amounts are normal random variables, we can use normal priors rather than beta priors. Gittins (1979, pp. 160-161) demonstrates that this normal-normal case is also solved with an index strategy and provides algorithms to compute the normal-normal indices. Vermorel and Mohri (2005) explore a series of heuristic algorithms that perform well in online contexts. We easily extend the theory to a situation where we observe (1) whether a purchase is made and (2) the amount of that purchase. In this case we observe the normally distributed outcome conditioned on a Bernoulli outcome. This is a special case of "bandit branching" as introduced by Weber (1992) and studied by Bertsimas and Niño-Mora (1996) and Tsitsiklis (1994). Using a "fair charge" argument, Weber shows that the value of a bandit-branching process can be computed by replacing the reward to a branch with its Gittins index. The index of a sales-then-sales-amount process becomes the product of the beta-Bernoulli and the normal-normal indices. All other considerations in Figure 2 remain the same. Recent developments in the bandit literature now make it feasible to include switching costs via fast generalized index heuristics (e.g., Dusonchet and Hongler 2006, Jun 2004).

Our application focused on cognitive styles. The literatures in psychology and learning posit that cognitive styles are enduring characteristics of human beings. If our EGI algorithm were extended to other marketing-mix elements besides website design, we might consider latent states that evolve randomly or based on marketing-mix elements. (See review in §3.) There are exciting opportunities to combine the advantages of HMMs or latent-class analysis with the exploration-exploitation trade-offs made possible with expected Gittins indices.
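To make the index machinery discussed in this section concrete, the following generic Python sketch computes a beta-Bernoulli Gittins index by the standard retirement-option calibration: bisection on the retirement reward, with a truncated look-ahead. It is not the authors' algorithm, and the discount factor, truncation depth, and example beta parameters are arbitrary illustrative choices.

from functools import lru_cache

def gittins_index_beta_bernoulli(alpha, beta, discount=0.90, depth=60, tol=1e-4):
    # Index of a beta(alpha, beta) Bernoulli arm: the retirement reward lambda
    # at which continuing to play and retiring are equally attractive.

    def continue_value(lam):
        retire = lam / (1.0 - discount)

        @lru_cache(maxsize=None)
        def value(a, b):
            p = a / (a + b)
            if (a + b) - (alpha + beta) >= depth:      # truncate the look-ahead
                return max(lam, p) / (1.0 - discount)
            cont = (p * (1.0 + discount * value(a + 1, b))
                    + (1.0 - p) * discount * value(a, b + 1))
            return max(retire, cont)

        p0 = alpha / (alpha + beta)
        return (p0 * (1.0 + discount * value(alpha + 1, beta))
                + (1.0 - p0) * discount * value(alpha, beta + 1))

    lo, hi = 0.0, 1.0
    while hi - lo > tol:                               # bisection on lambda
        lam = 0.5 * (lo + hi)
        if continue_value(lam) > lam / (1.0 - discount):
            lo = lam                                   # arm still worth playing
        else:
            hi = lam
    return 0.5 * (lo + hi)

# The index exceeds the posterior mean when little is known (exploration value)
# and approaches the mean as evidence accumulates.
print(gittins_index_beta_bernoulli(4, 8))      # mean 1/3, little data
print(gittins_index_beta_bernoulli(40, 80))    # same mean, ten times the data

For the sales-then-sales-amount extension described above, each such beta-Bernoulli index would be multiplied by the corresponding normal-normal index, following the fair-charge argument cited in the text.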
• u_kjn = visitor n's utility for the jth click-alternative of the kth click; implies the clickstream likelihood;
• y_kjn = 1 if visitor n chooses the jth click-alternative on the kth click, 0 otherwise;
• y_kn = binary vector for the kth decision point for the nth visitor;
• y_n = clickstream matrix for the nth visitor;
• y = set of y_n's for all n, used only in summary notation;
• α_rmn = parameter of the naturally conjugate beta distribution used in the Gittins dynamic program (α_rmo is a prior value);
• β_rmn = parameter of the naturally conjugate beta distribution used in the Gittins dynamic program (β_rmo is a prior value; the priors α_rmo and β_rmo are equivalent to pseudo-observations);
• δ_mn = indicator variable to indicate when the nth visitor purchases a BT broadband plan after seeing morph m; δ_rmn when r is known and we wish to make dependence on r explicit;
• Δ = matrix of the δ_mn's, used in summary notation only;
• ε_kjn = extreme-value errors for choice among click-alternatives;
• scale factor = scaling parameter to allow scale differences in clickstream and paired-comparison data;
• ω_rn = preference vector for the r_nth cognitive-style segment; used in ũ_kjn = c_kjn ω_rn + ε̃_kjn;
• Ω = matrix of the ω_rn's; Ω is a 10 × 5 matrix;
• ε̃_tn = extreme-value measurement error used for paired-comparison conjoint questions.
The Effects of Internet-Based Storytelling Programs (Amazing Adventure Against Stigma) in Reducing Mental Illness Stigma With Mediation by Interactivity and Stigma Content: Randomized Controlled Trial

Background: Mental illness stigma has been a global concern, owing to its adverse effects on the recovery of people with mental illness, and it may delay help-seeking for mental health problems because of concerns about being stigmatized. With technological advancement, internet-based interventions for the reduction of mental illness stigma have been developed, and their effects have been promising.

Objective: This study aimed to examine the differential effects of internet-based storytelling programs, which varied in the levels of interactivity and stigma content, in reducing mental illness stigma.

Methods: Using an experimental design, this study compared the effects of 4 storytelling websites that varied in the levels of interactivity and stigma content. Specifically, the conditions included an interactive website with stigma-related content (combo condition), a noninteractive website with stigma-related content (stigma condition), an interactive website without stigma-related content (interact condition), and a noninteractive website without stigma-related content (control condition). Participants were recruited via mass emails to all students and staff of a public university and via social networking sites. Eligible participants were randomized into the following four conditions: combo (n=67), stigma (n=65), interact (n=64), or control (n=67). The participants of each group viewed the respective web pages at their own pace. Public stigma, microaggression, and social distance were measured on the web before the experiment, after the experiment, and at the 1-week follow-up. Perceived autonomy and immersiveness, as mediators, were assessed after the experiment.

Results: Both the combo (n=66) and stigma (n=65) conditions were effective in reducing public stigma and microaggression toward people with mental illness after the experiment and at the 1-week follow-up. However, none of the conditions had significant time × condition effects in reducing the social distance from people with mental illness. The interact condition (n=64) significantly reduced public stigma after the experiment (P=.02) but not at the 1-week follow-up (P=.22). The control condition (n=67) did not significantly reduce any of the outcomes associated with mental illness stigma. Perceived autonomy was found to mediate the effect of public stigma (P=.56), and immersiveness mediated the effect of microaggression (P=.99).

Conclusions: Internet-based storytelling programs with stigma-related content and interactivity elicited the largest effects in stigma reduction, including reductions in public stigma and microaggression, although the difference from internet-based storytelling programs with stigma-related content only was not statistically significant. In other words, although interactivity could strengthen the stigma reduction effect, stigma-related content was more critical than interactivity in reducing stigma. Future stigma reduction efforts should prioritize the production of effective stigma content on web pages, followed by considering the value of incorporating interactivity in future internet-based storytelling programs.
Trial Registration: ClinicalTrials.gov NCT05333848; https://clinicaltrials.gov/ct2/show/NCT05333848

The CONSORT-EHEALTH checklist is intended for authors of randomized trials evaluating web-based and Internet-based applications/interventions, including mobile interventions, electronic games (including multiplayer games), social media, certain telehealth applications, and other interactive and/or networked electronic applications. Some of the items (e.g., all subitems under item 5, description of the intervention) may also be applicable to other study designs. The goal of the CONSORT-EHEALTH checklist and guideline is to be (a) a reporting guide for authors of RCTs and (b) a basis for appraisal of an ehealth trial (in terms of validity). CONSORT-EHEALTH items/subitems are MANDATORY reporting items for studies published in the Journal of Medical Internet Research and other journals/scientific societies endorsing the checklist. As the CONSORT-EHEALTH checklist is still considered to be in a formative stage, we would ask that you also RATE ON A SCALE OF 1-5 how important/useful you feel each item is FOR THE PURPOSE OF THE CHECKLIST and reporting guideline (optional). Mandatory reporting items are marked with a red *.

• yes: all primary outcomes were significantly better in intervention group vs control
• partly: SOME primary outcomes were significantly better in intervention group vs control
• no statistically significant difference between control and intervention
• potentially harmful: control was significantly better than intervention in one or more outcomes
• inconclusive: more research is needed
• Other:

Identify the mode of delivery. Preferably use "web-based" and/or "mobile" and/or "electronic game" in the title. Avoid ambiguous terms like "online", "virtual", "interactive". Use "Internet-based" only if the intervention includes non-web-based Internet components (e.g., email); use "computer-based" or "electronic" only if offline products are used. Use "virtual" only in the context of "virtual reality" (3-D worlds). Use "online" only in the context of "online support groups". Complement or substitute product names with broader terms for the class of products (such as "mobile" or "smart phone" instead of "iphone"), especially if the application runs on different platforms.

Copy and paste relevant sections from manuscript title (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

The non-web-based components are involved in the INTERACT and CONTROL conditions only.

Features/functionalities/components of the intervention and comparator in the METHODS section of the ABSTRACT
Mention key features/functionalities/components of the intervention and comparator in the abstract. If possible, also mention theories and principles used for designing the site. Keep in mind the needs of systematic reviewers and indexers by including important synonyms.
Copy and paste relevant sections from the manuscript abstract (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

"This study compared the effects of four storytelling websites varied on levels of interactivity and stigma content using an experimental design. Specifically, the conditions included an interactive website with stigma-related content (COMBO condition), a noninteractive website with stigma content (STIGMA condition), an interactive website without stigma-related content (INTERACT condition) and a non-interactive website without stigma-related content (CONTROL condition). Participants were recruited via mass emails to all students and staff of a public university and social networking sites. Eligible participants were randomized into four conditions: COMBO (n=67), STIGMA (n=65), INTERACT (n=64) or CONTROL (n=67). Participants viewed the respective Web page at their own pace. Public stigma, microaggression, and social distance were measured online at pre-experiment, post-experiment, and 1-week follow-up. Perceived autonomy and immersiveness as mediators were assessed at post-experiment."

Clarify the level of human involvement in the abstract, e.g., use phrases like "fully automated" vs. "therapist/nurse/care provider/physician-assisted" (mention number and expertise of providers involved, if any).

Open vs. closed, web-based vs. face-to-face assessments in the METHODS section of the ABSTRACT
Mention how participants were recruited (online vs. offline), e.g., from an open access website or from a clinic or a closed online user group (closed usergroup trial), and clarify if this was a purely web-based trial, or there were face-to-face components (as part of the intervention or for assessment). Clearly say if outcomes were self-assessed through questionnaires (as common in web-based trials).

Copy and paste relevant sections from the manuscript abstract (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

"Participants were recruited via mass emails to all students and staff of a public university and social networking sites." "Participants viewed the respective Web page at their own pace." "Public stigma, microaggression, and social distance were measured online at pre-experiment, post-experiment, and 1-week follow-up. Perceived autonomy and immersiveness as mediators were assessed at post-experiment."

Copy and paste relevant sections from the manuscript abstract (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

"Both COMBO (n=66) and STIGMA (n=65) conditions were efficacious in reducing public stigma and microaggression towards people with mental illness at post- and 1-week follow-up. However, none of the conditions had significant time x condition effects in reducing social distance from people with mental illness. INTERACT condition (n=64) can significantly reduce public stigma at post- but not 1-week follow-up. CONTROL condition (n=67) was not significant in reducing all mental illness stigma outcomes.
Perceived autonomy was found to mediate the effect of public stigma, and immersiveness mediated the effect of microaggression."

Conclusions/Discussions in abstract for negative trials: Discuss the primary outcome: if the trial is negative (primary outcome not changed) and the intervention was not used, discuss whether negative results are attributable to lack of uptake and discuss reasons.

Copy and paste relevant sections from the manuscript abstract (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

"Internet-based storytelling program with stigma-related content and interactivity elicited the largest effects in stigma reduction, including reductions in public stigma and microaggression, although its difference with Internet-based storytelling programs with stigma-related content only was not statistically significant. In other words, although interactivity could strengthen the stigma reduction effect, stigma-related content was a more critical element than interactivity in reducing stigma. Future stigma reduction efforts should place higher priority in producing effective stigma content in a Web page, followed by considering the value of incorporating interactivity in future Internet-based storytelling programs."

Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

"Mental illness stigma is a globally concerning issue due to its detrimental effects imposed on people with mental illness across various life domains (e.g., education, housing, employment, healthcare) during their recovery and their willingness to seek help [1-3]." "However, the content and design of these Internet-based stigma reduction programs varied largely and limited efforts have been put to investigate the common factors contributing to their effectiveness." "Apart from incorporating the critical determinants, namely education and contact, in stigma reduction, many Internet-based interventions have made use of interactivity and storytelling in their designs and have demonstrated positive results in reducing mental illness stigma [24-28]. Yet, the types of interactivity are diverse and it is unknown whether the addition of interactivity induces significant positive attitudinal changes that should be valued."

Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

"Recently, Internet-based programs tackling mental illness stigma have been established across the globe due to their low cost, accessibility, and scalability [12-14]." "Research has also shown that Internet-based and face-to-face stigma reduction programs are equally effective [20,21]." "According to the Systemic Thinking Model, in interactive environments, interactivity allows individuals to be the agent and effect physical environmental changes that best align with their thinking needs and flow [36]."
"Research has found interactivity to have a significant role in improving information processing through enhanced motivation, which facilitates stigma reduction [26]. Perceived autonomy and immersiveness have been found to enhance motivation [43,44]" Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study "The present experimental study aimed to investigate the effect of Internet-based storytelling programs with the manipulation of stigma-related content and interactivity. In the present study, we hypothesized that an Internet-based storytelling program with a combination of interactivity and stigma content would lead to the most significant reduction in public stigma, microaggression, and social distance from people with mental illnesses, followed by Internet-based storytelling program with stigma content-only and interactivityonly, compared with control. Second, we hypothesized that the effects observed in stigma reduction would be mediated by perceived autonomy and immersiveness due to the presence of interactivity." Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study "Recruitment was done through sending mass emails to students and staff at a public university in Hong Kong and by posting advertisements on social media." As the whole recruitment procedure was done online, participants successful registered were considered as having an adequate level of computer literacy. Open vs. closed, web-based vs. face-to-face assessments: Mention how participants were recruited (online vs. offline), e.g., from an open access website or from a clinic, and clarify if this was a purely webbased trial, or there were face-to-face components (as part of the intervention or for assessment), i.e., to what degree got the study team to know the participant. In online-only trials, clarify if participants were quasi-anonymous and whether having multiple identities was possible or whether technical or logistical measures (e.g., cookies, email confirmation, phone calls) were used to detect/prevent these. Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study "Individuals who were interested in participating in the study visited the registration link where they were screened by completing a Web-based survey on basic contact information and age. The experimenter then provided eligible individuals a Zoom appointment link, where individuals indicated their preferred experiment day and time. A Zoom link was given to individuals upon their completion of booking. At the scheduled Zoom experimental session, participants were given detailed information about the study aims, length of the program, and participant involvement. Participants provided informed consent by checking the "I agree" button at the end of the study description page. 
Afterwards, participants received another Web-based questionnaire link to complete the pre-experiment questionnaire. Participants were then randomly assigned to one of the four experimental conditions through block randomization. Participants completed the pre-, post-, and 1-week follow-up questionnaires on the Web."

Information given during recruitment. Specify how participants were briefed for recruitment and in the informed consent procedures (e.g., publish the informed consent documentation as appendix, see also item X26), as this information may have an effect on user self-selection, user expectation and may also bias results.

Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

"At the scheduled Zoom experimental session, participants were given detailed information about the study aims, length of the program, and participant involvement. Participants provided informed consent by checking the "I agree" button at the end of the study description page."

Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

"Participants completed the pre-, post-, and 1-week follow-up questionnaires on the Web."

Clearly report if outcomes were (self-)assessed through online questionnaires (as common in web-based trials) or otherwise.

Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

"At the end of each Web page experience, participants were provided with a questionnaire link, which measures microaggression, public stigma, social distance from people with mental illness, perceived autonomy, and immersiveness. One week after the experimental session, participants completed the follow-up questionnaire assessing microaggression, public stigma, and social distance from people with mental illness."

Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

"Recruitment was done through sending mass emails to students and staff at a public university in Hong Kong and by posting advertisements on social media." This is not relevant for the study, as information related to the institution was only displayed during recruitment.

Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

No developers were involved in the study.
- Describe the history/development process of the application and previous formative evaluations (e.g., focus groups, usability testing), as these will have an impact on adoption/use rates and help with interpreting results.

Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

There were no previous attempts to evaluate the effect of the intervention. Adoption/use rate was not applicable as it was a one-off intervention.

- Revisions and updating. Clearly mention the date and/or version number of the application/intervention (and comparator, if applicable) evaluated, or describe whether the intervention underwent major changes during the evaluation process, or whether the development and/or content was "frozen" during the trial. Describe dynamic components such as news feeds or changing content which may have an impact on the replicability of the intervention (for unexpected events see item 3b).

Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

There were no revisions or updates after the trial commenced.

Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

"The presence or absence of interactivity and the presence or absence of stigma content were manipulated in the four Web pages. All Web pages involved a story. For the COMBO and STIGMA conditions, the story was identical, which was about the journey of a person experiencing mental illness stigma. The COMBO condition utilized the Amazing Adventure Against Stigma website (https://antistigma.psy.cuhk.edu.hk/). For the INTERACT and CONTROL conditions, the story was also identical and non-stigma related, illustrating a typical day of a person."

Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

Please refer to Multimedia Appendix 1.

- Digital preservation: Provide the URL of the application, but as the intervention is likely to change or disappear over the course of the years, also make sure the intervention is archived (Internet Archive, webcitation.org, and/or publishing the source code or screenshots/videos alongside the article). As pages behind login screens cannot be archived, consider creating demo pages which are accessible without login.

Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

"Each Web page took approximately 20 minutes to browse through."
Clarify the level of human involvement (care providers or health professionals, also technical assistance) in the e-intervention or as co-intervention (detail number and expertise of professionals involved, if any, as well as "type of assistance offered, the timing and frequency of the support, how it is initiated, and the medium by which the assistance is delivered"). It may be necessary to distinguish between the level of human involvement required for the trial, and the level of human involvement required for a routine application outside of a RCT setting (discuss under item 21, generalizability).

Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

"Afterwards, participants received another Web-based questionnaire link to complete the pre-experiment questionnaire." "At the end of each Web page experience, participants were provided with a questionnaire link, which measures microaggression, public stigma, social distance from people with mental illness, perceived autonomy, and immersiveness. One week after the experimental session, participants completed the follow-up questionnaire assessing microaggression, public stigma, and social distance from people with mental illness."

"To assess one's previous experience with mental illness, the Level of Contact Report [55] was employed, where participants indicated whether they had the experiences reported in the 12 items such as "I have watched a movie or television show in which a character depicted a person with mental illness" and "I have observed persons with a severe mental illness on a frequent basis". Higher scores indicate higher levels of previous contact with people having mental illness."

"The 21-item Public Stigma Scale-Mental Illness-Short Version (PSSMI) [56] was used to assess mental illness public stigma and personal advocacy. Each item was rated on a 6-point Likert scale from 1 (strongly disagree) to 6 (strongly agree). Sample items included "People with mental illness are a burden to society." (public stigma), and "I wholeheartedly fight for the rights of people with mental illness." (personal advocacy). Reverse scoring was done for personal advocacy items. Higher scores indicate higher levels of public stigma. In this study, its Cronbach alphas were .93, .95, and .94, at baseline, post, and 1-week follow-up, respectively."

"Microaggression was measured by the 17-item Mental Illness Microaggressions Scale (MIMS-P) [57], which covers assumption of inferiority, patronization, and fear of mental illness. Each item was rated on a 4-point Likert scale from 1 (strongly disagree) to 4 (strongly agree). Sample items included "If someone I'm close to told me that they had a mental illness diagnosis, I would expect them to have trouble understanding some things" (assumption of inferiority), "If someone I'm close to told me that they had a mental illness diagnosis, I would give them advice on how to remain stable" (patronization) and "If I saw a person who I thought had a mental illness in public, I would keep my distance from them" (fear of mental illness). Higher scores indicate higher levels of microaggression. In this study, the Cronbach alphas of the MIMS-P were .78, .86, and .87 at baseline, post, and 1-week follow-up, respectively."
"The 8-item Social Distancing Scale [56] was used to measure the behavioral intention to keep a social distance from people with mental illness. Participants rated the extent to which they endorsed each item from 1 (very willing) to 6 (very unwilling) on items such as "Assuming you have children, you will let persons with mental illnesses take care of your children" and "You will work with persons with mental illnesses in the same institution". In this study, its Cronbach alphas were .83, .88, and .86, at baseline, post, and 1-week followup, respectively." "To assess perceived autonomy of the Web page experience, the 10-item Self Determination Scale (SDS) [58] was used in the post-experiment questionnaire. Each item was a pair of opposite statements, in which participants rated their level of perceived choice and selfawareness with a slider from 1 (only A feels true) to 5 (only B feels true subitem not at all important 1 2 3 4 5 essential g (p ) g p g p , y sometimes seem alien to me. B. During this web page experience, my emotions always seem to belong to me" (self-awareness)". Reverse scoring was done for perceived choice items. In this study, its Cronbach alpha was .89 at post-experiment." "The 15-item Transportation Scale [59] was used to assess participants' immersiveness in the Web experience. It had a 4-point Likert scale from 1 (very much) to 4 (not at all) on items such as "I could picture myself in the scene of the events described in the Web page". The last four items were adapted to fit with the experimental conditions. In the COMBO and STIGMA conditions, the last four items included "While reading the Web page, I had a vivid image of the avatar representing me", "While reading the Web page, I had a vivid image of the host", "While reading the Web page, I had a vivid image of the journey" and "While reading the Web page, I had a vivid image of the dialogue". In the INTERACT and CONTROL conditions, the last four items were "While reading the Web page, I had a vivid image of the avatar representing me", "While reading the Web page, I had a vivid image of my home", "While reading the Web page, I had a vivid image of my breakfast" and "While reading the Web page, I had a vivid image of my office". Items 2, 5 and 9 were framed negatively. All the items are scored in the direction that higher scores indicate higher levels of immersiveness. In this study, its Cronbach alpha was .84 at post-experiment." Copy and paste relevant sections from manuscript title (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study The procedure of the study is illustrated in Figure 1. Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study "Participants provided informed consent by checking the "I agree" button at the end of the study description page. Afterward, participants received another Web-based questionnaire link for pre-experiment questionnaire. Participants were then randomly assigned to one of the four experimental conditions through block randomization." The block randomization was done by computer. 
Informed consent procedures (4a-ii) can create biases and certain expectations; discuss, e.g., whether participants knew which intervention was the "intervention of interest" and which one was the "comparator".

Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

Participants were blinded while the researcher was not.

Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

"All analyses were conducted using SPSS version 27.0 (IBM Corporation) and the moderation and mediation plug-in PROCESS. Categorical chi-square and one-way ANOVA were used to examine baseline differences between experimental conditions. Repeated measures ANOVA analyses with Bonferroni adjustment and post-hoc analysis were conducted to detect significant interaction effects between condition and time to see if conditions showed significant reduction in all mental illness stigma outcomes across the three time points. Mediation analysis was conducted using PROCESS Model 4 to investigate the relationship between possible mediators, perceived autonomy, and immersiveness, with all outcomes at follow-up assessment."

Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

"A total of 263 participants were recruited in this study and completed the experimental session, pre-experiment questionnaire and post-experiment questionnaire. All but one participants (99.62%; 262/263) completed the 1-week follow-up questionnaire. The procedure of the study is illustrated in Figure 1. Demographics and baseline characteristics of 263 participants were analysed, and data from 262 participants were analysed with repeated measures ANOVA analyses and mediation analysis."

Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

"All analyses were conducted using SPSS version 27.0 (IBM Corporation) and the moderation and mediation plug-in PROCESS. Categorical chi-square and one-way ANOVA were used to examine baseline differences between experimental conditions. Repeated measures ANOVA analyses with Bonferroni adjustment and post-hoc analysis were conducted to detect significant interaction effects between condition and time to see if conditions showed significant reduction in all mental illness stigma outcomes across the three time points. Mediation analysis was conducted using PROCESS Model 4 to investigate the relationship between possible mediators, perceived autonomy, and immersiveness, with all outcomes at follow-up assessment."
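The time × condition analysis quoted above was run in SPSS with the PROCESS plug-in. As a rough open-source analogue only (simulated data, invented column names, and the pingouin package as a stand-in, none of which come from the paper), a mixed ANOVA with Bonferroni-adjusted follow-up tests could be sketched in Python as below. A companion sketch of a bootstrap indirect-effect computation, in the spirit of the PROCESS mediation analysis, appears at the very end of this checklist.

import numpy as np
import pandas as pd
import pingouin as pg

# Simulate a long-format data set: 200 placeholder participants, 4 conditions,
# 3 time points, with larger drops in the stigma score for COMBO and STIGMA.
rng = np.random.default_rng(1)
conditions = ["COMBO", "STIGMA", "INTERACT", "CONTROL"]
times = ["baseline", "post", "followup"]
rows = []
for pid in range(200):
    cond = conditions[pid % 4]
    base = rng.normal(3.5, 0.6)
    drop = {"COMBO": 0.6, "STIGMA": 0.4, "INTERACT": 0.15, "CONTROL": 0.0}[cond]
    scores = [base, base - drop + rng.normal(0, 0.2), base - drop + rng.normal(0, 0.2)]
    for t, s in zip(times, scores):
        rows.append({"id": pid, "condition": cond, "time": t, "stigma": s})
df = pd.DataFrame(rows)

# Mixed ANOVA: condition (between) x time (within); inspect the interaction row.
print(pg.mixed_anova(data=df, dv="stigma", within="time",
                     subject="id", between="condition"))

# Bonferroni-adjusted pairwise comparisons of time within each condition
# (the function is named pairwise_ttests in older pingouin releases).
print(pg.pairwise_tests(data=df, dv="stigma", within="time", subject="id",
                        between="condition", padjust="bonf"))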
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study "At the scheduled Zoom experimental session, participants were given detailed information about the study aims, length of the program, and participant involvement. Participants provided informed consent by checking the "I agree" button at the end of the study description page." Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Everything was addressed in the informed consent form. Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study "A total of 263 participants were recruited in this study and completed the experimental session, pre-experiment questionnaire and post-experiment questionnaire. All but one participants (99.62%; 262/263) completed the 1-week follow-up questionnaire. The procedure of the study is illustrated in Figure 1." * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study "A total of 263 participants were recruited in this study and completed the experimental session, pre-experiment questionnaire and post-experiment questionnaire. All but one participants (99.62%; 262/263) completed the 1-week follow-up questionnaire. The procedure of the study is illustrated in Figure 1." Strongly recommended: An attrition diagram (e.g., proportion of participants still logging in or using the intervention/comparator in each group plotted over time, similar to a survival curve) or other figures or tables demonstrating usage/dose/engagement. Copy and paste relevant sections from the manuscript or cite the figure number if applicable (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study "The procedure of the study is illustrated in Figure 1." Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study "The procedure of the study is illustrated in Figure 1." Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study There were no secular events. 
Does your paper address CONSORT subitem 14b? *
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

The trial did not end or stop early.

? *
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

"Other detailed demographics and baseline characteristics of the participants are displayed in Table 1."

Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

"Other detailed demographics and baseline characteristics of the participants are displayed in Table 1."

Report multiple "denominators" and provide definitions: Report N's (and effect sizes) "across a range of study participation [and use] thresholds" [1], e.g., N exposed, N consented, N used more than x times, N used more than y weeks, N participants "used" the intervention/comparator at specific pre-defined time points of interest (in absolute and relative numbers per group). Always clearly define "use" of the intervention.

Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study

Public Stigma towards People with Mental Illness: "Results from repeated measures ANOVA analyses indicated a significant time x condition effect (P=.002, η²=.04) and a post-hoc analysis was conducted. In the COMBO condition, public stigma significantly decreased from baseline to post-assessment (mean difference=0.61, 95% CI 0.49 to 0.74, P<.001, η²=.37) and the decrease was maintained at 1-week follow-up (mean difference=0.53, 95% CI 0.37 to 0.69, P<.001, η²=.37). In the STIGMA condition, public stigma also significantly decreased from baseline to post-assessment (mean difference=0.42, 95% CI 0.30 to 0.55, P<.001, η²=.22) and the decrease was maintained at 1-week follow-up (mean difference=0.34, 95% CI 0.18 to 0.50, P<.001, η²=.22). In the INTERACT condition, public stigma significantly decreased from baseline to post-assessment (mean difference=0.14, 95% CI 0.02 to 0.26, P=.02, η²=.03) but the effect cannot be sustained at 1-week follow-up (mean difference=0.12, 95% CI -0.04 to 0.28, P=.22, η²=.03). In the CONTROL condition, the effect was not significant from baseline to post-assessment (mean difference=0.07, 95% CI -0.06 to 0.20, P=.56, η²=.01) and from baseline to 1-week follow-up (mean difference=0.09, 95% CI -0.08 to 0.26, P=.57, η²=.01). In terms of mean difference values, the results indicated that the effect was strongest in COMBO, followed by STIGMA and INTERACT conditions respectively.
An additional post-hoc analysis was carried out to compare COMBO with STIGMA conditions, the interaction effect between interactivity and stigma content was not significant (P=.09)." Microaggression: "The results found a significant time x condition effect (P<.001, ฮท2 =.06) and a post-hoc analysis was carried out. Microaggression significantly decreased from baseline to post-assessment in both COMBO (mean difference=0.34, 95% CI 0.25 to 0.42, P<.001, ฮท2 =.31) and STIGMA conditions (mean difference=0.28, 95% CI 0.19 to 0.36, P<.001, ฮท2 =.24). The effects were sustained and strengthened at 1-week follow-up in both conditions (COMBO: mean difference=0.39, 95% CI 0.29 to 0.49, P<.001, ฮท2 =.31; STIGMA: mean difference=0.33, 95% CI 0.23 to 0.43, P<.001, ฮท2 =.24). In the INTERACT condition, the effect was not significant from baseline to post-assessment (mean difference=0.03, 95% CI -0.05 to 0.12, P=1.00, ฮท2 =.01) and from baseline to 1-week follow-up (mean difference=0.06, 95% CI -0.04 to 0.16, P=0.40, ฮท2 =.01). In the CONTROL condition, the effect was also not significant from baseline to post-assessment (mean difference=0.03, 95% CI -0.06 to 0.12, P=1.00, ฮท2 =.01) and from baseline to 1-week follow-up (mean difference=-0.04, 95% CI -0.15 to 0.07, P=1.00, ฮท2 =.01). The results indicated that the effect of COMBO condition was stronger than that of STIGMA condition with regard to the mean difference values. No significant interaction effect between interactivity and stigma content was found (P=.58) after running the additional post-hoc analysis to compare COMBO and STIGMA conditions. Social Distance from People with Mental Illness: "The results showed a non-significant time x condition effect (P=.25, ฮท2 =.02). The additional post-hoc analysis comparing COMBO and STIGMA showed no significant interaction effect between interactivity and stigma content (P=.46). Details of the repeated measures ANOVA analyses are shown in Table 2." Mediating Analysis: "To compare the mediation effect of perceived autonomy and immersiveness between conditions with public stigma and microaggression, mediation analyses were performed by putting both perceived autonomy and immersiveness into PROCESS Model 4. Table 3 showed the unstandardized and standardized factor loadings for the model. A mediation model of perceived autonomy and immersiveness between conditions with public stigma and microaggression is shown in Figure 2. Mediation analysis subitem not at all important 1 2 3 4 5 essential p g gg g y for social distance was not conducted because no interaction effect was observed in social distance across conditions." "Significant indirect effects of , .02]), and .02]) on public stigma through perceived autonomy were observed. The non-significant indirect effects of 0.00]), 0.00]), and INTERACT (b=-0.07, BCa CI [-0.16, 0.00]) on public stigma through immersiveness were observed. The results showed that perceived autonomy was a significant mediator between conditions and public stigma." "Non-significant indirect effects of COMBO (b=0.07,0.17 In addition to primary/secondary (clinical) outcomes, the presentation of process outcomes such as metrics of use and intensity of use (dose, exposure) and their operational definitions is critical. This does not only refer to metrics of attrition (13-b) (often a binary variable), but also to more continuous exposure metrics such as "average session length". 
These must be accompanied by a technical description how a metric like a "session" is defined (e.g., timeout after idle time) [1] (report under item 6a). Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study "Results supported our hypotheses that an Internet-based storytelling program with a combination of stigma content and interactivity was able to significantly reduce public stigma, microaggression immediately post-experiment and at 1-week follow-up assessment. Contrary to our hypotheses, an Internet-based storytelling program with a combination of stigma content and interactivity could not significantly reduce social distance from people with mental illness. In other words, the storytelling program was more effective in improving individuals' stigmatizing cognitions, sense of personal advocacy Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study "Interestingly, Internet-based storytelling program with interactivity-only was also found to reduce public stigma at post-assessment although the effect could not be maintained after one week. It might support our assumption on the positive relationship between interactivity and positive affect and between positive affect and reduced prejudice [37,40]. Future studies need to examine these relationships." Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study "This study has some limitations that deserve attention. First, our sample mainly consisted of young university students. The findings may not be generalizable to different populations. That being said, in social marketing, segmentation of our target population is essential. The present study showed that Internet-based anti-stigma storytelling programs with interactivity may be an effective tool in reducing mental illness stigma for young, educated population in the community who are comfortable and skillful in accessing information over the Internet. Furthermore, due to the homogeneous nature of our sample, moderation analysis was not carried out in the present study. Future studies can explore possible moderators of the effect of Internet-based stigma reduction interventions, for example, gender, age, and education level...." Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study This was not mentioned as the major focus was on effectiveness.
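For readers unfamiliar with the mediation analysis quoted above (PROCESS Model 4: indirect effects through a mediator with bootstrap confidence intervals), the sketch below illustrates how an indirect effect a×b and a bootstrap interval are obtained. This is not the manuscript's analysis: it uses ordinary least squares, a plain percentile bootstrap rather than BCa intervals, and entirely synthetic data; all variable names are hypothetical.

```python
# Minimal sketch of a simple-mediation indirect effect (a*b) with a percentile
# bootstrap CI, in the spirit of PROCESS Model 4. Synthetic data, hypothetical
# variable names; not the procedure used in the manuscript.
import numpy as np

rng = np.random.default_rng(0)
n = 263                                    # sample size reported in the checklist
condition = rng.integers(0, 2, n)          # hypothetical binary condition code
autonomy = 0.5 * condition + rng.normal(size=n)    # hypothetical mediator
stigma = -0.4 * autonomy + rng.normal(size=n)      # hypothetical outcome

def ols_slope(y, X):
    """Coefficient of the last column of X in an OLS fit with intercept."""
    X = np.column_stack([np.ones_like(y), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[-1]

def indirect_effect(cond, med, out):
    a = ols_slope(med, cond)                           # condition -> mediator
    b = ols_slope(out, np.column_stack([cond, med]))   # mediator -> outcome | condition
    return a * b

boot = np.array([indirect_effect(condition[idx], autonomy[idx], stigma[idx])
                 for idx in (rng.integers(0, n, n) for _ in range(5000))])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(condition, autonomy, stigma):.3f}, "
      f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```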
Supersymmetry Simulations with Off-Shell Effects for LHC and ILC At the LHC and at an ILC, serious studies of new physics benefit from a proper simulation of signals and backgrounds. Using supersymmetric sbottom pair production as an example, we show how multi-particle final states are necessary to properly describe off-shell effects induced by QCD, photon radiation, or by intermediate on-shell states. To ensure the correctness of our findings we compare in detail the implementation of the supersymmetric Lagrangian in MadGraph, Sherpa and Whizard. As a future reference we give the numerical results for several hundred cross sections for the production of supersymmetric particles, checked with all three codes. Introduction The discoveries of the electroweak gauge bosons and the top quark more than a decade ago established perturbative quantum field theory as a common description of electromagnetic, weak, and strong interactions, universally applicable for energies above the hadronic GeV scale. The subsequent measurements of QCD and electroweak observables in high-energy collision experiments at the SLAC SLC, CERN LEP, and Fermilab Tevatron have validated this framework to an unprecedented precision. Nevertheless, the underlying mechanism of electroweak symmetry breaking remains undetermined. It is not clear how the theory should be extrapolated beyond the electroweak scale v = 246 GeV to the TeV scale or even higher energies [1]. At the LHC (and an ILC) this energy range will be directly probed for the first time. If the perturbative paradigm holds, we expect to see fundamental scalar Higgs particles, as predicted by the Standard Model (SM). Weak-scale supersymmetry (SUSY) is a leading possible solution to theoretical problems in electroweak symmetry breaking, and predicts many additional new states. The minimal supersymmetric extension of the Standard Model (MSSM) is a model of softly-broken SUSY. The supersymmetric particles (squarks, sleptons, charginos, neutralinos and the gluino) can be massive in comparison to their SM counterparts. Previous and current high-energy physics experiments have put stringent lower bounds on supersymmetric particle masses, while fine-tuning arguments lead us to believe they do not exceed a few TeV. Therefore, a discovery in Run II at the Tevatron is not unlikely [2], and it will fall to the LHC to perform a conclusive search for SUSY, starting in 2008. Combining the energy reach of the LHC with precision measurements at a possible future electron-positron collider ILC, a thorough quantitative understanding of the SUSY particles and interactions would be possible [3]. Most realistically, SUSY will give us a plethora of particle production and decay channels that need to be disentangled and separated from the SM background. To uncover the nature of electroweak symmetry breaking we not only have to experimentally analyze multi-particle production and decay signatures, we also need to accurately simulate the model predictions on the theory side. Much SUSY phenomenology has been performed over the years in preparation for LHC and ILC, nearly all of it based on relatively simple 2 โ†’ 2 processes [4,5] or their next-to-leading order (NLO) corrections. These approximations are useful for highly inclusive analyses and convenient for analytical calculations, but should be dropped once we are interested in precise measurements and their theoretical understanding. 
Furthermore, for a proper description of data, we need Monte-Carlo event generators that fully account for high-energy collider environments. Examples of necessary improvements include: consideration of spin correlations [28] and finite-width effects in supersymmetric particle decays [65]; SUSY-electroweak and Yukawa interferences to some SUSY-QCD processes; exact rather than common virtual squark masses; and 2 → 3 or 2 → 4 particle production processes such as the production of hard jets in SUSY-QCD processes [6] or SUSY particles produced in weak-boson fusion (WBF) [7]. In this paper we present three new next-generation event generators for SUSY processes: madgraph ii/madevent [8,9], o'mega/whizard [10,11], and amegic++/sherpa [12,13]. They properly take into account various physics aspects which are usually approximated in the literature, such as those listed above. They build upon new methods and algorithms for automatic tree-level matrix element calculation and phase space generation that have successfully been applied to SM phenomenology [14,15,16]. Adapted to the more involved structure of the MSSM, they are powerful tools for a new round of MSSM phenomenology, especially at hadron colliders. The structure of the paper is as follows: in Sec. 2 we consider basic requirements for realistic SUSY simulations, in particular the setup of consistent calculational rules and conventions as a conditio sine qua non for obtaining correct and reproducible results. Sec. 3 gives some details of the implementation of MSSM multi-particle processes in the three generators, while Sec. 4 is devoted to numerical checks. Finally, Secs. 5 and 6 cover one particular application, the physics of sbottom squarks at the LHC and an ILC, respectively. Our emphasis lies on off-shell effects of various kinds which for the first time we accurately describe using the tools presented in this paper. In the extensive Appendix we list, as a future reference, cross sections for several hundred SUSY 2 → 2 processes that are the main part of these checks. We include all information necessary to reproduce these numbers.

Supersymmetry Simulations

Throughout this paper, we assume R-parity conservation in the MSSM. The SUSY particle content consists of the SM particles, the five Higgs bosons, and their superpartners, namely six sleptons, three sneutrinos, six up-type and six down-type squarks, two charginos, four neutralinos, and the gluino. We allow for a general set of TeV-scale (or weak-scale) MSSM parameters, with a few simplifying restrictions: (i) we assume CP conservation, i.e. all soft-breaking terms in the Lagrangian are real (cf. also (iii) below); (ii) we neglect masses and Yukawa couplings for the first two fermion generations, i.e. left-right mixing occurs only for third-generation squarks and sleptons; (iii) correspondingly, we assume the SM flavor structure to be trivial, V_CKM = V_MNS = 1; (iv) we likewise assume the flavor structure of SUSY-breaking terms to be trivial. None of these simplifications is a technical requirement, and all codes are capable of dealing with complex couplings as well as arbitrary fermion and sfermion mass and mixing matrices. However, with very few exceptions, these effects are numerically unimportant or irrelevant for the simulation of SUSY scattering and decay processes at high-energy colliders. (In Sec. 4.3 we discuss residual effects of nontrivial flavor structure.
) We thus define the MSSM as the general TeV/weak-scale Lagrangian for the SM particles with two Higgs doublets, with gauge-and Lorentz-invariant, R-parity-conserving, renormalizable couplings, and softly-broken supersymmetry. Unfortunately, while this completely fixes the physics, it leaves a considerable freedom in choosing phase conventions. The large number of Lagrangian terms leaves ample room for error in deriving Feynman rules, coding them in a computer program, and relating the input parameters to a standard convention. The three codes we consider here are completely independent in their derivation of Feynman rules, implementation, matrix element generation, phase space setup, and integration methods. A detailed numerical comparison should therefore reveal any mistake in these steps. To this end, we list a set of 2 โ†’ 2 scattering processes that involve all Feynman rules that could be of any relevance in Sec. 4. Parameters and Conventions Apart from the simplifications listed above, we do not make any assumptions about SUSY breaking. No physical parameters are hard-coded into the programs. Instead, all codes use a set of weak-scale parameters in the form of a SUSY Les Houches Accord (SLHA) input file [17]. This file may be generated by any one of the standard SUSY spectrum generators [18,19]. Since the SLHA defines weak-scale parameters in a particular renormalization scheme, we have to specify how to use them for our tree-level calculations: we fix the electroweak parameters via G F , M Z , and ฮฑ QED . Using the tree-level relations (as required for gauge-invariant matrix elements at tree level) we obtain parameters such as sin 2 ฮธ w and M W as derived quantities; M W and M Z are defined as pole masses. The SLHA uses pole masses for all MSSM particles, while mixing matrices and Yukawa couplings are given as loop-improved DR values. From this input we need to derive a set of mass and coupling parameters suitable for comparing tree-level matrix-element calculations. This leads to violation of electroweak gauge invariance which we discuss in Sec. 2.2. However, numerically this is a minor problem, relevant only for some processes (e.g. SUSY particle production in weak-boson fusion [7]) at asymptotically high energies. For the numerical results of this paper, we therefore use the SLHA masses and mixing matrices at face value. For the bottom and top quarks we identify the (running) Yukawa couplings and the masses, as required by gauge invariance. The weak scale as the renormalization point yields realistic values for the Yukawa couplings. One might be concerned that the kinematical masses are then off from their actual values. However, since our production cross section should be regarded as the leading contribution to the inclusive cross section, the relevant scale is the energy scale of the whole process rather than the scale of individual heavy quarks. This necessitates the use of running masses to make a reliable estimate. The trilinear couplings and the ยต parameter which explicitly appear in some couplings are fixed by the off-diagonal entries in the chargino, neutralino and sfermion mass matrices. We adopt two schemes for negative neutralino mass eigenvalues: sherpa and madgraph use the negative values directly in the propagator and wave function. o'mega/whizard rotates the neutralino fields to positive masses, which yields a complex neutralino mixing matrix, even though CP is conserved. 
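As an illustration of the tree-level scheme described above — fixing G_F, M_Z and α_QED and deriving sin²θ_w and M_W from the tree-level relations — a short numerical sketch follows. The value of α_QED used here is only indicative; the actual input of the comparison is given in Appendix A.

```python
# Sketch of the tree-level electroweak scheme: fix (G_F, M_Z, alpha_QED) and
# derive sin^2(theta_w) and M_W from the tree-level relations
#   G_F/sqrt(2) = pi*alpha/(2 M_W^2 sin^2(theta_w)),   M_W = M_Z cos(theta_w).
# The numerical value of alpha below is illustrative only.
import math

G_F   = 1.16637e-5      # GeV^-2
M_Z   = 91.1876         # GeV
alpha = 1.0 / 127.9     # illustrative alpha_QED at the weak scale

A = math.pi * alpha / (math.sqrt(2.0) * G_F * M_Z**2)   # = sin^2 * cos^2
sin2_thw = 0.5 * (1.0 - math.sqrt(1.0 - 4.0 * A))
M_W = M_Z * math.sqrt(1.0 - sin2_thw)

print(f"sin^2(theta_w) = {sin2_thw:.4f},  M_W = {M_W:.2f} GeV")
```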
For our comparison we neglect all couplings that contain masses of light-flavor fermions, i.e. the Higgs couplings to first-and second-generation fermions and their supersymmetric counterparts; as well as left-right sfermion mixing. This includes neglecting light fermion masses in the neutralino and chargino sector, which would otherwise appear via Yukawa-higgsino couplings. Physically, this is motivated by flavor constraints which forbid large deviations from universality in the first and second generations [20]. For our LHC calculations we employ CTEQ5 parton distribution functions [21]. Unitarity and the SLHA Convention The MSSM is a renormalizable quantum field theory [22]. To any fixed order in perturbation theory, a partial-wave amplitude calculated from the Feynman rules, renormalized properly, is bounded from above. Cross sections with a finite number of partial waves (e.g. s-channel processes) asymptotically fall off like 1/s, while massless particle exchange must not lead to more than a logarithmic increase with energy. This makes unitarity a convenient check for the Feynman rules in our matrix element calculators. As an example, individual diagrams that contribute to 2 โ†’ 2 weak boson scattering rise like the fourth power of the energy, but the two leading terms of the energy expansion cancel among diagrams to ameliorate this to a constant. This property connects the three-and four-boson vertices, and predicts the existence and couplings of a Higgs boson, assuming the theory is weakly interacting to high energies [23]. For example, for weak boson fusion to neutralinos and charginos, these unitarity cancellations can be neatly summarized in a set of sum rules for the SUSY masses and couplings [7]. For generic Higgs sectors, the unitarity relations were worked out in [24]. Many, but not all, terms in the Lagrangian can be checked by requiring unitarity. For instance, gauge cancellations in W W scattering to two SUSY particles need not happen if the final-state particle has an SU(2) ร— U(1) invariant mass term. In the softly-broken SUSY Lagrangian, this property holds for the gauginos and higgsinos as well as for the second Higgs doublet in the MSSM. For these particles, we expect unitarity relations to impose some restrictions on their couplings, but not a complete set of equations, so some couplings remain unconstrained. As mentioned above, for our numerical comparison of SUSY processes we use a renormalization-group improved spectrum in the SLHA format [18,19]. In particular, we adopt this spectrum for the Higgs sector, where gauge invariance (or unitarity) relates masses, trilinear and quartic couplings. While at tree-level all unitarity relations are automatically satisfied, any improved spectrum will violate unitarity constraints unless the Higgs trilinear couplings are computed in the same scheme. However, not all couplings are known to the same accuracy as the Higgs masses [25]. We follow the standard approach of computing the trilinear Higgs couplings from effective mixing angles ฮฑ and ฮฒ. As a consequence, we expect unitarity violation. Luckily, this only occurs in 2 โ†’ 3 processes of the type W W โ†’ W W H [24], while in 2 โ†’ 2 processes of the type W W โ†’ HH where one might naively expect unitarity violation, the values of the Higgs trilinear couplings change the value of total high-energy asymptotic cross section but do not affect unitarity. A similar problem arises in the neutralino and chargino sector. 
Unitarity is violated at high energies in processes of the type VV → χ̃χ̃ (V = W, Z) [7]. If we use the renormalization-group improved DR neutralino and chargino mass matrices (or equivalently the masses and mixing matrices), the gaugino-higgsino mixing entries, which are equivalent to the Higgs couplings of the neutralinos and the charginos, implicitly involve M_W,Z, also in the DR scheme. To ensure the proper gauge cancellations which guarantee unitarity, these gauge boson masses must be identical to the kinematical masses of the gauge bosons in the scattering process, which are usually defined in the on-shell scheme. One possible solution would be to extract a set of gauge boson masses that satisfies all tree-level relations from the mass matrices. This scheme has the disadvantage that, while it works for the leading corrections, it will likely not be possible to derive a consistent set of weak parameters in general. Moreover, the higher-order corrections included in the renormalization-group improved neutralino and chargino mass matrices will not be identical to the leading corrections to, for example, the s-channel propagator mass. However, an artificial spectrum that is specifically designed to fulfill the tree-level relations can be used for a technical test of high-energy unitarity. Such a detailed check has been performed for the susy-madgraph implementation [7].

Symmetries and Ward Identities

An independent method for verifying the implementation is the numerical test of symmetries and their associated Ward identities. A trivial check is provided by the permutation and crossing symmetries of many-particle amplitudes. More subtle are the Ward identities of gauge symmetries, which can be tested by replacing the polarization vector of any one external gauge boson by its momentum, ε^µ(k) → k^µ, and, if necessary, subtracting the corresponding Goldstone-boson amplitude. Finally, the SUSY Ward identities can be tested numerically. Ward identities have the advantage that they require no additional computer program, can be constructed automatically, and can be applied separately for each point in phase space. If applied in sensitive regions of phase space, tests of Ward identities will reveal numerical instabilities. Extensive tests of this kind have been carried out for the matrix elements generated by o'mega and its associated numerical library for the SM [26] and for the MSSM [27].

Intermediate Heavy States

During the initial phase of the LHC, narrow resonances can be described by simple 2 → 2 production cross sections and subsequent cascade decays. However, establishing that these resonances are indeed the long-sought SUSY partners would call for more sophisticated tools. The identification of resonances as SUSY partners would require determination of their spin and parity quantum numbers [28]. This in turn requires a proper description of the spin correlations among the particles in the production and the decay cascades. The simplest consistent approximation calculates the Feynman diagrams for the 2 → n process and forces narrow intermediate states on the mass shell without affecting the spin correlations. For fermions, the leading term in the small expansion parameter Γ/m replaces the resonant propagator by an on-shell spin sum, Eq. (1), so that the spin correlations are retained. For SM processes this computation of 2 → n matrix elements has been successfully automated by the programs described below.
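For orientation, the textbook narrow-width relation behind this on-shell projection reads, for the squared propagator of a resonance with mass m and width Γ (the fermionic Eq. (1) is the spin-correlation-preserving analogue and is not reproduced here):

```latex
\frac{1}{(p^2-m^2)^2+m^2\Gamma^2}
\;\xrightarrow{\;\Gamma/m\,\to\,0\;}\;
\frac{\pi}{m\Gamma}\,\delta\!\left(p^2-m^2\right),
\qquad\text{since}\qquad
\int_{-\infty}^{+\infty}\frac{\mathrm{d}p^2}{(p^2-m^2)^2+m^2\Gamma^2}
= \frac{\pi}{m\Gamma}.
```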
The alternative approach of manually inserting the appropriate density matrices for production and decay is more error-prone due to the need for consistent phase conventions. The widths of the heavy resonances are themselves observables, predicted by SUSY for a given set of soft-breaking parameters, and should be taken into account. A naïve Lorentzian smearing of Eq. (1) will not yield a theoretically consistent description of finite-width effects. Gauge and SUSY Ward identities are immediately violated once amplitudes are continued off-shell. Since scattering amplitudes in gauge theories and SUSY theories exhibit strong numerical cancellations, the violation of the corresponding Ward identities can result in numerically large effects. Therefore a proper description of a resonance with a finite width requires a complete gauge-invariant set of diagrams, the simplest of which is the set of all diagrams contributing to the 2 → n process [29]. In Secs. 5 and 6 we study the numerical impact of finite-width effects for the concrete example of sbottom production at high-energy colliders. Intermediate charged particles with finite widths present additional gauge-invariance issues, which were studied at LEP2 in great detail for W boson production processes [30]. Although various prescriptions for widths are available in the matrix element generators described in this paper, we used the fixed-width scheme for the calculations. A careful analysis of the impact of different choices is beyond the scope of the paper.

Calculational Methods and Algorithms

Each of the three calculational tools we use for this paper consists of two independent programs. The first program uses a set of Feynman rules, which can be preset or user-defined, to generate computer code that numerically computes the tree-level scattering amplitude for a chosen process. These numerical codes call library functions to compute wave functions of external particles, internal currents and vertices to obtain the complete helicity amplitude (the amplitude for all helicity configurations of external particles, which are then summed over [31]). The second program performs adaptive phase space integration and event generation, and produces integrated cross sections and weighted or unweighted event samples. The required phase space mappings are determined automatically, using appropriate heuristics, from the 'important' Feynman diagrams contributing to the process that is being studied. In principle, there is nothing that precludes the use of other combinations of the three matrix element generators and the three phase space integrators. In practice, however, it requires some effort to adapt the interfaces that have grown organically. Nevertheless, whizard can, e.g., use madgraph as an alternative to o'mega. helas [32] is the archetypal helicity amplitude library and is now employed by many automated matrix element generators. The elimination of common subexpressions to optimize the numerical evaluation was already suggested in Ref. [32] for the manual construction of scattering amplitudes. The actual libraries used by our three tools choose between different trade-offs of maintainability, extensibility, efficiency and numerical accuracy. Majorana spinors are the crucial new ingredient for calculating helicity amplitudes in supersymmetric field theories.
In the simple example process e⁺e⁻ → χ̃χ̃ we see the complication which arises: if we naïvely follow the fermion number flow of the incoming fermions, the t-channel and u-channel amplitudes require different external spinors for the final-state fermions. The most elegant algorithm known for unambiguously assigning a fermion flow and the relative signs among Feynman diagrams is described in Ref. [33]. Consequently, all matrix element generators use an implementation of this algorithm. Beyond some common general features, the similarities of the three tools quickly disappear: they use different algorithms, implemented in different programming languages. That such vastly different programs can be tested against each other with a Lagrangian as complex as that of the TeV-scale MSSM should give confidence in the predictive power of these programs for SUSY physics at the LHC and later at an ILC. To compute cross sections in the MSSM, we need a consistent set of particle masses and mixing matrices, computed for a chosen SUSY-breaking scenario. Various spectrum generators are available, all using the SUSY Les Houches Accord as their spectrum interface. The partonic events generated by our three tools can either be fragmented by a built-in algorithm (sherpa) or passed via a standard interface to external hadronization packages [34]. However, proper hadronization of the parton-level results presented here is beyond the scope of this paper.

3.1 madgraph ii and madevent

madgraph [8] was the first program allowing fully automated calculations of squared helicity amplitudes in the Standard Model. In addition to being applied to many physics calculations, it was later frequently used as a benchmark for testing the accuracy of new programs as well as for gauging the improvements implemented in the new programs. madgraph ii is implemented in fortran77. It generates all Feynman diagrams for a given process, performs the color algebra and translates the result into a fortran77 procedure with calls to the helas library. During this translation, redundant subexpressions are recognized and computed only once. While the complexity continues to grow asymptotically with the number of Feynman diagrams, this approach generates efficient code for typical applications. The correct implementation of color flows for hadron collider physics was an important objective for the very first version of madgraph, while the implementation of extensions of the Standard Model remained nontrivial for users. madgraph ii reads the model information from two files and supports Majorana fermions, allowing fully automated calculations in the MSSM. The MSSM implementation makes use of and extends the list of Feynman rules that have been derived in the context of [35,36,37]. madevent [9] uses phase space mappings based on single squared Feynman diagrams for adaptive multi-channel sampling [38]. The madgraph/madevent package has a web-based user interface and supports shortcuts such as summing over initial-state partons, summing over jet flavors and restricting intermediate states. Interfaces to parton shower and hadronization Monte Carlos are available [34].

3.2 o'mega and whizard

o'mega constructs an expression for the scattering matrix element from a description of the Feynman rules and the target programming language. The complexity of these expressions grows only exponentially with the number of external particles, unlike the factorial growth of the number of Feynman diagrams.
Optionally, o'mega can calculate cascades: long-lived intermediate particles can be forced on the mass shell in order to obtain gauge invariant approximations with full spin correlations. o'mega is implemented in the functional programming language Objective Caml [39], but the compiler is portable and no knowledge of Objective Caml is required for using o'mega with the supported models. The tables describing the Lagrangians can be extended by users. Its set of MSSM Feynman rules was derived in accordance with Ref. [40]. whizard builds a Monte Carlo event generator on the library VAMP [41] for adaptive multichannel sampling. It uses heuristics to construct phase space parameterizations corresponding to the dominant time-and space-like singularities for each process. For processes with many identical particles in the final state, symmetries are used extensively to reduce the number of independent channels. whizard is written in fortran95, with some Perl glue code. It is particularly easy to simulate multiple processes (i.e. reducible backgrounds) with the correct relative rates simultaneously. It has an integrated interface to pythia [42] that follows the Les Houches Accord [34] for parton showers and hadronization. amegic++ and sherpa sherpa [13] is a new complete Monte Carlo Generator for collider physics, including hard matrix elements, parton showers, hadronization and other soft physics aspects, written from scratch in C++. The key feature of sherpa is the implementation of an algorithm [43,44,45], which allows consistent combination of tree-level matrix elements for the hard production of particles with the subsequent parton showers that model softer bremsstrahlung. This algorithm has been tested in various processes [46,47]. Both of the other programs described above connect their results with full event simulation through interfaces to external programs. amegic++ [12] is the matrix element generator for sherpa. It generates all Feynman diagrams for a process from a model description in C++. Before writing the numerical code to evaluate the complete amplitude (including color flows), it eliminates common subexpressions. En passant, it selects appropriate phase space mappings for multi-channel sampling and event generation [38]. For integration it relies on vegas [48] to further improve the efficiency of the dominant integration channels. The MSSM Feynman rules and conventions in amegic++ are taken from Ref. [49]. The Setup As long as R-parity is conserved, SUSY particles are only produced in pairs. Therefore, SUSY phenomenology at the LHC and ILC amounts to essentially searching for all accessible supersymmetric pair-production channels with subsequent (cascade) decays. Proper simulations need to describe this type of processes as accurately as possible. This requires a careful treatment of many-particle final states, off-shell effects and SUSY as well as SM backgrounds. The complexity of this task and the variety of conventions and schemes commonly used require careful cross-checks at all levels of the calculation. As a first step, we present a comprehensive list of total cross sections for on-shell supersymmetric pair production processes (cf. Appendix B). These results give a rough overview of the possible SUSY phenomenology at future colliders, at least for the chosen point in SUSY parameter space. The second purpose of this computation is a careful check of our sets of Feynman rules and their numerical implementation. 
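That check is only meaningful because all three programs read identical weak-scale input from the same SLHA spectrum file (cf. Sec. 2.1). Purely as an illustration of this plain-text format — not the parser actually used by madgraph, whizard or sherpa — a minimal reader for numeric BLOCK entries might look like the following sketch; the file name is hypothetical.

```python
# Minimal reader for numeric "BLOCK" entries of an SLHA-style spectrum file.
# Illustration of the file format only; 'sps1a.slha' is a hypothetical name.
def read_slha_blocks(path):
    blocks, current = {}, None
    with open(path) as f:
        for raw in f:
            line = raw.split('#', 1)[0].strip()       # strip comments
            if not line:
                continue
            tokens = line.split()
            if tokens[0].upper() == 'BLOCK':
                current = tokens[1].upper()
                blocks[current] = {}
            elif current is not None:
                try:
                    *key, value = tokens
                    blocks[current][tuple(int(k) for k in key)] = float(value)
                except ValueError:
                    pass                               # skip non-numeric entries
    return blocks

# Example (hypothetical file): the lightest-sbottom pole mass from BLOCK MASS
# spectrum = read_slha_blocks('sps1a.slha')
# print(spectrum['MASS'][(1000005,)])
```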
After testing our tools we will then move on to a proper treatment beyond naïve 2 → 2 production processes. We compute all numbers independently with madgraph, whizard, and sherpa, using identical input parameters. We adopt an MSSM parameter set that corresponds to the point SPS1a [50]. This point assumes gravity-mediated supersymmetry breaking with the universal GUT-scale parameters

m_0 = 100 GeV, m_1/2 = 250 GeV, A_0 = −100 GeV, tan β = 10, µ > 0. (2)

We use softsusy to compute the TeV-scale physical spectrum [18]. For the purpose of evaluating 2 → 2 cross sections, we set all SUSY particle widths to zero. The final states are all possible combinations of two SUSY partners or two Higgs bosons. The initial states required to test all the SUSY vertices are: e⁺e⁻, e⁻ν̄_e, e⁻e⁻, τ⁺τ⁻, τ⁻ν̄_τ, uū, dd̄, uu, dd, bb̄, bt̄, W⁺W⁻, W⁻Z, W⁻γ, ZZ, Zγ, γγ, gW⁻, gZ, gγ, gg, ug, dg. The (partonic) initial-state energy is always fixed. This allows for a comparison of cross sections without dependence on parton structure functions, and with much-improved numerical efficiency. Clearly, some of these initial states cannot be realized on-shell or are even impossible to realize at a collider. They serve only as tests of the Feynman rules. Any MSSM Feynman rule relevant for an observable collider process is involved in at least one of the considered processes. For SM processes, comprehensive checks and comparisons were performed in the past [51]. The complete list of input parameters is given in Appendix A. The input is specified in the SLHA format [17]. This ensures compatibility of the input conventions, even though different conventions for the Lagrangian and Feynman rules are used by the different programs. In Appendix B, we list and compare the results for two partonic c.m. energies, √s = 500 GeV and 2 TeV. All results agree within a Monte Carlo statistical uncertainty of 0.1% or less. These errors reflect neither the accuracy nor the efficiency of any of the programs; we do not specify the number of matrix element calls or the amount of CPU time required in the computation. To obtain a precise 2 → 2 total cross section, Monte Carlo integration is not a good choice. On the other hand, these simple processes serve as the most efficient framework to test the numerical implementation of Feynman rules and the MSSM spectrum. We emphasize that the three programs madgraph, whizard, and sherpa, and their SUSY implementations are completely independent. As explained in Sec. 3, they use different conventions, signs and phase choices for the MSSM Feynman rules; have independent algorithms and helas-type libraries; and use different methods for parameterizing and sampling the phase space. We consider our results a strong check that covers all practical aspects of MSSM calculations, from the model setup to the numerical details. Specifically, we confirm the Feynman rules in Ref. [49] as they are used in sherpa. These Feynman rules do not use the SLHA format, so translating them is a non-trivial part of the implementation. For madgraph and whizard, the Feynman rules were derived independently.

Sample Cross Sections

We briefly discuss our cross section results and their physical interpretation. While the numbers are specific for the chosen point SPS1a [50] and its associated mass spectrum, many features of the results are rather generic in one-scale SUSY breaking models and depend only on the structure of the TeV-scale MSSM.
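Before turning to the individual channels, one practical note on the comparison in Appendix B: since every quoted number carries a Monte Carlo statistical error, "agreement" between two codes means compatibility within the combined uncertainties. A trivial helper of the kind one might use for such a cross-check is sketched below; the cross-section values in the example are made up and not taken from the appendix.

```python
# Illustrative cross-check: are two Monte Carlo cross sections compatible
# within their combined statistical errors? The numbers below are invented.
import math

def compatible(sigma1, err1, sigma2, err2, n_sigma=3.0):
    """True if the two results differ by less than n_sigma combined errors."""
    return abs(sigma1 - sigma2) <= n_sigma * math.hypot(err1, err2)

# e.g. two codes quoting a cross section with 0.1% statistical uncertainty
sigma_a, sigma_b = 154.27, 154.31          # fb, hypothetical values
print(compatible(sigma_a, 0.001 * sigma_a, sigma_b, 0.001 * sigma_b))
```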
e⁺e⁻ processes

All e⁺e⁻-induced SUSY production cross sections receive contributions from s-channel Z and (for charged particles) photon exchange. The couplings of the supersymmetric particles to the Z and the photon are determined by the SU(2)_L × U(1)_Y gauge couplings and mixing angle. As expected from perturbative unitarity, all s-channel-process cross sections asymptotically fall off like 1/s. If the process in question includes t-channel exchange, then we must sum over all partial waves and obtain the asymptotic behavior

σ ∼ (1/s) log(s/m²)   for no vector boson exchange,
σ ∼ (1/m²) log(s/m²)  for vector boson exchange.

The implication of the second line is that Coulomb scattering, WBF, and in some sense all hadronic cross sections, do not decrease with s. We show the e⁺e⁻ cross sections in Table B.1. The largest, of up to a few hundred fb at √s = 500 GeV, correspond to sneutrino and selectron production, χ̃⁰₁ and χ̃⁰₂ production, and chargino pair production. These are the processes with a dominant t-channel slepton contribution. In SPS1a the heavier neutralinos χ̃⁰₃, χ̃⁰₄ are almost pure higgsinos. Higgsinos couple only to the s-channel Z, and diagonal pair production of χ̃⁰₃χ̃⁰₃, χ̃⁰₄χ̃⁰₄ is suppressed because of the inherent cancellation between the two higgsino fractions h_u and h_d; i.e. the amplitudes are proportional to |h_u|² − |h_d|², which vanishes in the limit of identical higgsino masses. Only mixed χ̃⁰₃χ̃⁰₄ production has a significant cross section, because it is proportional to the sum |h_u|² + |h_d|². In the Higgs sector, SPS1a realizes the decoupling limit, where the light Higgs h closely resembles the SM Higgs. The production channels Zh, AH, and H⁺H⁻ dominate if kinematically accessible, while the reduced coupling of the Z to heavy Higgses strongly suppresses the ZH and Ah channels. For completeness, we also show the e⁻ν̄_e set of cross sections in Table B.3, even though such a collider is infeasible.

W⁺W⁻ and WZ processes

The cross sections for weak-boson-fusion processes, shown in Tables B.6 and B.7, are generically of the same order of magnitude as their fermion-initiated counterparts, with a few notable differences. In addition to gauge boson exchange, s- and t-channel Higgs exchange contributes to WBF production of third-generation sfermions, neutralinos, charginos, and Higgs/vector bosons. These processes are sensitive to a plethora of Higgs couplings to supersymmetric particles. Furthermore, the longitudinal polarization components of the external vector bosons approximate, in the high-energy limit, the pseudo-Goldstone bosons of electroweak symmetry breaking. This results in a characteristic asymptotic behavior (that can be checked by inserting √s values of several TeV, not shown in the tables): the total cross sections for vector-boson and CP-even Higgs pair production in WBF approach a constant at high energy, corresponding to t-channel gauge boson exchange between two scalars. Production cross sections that contain the CP-odd Higgs or the charged Higgs instead decrease like 1/s, because no scalar-Goldstone-gauge boson vertices exist for these particles. In the cases involving first- and second-generation sfermions, t-channel sfermion exchange with an initial-state W contributes only to left-handed sfermions, so the f̃_L f̃_L* cross sections dominate over f̃_R f̃_R*. In the neutralino sector, χ̃⁰₁ is dominantly bino and does not couple to neutral Higgs bosons, so χ̃⁰₁ production in W⁺W⁻ fusion is suppressed.
The other neutralinos and charginos, being the SUSY partners of massive vector bosons and Higgses, are produced with cross sections up to 100 pb. The largest neutralino rates occur for mixed gaugino and higgsino production, because the Yukawa couplings are given by the gaugino-higgsino mixing entry in the neutralino mass matrix. In the Higgs sector, the decoupling limit ensures that only W + W โˆ’ โ†’ Zh, W Z โ†’ W h (almost 100 pb), and W + W โˆ’ โ†’ hh (6 pb) are important, while the production of heavy Higgses is suppressed. For W + W โˆ’ โ†’ Ah and W โˆ’ Z โ†’ H โˆ’ h the decoupling suppression applies twice. In reality, W W โ†’ XX and W Z โ†’ XY scattering occurs only as a subprocess of 2 โ†’ 6 multi-particle production. The initial vector bosons are emitted as virtual states from a pair of incoming fermions. The measurable cross sections are phase-space suppressed by a few orders of magnitude. A rough estimate can be made by folding the energy-dependent W W/W Z cross sections with weak-boson structure functions. Reliable calculations require the inclusion of all Feynman diagrams, as can be done with the programs presented in this paper -the production rates rarely exceed O(ab) at the LHC [7]. Other processes For the remaining lists of processes with vector-boson or fermion initial states, similar considerations apply. In particular, the photon has no longitudinal component, so ฮณ-induced electroweak processes (Tables B.8, B.10 and B.11) are not related to Goldstone-pair scattering. We include unrealistic fermionic initial states such as ฯ„ + ฯ„ โˆ’ , ฯ„ โˆ’ฮฝ ฯ„ and bt in our reference list, Tables B.2, B.4 and B.5, because they involve Feynman rules that do not occur in other production processes, but are relevant for decays. Finally, our set of processes contains several lists with the colored fermionic initial states uลซ, dd and bb (Tables B.16-B.18, plus Table B.20 for same-flavor fermions); gg-fusion (Table B.15); qg-fusion (Table B. 19); and mixed QCD-electroweak processes gA, gZ and gW (Tables B.12, B.13 and B.14). These (as full hadronic processes) are accessible at hadron colliders, and comparing their cross sections completes our check of Feynman rules of the SUSY-QCD sector and its interplay with the electroweak interactions. Note that for a transparent comparison we do not fold the quark-and gluon-induced processes with structure functions. The only Feynman rules not checked by any process in this list are the four-scalar couplings. It is expected, and has explicitly been verified for the four-Higgs coupling in particular [52], that these contact interactions are not accessible at any collider in the foreseeable future. We therefore neglect them. Flavor Mixing For most of this paper we have neglected the quark masses and the mixings of the first two squark and slepton/sneutrino generations. Here we give a brief account of the consequences of using a non-diagonal CKM matrix. Full CKM mixing is available as an option for the whizard and sherpa event generators. For madgraph, it is straightforward to modify the model definition file accordingly. The CKM mixing matrix essentially drops out from most processes when we sum over all quark intermediate and final states. This is due to CKM unitarity, violated only by terms proportional to the quark mass squared over โˆš s in high energy scattering processes. For the first two generations, such corrections are negligible at the energies we are considering. 
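The unitarity cancellation just described is easy to see numerically: summing |V_qq′|² over the final-state flavor gives one for each row of the CKM matrix, whereas a flavor-weighted initial-state sum (the hadron-collider situation discussed next) retains a dependence on the mixing. Below is a small sketch with approximate CKM magnitudes and invented parton-density weights; it is an illustration, not part of the codes' implementation.

```python
# Why CKM mixing drops out when summing over final-state quark flavors, but
# not when initial-state flavors are weighted by flavor-dependent parton
# densities. CKM magnitudes are approximate; the "weights" are made up.
import numpy as np

V = np.array([[0.974, 0.225, 0.004],      # |V_ud| |V_us| |V_ub|
              [0.225, 0.973, 0.041],      # |V_cd| |V_cs| |V_cb|
              [0.009, 0.040, 0.999]])     # |V_td| |V_ts| |V_tb|

# Summing |V|^2 over the final-state down-type flavor: ~1 for every row,
# so the flavor-summed rate is (approximately) mixing-independent.
print((V**2).sum(axis=1))                 # -> roughly [1, 1, 1]

# With flavor-dependent weights on the down-type flavor (hypothetical numbers),
# the cancellation is incomplete and the result depends on the mixing.
weights = np.array([0.60, 0.25, 0.15])    # made-up weights for d, s, b
print((weights * V**2).sum(axis=1))
```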
At hadron colliders, summation over initial-state flavors does not lead to a cancellation, because the parton densities are flavor-dependent. In the SM, the CKM structure matters only for charged-current processes where a qq̄′ pair annihilates into a W boson. For instance, the cross section for ud̄ → W⁺* → X is multiplied by |V_ud|², and the cross section for us̄ → W⁺* → X is proportional to |V_us|². In the partonic final state, CKM unitarity ensures that a cross section does not depend on flavor mixing. However, jet hadronization depends on the jet quark flavor. Neglecting CKM mixing can result in a wrong jet-flavor decomposition. In practice, this is not relevant, since jet-flavor tagging (except for b quarks, and possibly for c quarks) is impossible. In cases where it is relevant, e.g. charm tagging in Higgs decay backgrounds at an ILC, the problem may be remedied either by reverting to the full CKM treatment, or by rotating the outgoing quark flavors before hadronization on an event-by-event basis. To estimate the impact of CKM mixing on SUSY processes, we consider the electroweak production of two light-flavor squarks at the LHC: qq̄ → q̃′q̃′*. Adopting the input of Appendix A and standard values for the CKM mixing parameters reduces the cross section by about 4%, cf. Tab. 1. This is negligible for LHC phenomenology, but it ensures a correct implementation of CKM mixing in the codes. Finally, there can be nontrivial flavor effects in the soft SUSY-breaking parameters, namely if squark mixing differs from quark mixing, i.e. for flavor-dependent SUSY breaking [20]. Non-minimal flavor violation predicts large signals for physics beyond the Standard Model, in particular flavor-changing neutral currents, in low-energy precision observables like kaon mixing. Their absence is a strong indication of flavor universality in the SUSY-breaking mechanism. However, if desired, nontrivial SUSY flavor effects can be included by the codes with minor modifications.

Sbottom Production at the LHC

A SUSY process of primary interest at the LHC is bottom squark production. For this specific discussion, we adopt a SUSY parameter point with rather light sbottoms and a rich low-energy phenomenology. The complete parameter set, including the sbottom masses, is listed in Appendix C. In the following we will focus on the decay b̃₁ → bχ̃⁰₁ with a branching ratio of 43.2%. The lightest Higgs boson is near the LEP limit, but decays invisibly to neutralinos with a branching ratio of 44.9%. The heavy Higgses are at 300 GeV. The lightest neutralino mass is m_χ̃⁰₁ = 46.84 GeV, while the other neutralinos and charginos are between 106 and 240 GeV. Sleptons are around 200 GeV. The squark mass scale is 430 GeV (except for m_t̃₂), and the gluino mass is 800 GeV. A spectacular signal at this SUSY parameter point would of course be the light Higgs. Apart from its SUSY decays, our light MSSM Higgs sits in the decoupling region, which means it is easily covered by the MSSM No-Lose theorem at the LHC [53]: for large pseudoscalar Higgs masses a light Higgs will be seen by the Standard Model searches in the WBF ττ channel. Unfortunately, in most scenarios it would be challenging to distinguish a SUSY Higgs boson from its SM counterpart, after properly including systematic errors. Here, our SUSY parameter point predicts a large invisible branching fraction of the light Higgs boson, which would also be visible in the WBF channel [54]. There would be little doubt that this light Higgs is not part of the SM Higgs sector.
While this point might look slightly exceptional, in particular because of the large invisible light Higgs branching ratio, the only parameters which matter for sbottom searches at the LHC are the fairly small sbottom masses. The current direct experimental limits come from the Tevatron search for jets plus missing energy, where at least for CDF the jets include bottom quark tags [2]. However, for sbottom production the Tevatron limit has to be regarded as a limit on cross section times branching ratio. The mass limits derived in the light-flavor squark and gluino mass plane assume squark pair production including diagrams with a t-channel gluino, which is strongly reduced for final-state sbottoms. Moreover, strong mass limits arise from associated squark-gluino production, which is also largely absent in the case of sbottoms [35]. Searching for squark and gluino signatures at the LHC as a sign of physics beyond the Standard Model (such as SUSY) has one distinct advantage: once we ask for a large amount of missing energy, the typical SM background will involve a W or Z boson. Because squarks and gluinos are strongly interacting, the signal-to-background ratio S/B is automatically enhanced by a factor ฮฑ s /ฮฑ. This means that for typical squark and gluino masses below O(TeV) we expect to see signs of new physics before we see a light-Higgs signal. Most SUSY mass spectrum information is carried by the squark and gluino cascade decay kinematics [28,65], and we are confident that, though non-negligible, QCD effects will not alter these results dramatically [6]. The most dangerous backgrounds to cascade decay analyses are not SM Z+jets events, but SUSY backgrounds, for example simple combinatorics with two decay chains in the same event. The (less likely) case that SUSY particles are produced at the LHC, but do not decay within the detector, is an impressive show of the power of the LHC detectors -finding and studying these particles does not pose a serious problem at either ATLAS or CMS [66]. Off-Shell Effects in Sbottom Decays From a theoretical point of view, the production process pp โ†’b 1b * 1 with subsequent dual decaysb 1 โ†’ bฯ‡ 0 1 can be described using two approximations. Because the sbottoms are scalars, their production and decay matrix elements can be separated by an approximate Breit-Wigner propagator. Furthermore, the sbottom width ฮ“b 1 = 0.53 GeV is sufficiently small to safely assume that even extending the Breit-Wigner approximation to a narrow-width description should result in percent-level effects, unless cuts force the sbottoms to be off-shell. For this entire LHC section we require basic cuts for the bottom quark, whether it arises from sbottom decays or from QCD jet radiation: p T,b > 20 GeV and |ฮท b | < 4. We require any two bottom jets to be separated by โˆ†R bb > 0.4. There are no additional cuts, for example on missing transverse energy, because we do not attempt a signal vs. background analysis. Instead, we focus on the approximations which enter the signal process calculation. To stress the importance of properly understanding the signal process' distributions, we show those for p max T,b and p T / for the signal process gg โ†’ bbฯ‡ 0 1ฯ‡ 0 1 and for the main SM background pp โ†’ bbฮฝฮฝ in Fig. 1. As expected, all final-state particles are considerably harder for the signal process. This is due to heavy intermediate sbottoms in the final state. 
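The basic cuts defined above (p_T,b > 20 GeV, |η_b| < 4, and ΔR_bb > 0.4 for any pair of bottom jets) amount to a simple parton-level filter. A sketch of such a filter is given below; the event representation is hypothetical and is not the one used by any of the generators.

```python
# Parton-level sketch of the basic b-jet cuts used in this section:
# p_T,b > 20 GeV, |eta_b| < 4, and Delta R(b,b) > 0.4 for any b-jet pair.
# The event format (list of (pt, eta, phi) tuples) is hypothetical.
import math
from itertools import combinations

def delta_r(j1, j2):
    dphi = abs(j1[2] - j2[2])
    dphi = min(dphi, 2.0 * math.pi - dphi)      # wrap the azimuthal angle
    return math.hypot(j1[1] - j2[1], dphi)

def passes_basic_cuts(b_jets, pt_min=20.0, eta_max=4.0, dr_min=0.4):
    if not all(pt > pt_min and abs(eta) < eta_max for pt, eta, _ in b_jets):
        return False
    return all(delta_r(a, b) > dr_min for a, b in combinations(b_jets, 2))

# Hypothetical event with two b jets: (pt [GeV], eta, phi)
print(passes_basic_cuts([(105.0, 0.8, 0.3), (64.0, -1.9, 2.7)]))   # True
```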
Historically, these kinds of distributions for QCD backgrounds have played an important role illustrating progress in the proper description of jet radiation, a discussion we turn to in the next section. The p T / distribution is only a parton-level approximation, i.e. the transverse momentum of theฯ‡ 0 1ฯ‡ 0 1 or ฮฝฮฝ pair and does not include b decays. However, we expect the b-decay contributions to be comparably small and largely balanced between the two sbottom decays. The effects of the Breit-Wigner approximation compared to the complete set of off-shell diagrams are shown in Fig. 2. After basic cuts the cross section for the process gg โ†’b 1b * 1 โ†’ bbฯ‡ 0 1ฯ‡ 0 1 is 1120 fb. Because of the roughly 250 GeV mass difference between the decaying sbottom and the final-state neutralino, even the softer b jet p T distribution peaks at 100 GeV. As expected from phase space limitations, the harder of the b jets is considerably more central, but for both of the final-state bottom jets an additional tagging-inspired cut |ฮท b | < 2.5 would capture most events. Including all off-shell contributions, i.e. studying the complete process gg โ†’ bbฯ‡ 0 1ฯ‡ 0 1 , leads to a small cross section increase, to 1177 fb after basic cuts. The additional events are concentrated at softer jet transverse momenta (p T,b 60 GeV) and alter the shape of the distributions sizeably. The diagrams which can contribute to off-shell effects are, for example, bottom quark pair production in association with a slightly off-shell Z, where the Z decays to two neutralinos. The remaining QCD process gg โ†’ bb produces much softer b jets, because of the lack of heavy resonances. Luckily, this considerable distribution shape change is mostly in a phase space region plagued by large background, as shown in Fig. 1, therefore will be removed in an analysis. On the other hand, there is no guarantee that off-shell effects will always lie in this kind of phase space region, and in Fig. 2 we can see that the Breit-Wigner approximation is by no means perfect. Bottom-Jet Radiation Just as with light-flavor squarks in qq scattering, LHC could produce sbottom pairs from a bb initial state. Bottom densities [67] and SUSY signatures at the LHC are presently undergoing careful study [68]. However, for heavy Higgs production it was shown that bottom densities are the proper description for processes involving initial-state bottom quarks. The comparison between gluon-induced [69] and bottom-induced [70] processes backs the bottom-parton approach, as long as the bottom partons are defined consistently [71]. The bottom-parton picture for Higgs production becomes more convincing the heavier the final state particles are [72], i.e. precisely the kinematic configuration we are interested in for SUSY particles [68]. Sbottom pair production is the ideal process for a first attempt to study the effects of bottom jet radiation on SUSY-QCD signatures. In the fixed-flavor scheme (only light-flavor partons) the leading-order production process for sbottom pairs is 2 โ†’ 2 gluon fusion. If we follow fixed-order perturbation theory, the radiation of a jet is part of the NLO corrections [35]. This jet is likely to be an initial-state gluon, radiated off the gg or qq initial states. Crossing the final-and initial-state partons, qg scattering would contribute to sbottom pair production at NLO, adding a light-flavor quark jet to the final state. 
The perturbative series for the total rate is stable, and as long as the additional jet is sufficiently hard (p T,j 50 GeV), the ratio of the inclusive cross sections is small: ฯƒbb j /ฯƒbb โˆผ 1/3 [6]. With the radiation of two jets (at NNLO in the fixed-flavor scheme), the situation becomes more complicated. We know that QCD jet radiation at the LHC is not necessarily softer than jets from SUSY cascade decays [6]. This jet radiation can manifest itself as a combinatorial background in a cascade analysis. Here we study the energy spectrum of bottom jets from the decayb 1 โ†’ bฯ‡ 0 1 , so additional bottom jets from the initial state lead to combinatorial background. Once we radiate two jets from the dominant gg initial state, bottom jets appear as initial-state radiation (ISR). In the total rate this process can be included just by using the variable-flavor scheme in the leading-order cross section, as discussed above. As expected, the rate for the production process gg โ†’ bbbbฯ‡ 0 1ฯ‡ 0 1 of 130.7 fb is considerably suppressed compared to the 1177 fb for inclusive (off-shell) sbottom pair production. Again, we require p T,b > 20 GeV. The b-jet multiplicity is expected to decrease once we require harder b-jets in a proper analysis. The reduction factor for two additional bottom jets is โˆผ 1/3 ร— 1/3, as quoted above from Ref. [6] for general jet radiation. However, we include considerably softer b jets as compared to the 50 GeV light-flavor jets which lead to a similar reduction factor. The reason is that high-mass final states at the LHC are most efficiently produced in quark-gluon scattering, and in our analysis we are limited to gluons for both incoming partons. From our more conceptual point of view, the crucial question is how to identify the decay b jets, which carry information on the SUSY mass spectrum [65]. Because the ISR b jets arise from gluon splitting, they are predominantly soft and forward in the detector. To identify the decay b quarks we can try to exclude the most forward and softest of the four b jets in the event, to reduce the combinatorial background. In Fig. 3 we show the ordered p T,b spectra of the four final-state sbottoms. Because of kinematics we would expect that it should not matter if we order the sbottoms according to p T,b or |ฮท b |, at least for grouping into initial-state and decay jet pairs. However, we see that this kinematical argument is not well suited to remove combinatorial backgrounds. Only the most forward b jet is indeed slightly softer than the other three, but the remaining three p T,b distributions ordered according to |ฮท b | are indistinguishable. After discussing the combinatorial effects of additional b jets in the final state, the important question is whether additional b-jet radiation alters the kinematics of sbottom production and decay. In Fig. 4 we show the p max T,b and the p T / distributions for bbฯ‡ 0 1ฯ‡ 0 1 and bbbbฯ‡ 0 1ฯ‡ 0 1 production at the LHC; those most likely to be useful in suppressing SM backgrounds. The soft ends of the p T,b distributions do not scale because in the 4b case the hardest b jet becomes less likely to be a decay b-jet. Instead, a soft decay b quark will be replaced with a harder initial-state b jet in our distribution. The 4b distribution peaks at lower p T,b because the minimum cut on p T,b of the initial-state b jets eats into the steep gluon densities. At very large values of p T,b this effect becomes relatively less important, and the two distributions scale with each other. 
The $\slashed{p}_T$ distributions, however, are sensitive to $p_{T,b}$. If both $b$ jets come from heavy-particle decays, the decay can alter their back-to-back kinematics. In contrast, additional light-particle production balances out the event, leading to generally smaller $\slashed{p}_T$ values. We might be lucky in the final analysis, because a proper analysis after background-rejection cuts will be biased toward small $\slashed{p}_T$, and will thus be less sensitive to $b$-jet radiation and combinatorial backgrounds.

Sbottom Production at an ILC

At an ILC we would be able to obtain more accurate mass and cross-section measurements, provided the collider energy is sufficient to produce sbottom pairs. This is due to the much cleaner lepton-collider environment relative to a hadron collider, even though the lower rate can statistically limit measurements. For this study we again choose the parameter point described in Appendix C. There, the sbottom mass is low, but the appearance of various Higgs and neutralino backgrounds complicates the analysis. With sbottom production we encounter a process where multiple channels and their interferences contribute to the total signal rate; this is more typical than not. We are forced to understand off-shell effects to perform a sensible precision analysis. Assuming 800 GeV collider energy, the production channels $\tilde b_1\tilde b_1^*$ and $\tilde b_1\tilde b_2^*$ are open. From the squark-mixing matrix it can be seen that the lighter of the two sbottoms, $\tilde b_1$, is predominantly right-handed. Its main decay mode is to $b\tilde\chi_1^0$. Therefore, as with sbottom production at the LHC, the principal final state to be studied is $b\bar b$ plus missing energy. At the LHC, sbottom pair production dominates this final state because it is the only strongly-interacting production channel. In contrast, sbottom pair production at an ILC would proceed via electroweak interactions. Hence, all electroweak SUSY and SM processes that contribute to the same final state need to be considered. In particular, a number of $2\to 2$ production processes contribute to $e^+e^-\to b\bar b\tilde\chi_1^0\tilde\chi_1^0$. All cross sections, in different approximations as well as in a complete calculation including all interferences, are displayed in Table 2. Once we fold in the branching ratios, fewer processes contribute significantly. The SM process $e^+e^-\to b\bar b\nu_i\bar\nu_i$ ($i=e,\mu,\tau$) is dominated by $WW$ fusion to $Z/h$ (followed by $Z/h\to b\bar b$) and by $Zh/ZZ$ pair production. It represents a significant irreducible background, as a neutrino cannot be distinguished from the lightest neutralino in high-energy collisions. Thus, we refer to this final state with neutrinos as the SM background.

Numerical Approximations

It is instructive to compare various levels of approximation found in the literature before moving to a complete treatment of the process. The simplest approximation for resonant production and decay is to multiply the production cross section by the appropriate branching fraction. This narrow-width approximation (NWA) is expected to hold as long as $\Gamma/m\ll 1$.

Table 2: SUSY cross sections contributing to $e^+e^-\to b\bar b\tilde\chi_1^0\tilde\chi_1^0$ (left) and the SM background $e^+e^-\to b\bar b\nu\bar\nu$ (right). The columns assume: on-shell production; the same, including the branching ratio into $b\bar b\tilde\chi_1^0\tilde\chi_1^0$ and $b\bar b\nu\bar\nu$; and with a Breit-Wigner propagator. The incoherent sum is shown at the bottom. In the SM case, only the $2\to 3$ processes are summed, to avoid double-counting. The exact tree-level result includes all Feynman diagrams and interferences.
The last line shows the effect of initial-state radiation (ISR) and beamstrahlung.

Replacing on-shell intermediate states by Breit-Wigner propagators takes into account off-shell corrections that originate from the nontrivial resonance kinematics. However, the Breit-Wigner amplitude is not gauge-invariant off resonance, so the precise result depends on the choice of gauge (unitarity gauge in our calculations). Both this approximation and the NWA neglect interferences with off-resonant diagrams. To obtain the full tree-level result, all Feynman graphs and their interferences must be taken into account, and an unambiguous breakdown into resonance channels is no longer possible. Perturbation theory breaks down at the poles of intermediate on-shell states. The emerging divergences have to be regularized, for example via finite particle widths, which unitarize the amplitude. Not surprisingly, naïvely including particle widths violates gauge invariance, but schemes exist which properly address this problem [30]. All our codes use the fixed-width scheme, which includes the finite width even in the spacelike region and avoids problems of gauge invariance in the processes we consider here. Finally, in many cases the effects of initial-state radiation (ISR) and beamstrahlung are numerically of the same order of magnitude as the full resonance and interference corrections, or even larger, and therefore need to be addressed.

Particle Widths

As discussed before, we must include finite widths for all intermediate particles that can become on-shell. For the processes discussed here this includes the neutral Higgs and $Z$ bosons, the neutralinos, and the sbottoms. It is tempting to merely treat the widths as externally fixed numerical parameters. This, however, can lead to a mismatch: consider a tree-level process with an intermediate resonance with mass $M$ and total width $\Gamma$. The tree-level cross section contains a factor $1/[(p^2-M^2)^2+M^2\Gamma^2]$. In the vicinity of the pole a factor $1/\Gamma$ is picked up. If $\Gamma\ll M$, this contribution to the cross section can be approximated by the on-shell production cross section multiplied by the branching ratio for resonance decay into the desired final state $X$, i.e. $\mathrm{BR}_X=\Gamma_X/\Gamma$ (cf. Sec. 2.4). While the total width $\Gamma$ is an external numerical parameter, the partial width $\Gamma_X$ is implicitly computed by the integration program at tree level during the cross-section evaluation. This can lead to a noticeable mismatch, especially if the external total width is calculated with higher-order corrections. Formally, the use of loop-improved widths induces an order mismatch in any leading-order calculation, which, in principle, is allowed. However, in reality, dominant corrections might reside in both the decay (width) calculation and the production process, canceling each other in the full result. The NLO corrections to the full process that would remedy the problem are generally unavailable, at least in a form suitable for event generation [74]. To illustrate this reasoning, consider a case where the resonance has only one decay channel. Then, in the narrow-width limit, the factorized result is reproduced only if the tree-level width is taken as an input computed from exactly the same parameters as the complete process. While this looks like a trivial requirement, it should be stressed that most MSSM decay codes return particle widths that include higher orders, either explicitly or implicitly through the introduction of running couplings and mass parameters.
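To make the mismatch concrete, the toy numerical sketch below (ours, with placeholder mass and width values; not code from the paper) integrates the Breit-Wigner factor quoted above for a single-channel resonance. Pairing the tree-level partial width in the numerator with a loop-improved total width in the propagator rescales the resonant rate by roughly $\Gamma_{\rm tree}/\Gamma_{\rm loop}$, exactly the kind of uncontrolled shift described in the text.

```python
from scipy import integrate

# Toy single-channel resonance: the only decay is into X, so Gamma_X = Gamma_tree.
M = 500.0           # resonance mass [GeV] (placeholder value)
gamma_tree = 2.0    # tree-level total width [GeV] (placeholder)
gamma_loop = 2.4    # externally supplied, loop-improved width [GeV] (placeholder)

def resonant_rate(gamma_total, gamma_partial):
    """Integrate gamma_partial / ((s - M^2)^2 + M^2 gamma_total^2) over s around the pole.
    Up to channel-independent prefactors this mimics sigma_prod x BR_X."""
    bw = lambda s: gamma_partial / ((s - M**2) ** 2 + M**2 * gamma_total**2)
    val, _ = integrate.quad(bw, (M - 50.0) ** 2, (M + 50.0) ** 2, points=[M**2])
    return val

consistent = resonant_rate(gamma_tree, gamma_tree)   # same width everywhere
mismatched = resonant_rate(gamma_loop, gamma_tree)   # tree-level Gamma_X, loop-improved Gamma

print(f"mismatched / consistent rate:               {mismatched / consistent:.3f}")
print(f"naive expectation Gamma_tree / Gamma_loop:  {gamma_tree / gamma_loop:.3f}")
```

In the narrow-width limit both printed numbers agree, illustrating that an inconsistent width choice simply rescales the factorized rate rather than improving it.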
Similarly, for the $Z$-boson width one is tempted to insert the measured value, which in the best of all worlds corresponds to the all-orders perturbative result. To avoid the problems mentioned above, in this paper we calculate all relevant particle widths in the same tree-level framework used for the full process. For completeness, we list them in Tab. 4 of Appendix C, corresponding to the SLHA input file used for the collider calculation. Our leading-order widths agree with those of sdecay [75].

Testing the Narrow-Width Approximation

An estimate of the effects of the NWA and of Breit-Wigner propagators is shown in Tab. 2. Replacing on-shell intermediate states by Breit-Wigner functions in the SUSY processes (left panel) increases the total cross section by 15%. Breaking the cross section down into individual contributions, it becomes apparent that this increase is mainly due to the heavy-neutralino channels. In contrast, the $Z$, Higgs and sbottom channels are fairly well described by the on-shell approximation of Eq. (1). Including the complete set of all tree-level Feynman diagrams with all interferences results in a decrease of 11%. Obviously, continuum and interference effects are non-negligible and must be properly taken into account. Similar considerations apply to the SM background, $e^+e^-\to b\bar b\nu\bar\nu$, shown in the right panel of Tab. 2. At a collider energy of 800 GeV, the SM process is dominated by weak-boson fusion, while pair production (ZZ/ZH) borders on negligible. For the total cross section, the quality of the NWA and of the Breit-Wigner approximation can again be read off Tab. 2. Finally, we compute the effect of ISR and beamstrahlung: the SUSY cross section increases by 15%, a general effect seen for processes dominated by particle pair production well above threshold. (In that range the cross sections are proportional to $1/\hat s$ and therefore profit from the reduction in effective energy due to photon radiation.) In contrast, for the SM background, adding ISR and beamstrahlung amounts to a reduction by 8%. This is expected for a $t$-channel-dominated process with asymptotically flat energy dependence. Apart from total cross sections, it is crucial to understand off-shell effects in distributions. They are significant in the neutralino channels $e^+e^-\to\tilde\chi_1^0\tilde\chi_i^0$ ($i=2,3,4$), the dominant SUSY backgrounds to our sbottom signal. For this mass spectrum, the $\tilde\chi_2^0$ has a three-body decay to $q\bar q\tilde\chi_1^0$; here the focus is on $q=b$. The higgsino-like $\tilde\chi_3^0$ has a two-body decay $\tilde\chi_3^0\to Z\tilde\chi_1^0$ with a branching fraction close to 100% [75]. In the complete calculation, neither the decaying $\tilde\chi_3^0$ nor the intermediate $Z$ is forced on-shell. Continuum effects play a role. This explains the differences in the decay spectrum between the full calculation and the approximation using Breit-Wigner propagators, as seen in Fig. 5. There, we include neutralino pair production, $e^+e^-\to\tilde\chi_1^0\tilde\chi_3^0$. In Fig. 5 we show the $b\bar b$ invariant-mass spectrum for the process $e^+e^-\to\tilde\chi_1^0\tilde\chi_3^0\to b\bar b\tilde\chi_1^0\tilde\chi_1^0$. Assuming a two-body $\tilde\chi_3^0$ decay, one would expect a sharp Breit-Wigner $Z$ resonance at 91.18 GeV. Instead, the resonance is not Breit-Wigner-like and is surrounded by a nearly flat continuous distribution at both high and low masses. Clearly, this would not be accounted for by a factorized production-decay approximation. In fact, it stems from a highly off-shell three-body decay $\tilde\chi_3^0\to b\bar b\tilde\chi_1^0$ via an intermediate sbottom.
As a background to sbottom pair production, this process gives the dominant contribution, because we can easily cut against on-shell neutralino production. The significant low-mass tail explains the 30% enhancement for this channel seen in Tab. 2. Similar reasoning holds for other channels. The results in Tab. 2 also demonstrate that photon radiation, both in the elementary process (ISR) and as a semi-classical interaction of the incoming beams (beamstrahlung), cannot be neglected. For the numerical results, ISR is included using the third-order leading-logarithmic approximation [76], and beamstrahlung using the TESLA 800 parameterization in circe [77]. In both cases the photon radiation is predominantly collinear with the incoming beams and therefore invisible. Therefore, all distributions depending on the missing momentum, i.e. the momentum of the final-state neutralinos, are distorted by such effects. In the left panel of Fig. 6 we show the missing invariant-mass spectrum for the full process $e^+e^-\to b\bar b\tilde\chi_1^0\tilde\chi_1^0$ without ISR and beamstrahlung. Two narrow peaks are clearly visible, corresponding to the one light and the two (unresolved) heavy Higgs bosons. These peaks sit on top of a continuum reaching a maximum around 500 GeV, dominantly stemming from neutralino and sbottom pairs. We include ISR and beamstrahlung in the right panel of Fig. 6. They tend to wash out the two sharp peaks, with a long tail to higher invariant masses. Without explicitly showing it, we emphasize that the same happens to the SM background, where the $Z$ boson decays invisibly into $\nu\bar\nu$.

Isolating the sbottom-pair signal

According to Tab. 2, the dominant contribution to the $b\bar b\tilde\chi_1^0\tilde\chi_1^0$ final state at an ILC is neutralino pair production. To study the sbottom sector, its contribution needs to be isolated with kinematic cuts. In addition, vector-boson fusion into $Z$ and Higgs bosons represents a non-negligible background and has to be reduced accordingly. We see that Higgs-boson and heavy-sbottom production are of minor importance. An obvious cut for background reduction is on the reconstructed $b\bar b$ invariant mass. Fig. 7 shows the distribution for the full process, with all Feynman diagrams and including ISR and beamstrahlung. SM contributions (light gray) and the MSSM (dark) must be superimposed to obtain the complete signal and background result, since neutrinos cannot be distinguished from neutralinos. The spectrum depicted in Fig. 7 motivates the $M_{b\bar b}$ cuts of Eq. (7). This cut retains mostly sbottom-pair signal events, with some continuum background. In the crude NWA (just the simple production channels $\tilde b_1\tilde b_1^*$, $\tilde\chi_1^0\tilde\chi_2^0$, $\tilde\chi_1^0\tilde\chi_3^0$ and $W^+W^-\to Z/h$, $ZZ$, $Zh$, $HA$, ..., times decay matrix elements), these cuts would remove the entire background, while only marginally affecting the signal. We show the effect of applying this cut in Tab. 3 using the various approximations. In the full calculation we retain 60% of the signal rate. While in the on-shell approximation this cut would remove 100% of the peaked backgrounds, our complete calculation including Breit-Wigner propagators retains a whopping 2.3 fb (SUSY) and 2.1 fb (SM). Surprisingly, the exact tree-level cross section without ISR is considerably smaller than that: 0.5 fb (SUSY, signal+background) and 1.8 fb (SM). Obviously, for the background SUSY processes the Breit-Wigner approximation is badly misleading if we force the phase space into the sbottom-signal region. Only the full calculation gives a reliable result.
In the absence of backgrounds, the $b$-jet energy spectrum from sbottom decays exhibits a box-like shape corresponding to the two-body decay kinematics of $\tilde b_1\to b\tilde\chi_1^0$. Assuming that $m_{\tilde\chi_1^0}$ is known from a threshold scan, the edges of the box would allow a simple kinematical fit to yield a precise determination of $m_{\tilde b_1}$. The realistic $E_b$ distribution appears in Fig. 8.

Table 3: SUSY cross sections contributing to $e^+e^-\to b\bar b\tilde\chi_1^0\tilde\chi_1^0$ (left) and the SM background $e^+e^-\to b\bar b\nu\bar\nu$ (right). The left column is the Breit-Wigner approximation without cuts. The right column is after the $M_{b\bar b}$ cuts of Eq. (7). We show the results for the incoherent sum of channels, the complete result with all interferences, and the same with ISR and beamstrahlung.

In the left panel of Fig. 8 we show the $E_b$ spectrum for the full process without cuts, including all interferences, and taking ISR and beamstrahlung into account. The large background precludes any identification of a box shape. The right panel displays the same distribution after the $M_{b\bar b}$ cuts of Eq. (7) and compares it with the ideal case (no background, no ISR, no cuts) in the same normalization. The SUSY contribution after cuts (dark area) shows the same kinematical limits as the ideal box, but the edges are washed out by the combined effects of cuts, ISR/beamstrahlung, and continuum background. However, the signal sits atop a sizable leftover SM background. As argued above, this background cannot be realistically simulated by simply concatenating particle production and decays. Without going into detail, we note that for further improvement of the signal-to-background ratio one could use beam polarization (reducing the $W^+W^-\to b\bar b$ continuum) or a cut on the missing invariant mass (to suppress $Z\to\nu\bar\nu$). For a final verdict on the measurement of sbottom properties in this decay channel, a realistic analysis must also consider fragmentation, hadronization and detector effects. NLO (if not NNLO) corrections to the signal process must be taken into account to gain some idea of realistic event rates.

Summary and Outlook

Phenomenological and experimental (Monte Carlo) analyses for new physics at colliders are usually approached at a level of sophistication which does not match the know-how we have from the Standard Model. For supersymmetric signals at the LHC and an ILC we have carefully studied effects which occur beyond simple $2\to 2$ cross-section analyses, using sbottom pair production as a simple example process.

Figure 8: The $E_b$ spectrum of the full process $e^+e^-\to b\bar b+\slashed E$, including all interferences and off-shell effects, plus ISR and beamstrahlung. The light gray histogram is the SM background, dark gray the sum of SUSY processes. The left panel is before the cut of Eq. (7), while the right panel includes the cut. Also in the right panel we show the idealized case (red) of on-shell sbottom production without ISR or beamstrahlung. The SM background is again shown in light gray, while the dark gray shows the sbottom contribution alone.

At the LHC, the reconstruction of decay kinematics is the source of essentially all information on heavy new particles. Any observable linked to cross sections instead of kinematical features is bound to suffer from much larger QCD uncertainties. Typical experimental errors from jet-energy scaling are of the same order as finite-width effects in the total cross section. However, in relevant distributions, off-shell effects can easily be larger.
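For orientation (standard two-body kinematics, not spelled out in the text; bottom mass and beam radiation neglected), the box endpoints follow from boosting the fixed-energy decay $\tilde b_1\to b\tilde\chi_1^0$ of sbottoms produced with energy $\sqrt s/2$:
\[
E_b^{\pm}=\frac{m_{\tilde b_1}^2-m_{\tilde\chi_1^0}^2}{4\,m_{\tilde b_1}^2}\Bigl(\sqrt s\;\pm\;\sqrt{s-4m_{\tilde b_1}^2}\Bigr),
\qquad
E_b^{+}E_b^{-}=\Bigl(\frac{m_{\tilde b_1}^2-m_{\tilde\chi_1^0}^2}{2\,m_{\tilde b_1}}\Bigr)^{2},
\]
so measuring both edges (or one edge plus $m_{\tilde\chi_1^0}$ from a threshold scan) fixes $m_{\tilde b_1}$. This is the kinematical fit alluded to above, before cuts, ISR/beamstrahlung and backgrounds smear the edges.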
QCD off-shell effects also include additional jet radiation from the initial state. Usually, jet radiation is treated by parton showers in the collinear approximation. For processes with bottom jets in the final state we tested this approximation by computing the effects of two additional bottom jets created through gluon splitting in the initial state. The effects on the rate are typically below 10%, but kinematical distributions do indeed change. In our case, distinguishing between initial-state bottom jets and decay bottom jets via rapidity and transverse-momentum characteristics does not look promising. Sbottom pair production at the LHC has the fortunate feature that most of these off-shell effects and combinatorial backgrounds can be removed together with the SM backgrounds, but this feature is by no means guaranteed for general SUSY processes. At an ILC, the extraction of parameters from kinematic distributions is usually more precise than more inclusive measurements. In contrast to the LHC, the typical size $\Gamma/M$ of off-shell effects exceeds the present ILC design experimental precision. It is therefore mandatory for multi-particle final states to include the complete set of off-shell Feynman diagrams in ILC studies, since they can alter signal distributions drastically. This was impressively demonstrated by our study of sbottom pair production, where we found up to 400% corrections to production rates from off-shell effects after standard cuts. Irreducible SM backgrounds to missing-energy signals can strongly distort the shapes of energy and invariant-mass distributions. Hence, if we attempt to extract masses and mass differences from invariant-mass distributions at an ILC, we must take into account off-shell effects and additional many-particle intermediate states, which can change cross sections dramatically. Simulation of initial-state radiation and beamstrahlung is mandatory to describe the shapes of resonances and distributions in a realistic linear-collider environment. To compute the effects described above we implemented the MSSM Lagrangian and the proper description of Majorana particles in the matrix-element generators madgraph/madevent, o'mega/whizard and amegic++/sherpa. To carefully check these extensions we compared several hundred SUSY production processes numerically, and performed a number of unitarity and gauge-invariance checks. All results, as well as the SLHA input file, are given in the Appendix; we are confident that this list of processes can serve as a standard reference for checking MSSM implementations in collider physics or phenomenology tools.

A. Input Parameters Used in the Comparison

Here we list the input parameters we used, taken from the blocks relevant for our purposes in the SLHA output of the softsusy program: # SOFTSUSY1.9 # B.C. Allanach, Comput. Phys. Commun. 143 (2002). Parameters used with a different value than specified in the above SLHA file are $M_W=80.419$ GeV and $M_Z=91.188$ GeV. We set all SUSY particle widths to zero, since there are no SUSY particles in the $s$-channel. (The spectrum generator softsusy does not calculate the widths of the SUSY particles; this is instead done by the program sdecay [75].) The only widths used in our comparison are set by hand, $\Gamma_W=2.048$ GeV and $\Gamma_Z=2.446$ GeV. All Higgs widths have been set to zero, as has the electron mass. The third-generation quark masses have been given the values $m_t=178.0$ GeV and $m_b=4.6$ GeV.
For the strong coupling we take $\alpha_s(M_Z)=0.118$. The $G_F$-$M_Z$-$\alpha$ scheme has been used for the SM parameters.
Centrifuge: rapid and sensitive classification of metagenomic sequences

Centrifuge is a novel microbial classification engine that enables rapid, accurate, and sensitive labeling of reads and quantification of species on desktop computers. The system uses an indexing scheme based on the Burrows-Wheeler transform (BWT) and the Ferragina-Manzini (FM) index, optimized specifically for the metagenomic classification problem. Centrifuge requires a relatively small index (4.2 GB for 4078 bacterial and 200 archaeal genomes) and classifies sequences at very high speed, allowing it to process the millions of reads from a typical high-throughput DNA sequencing run within a few minutes. Together, these advances enable timely and accurate analysis of large metagenomics data sets on conventional desktop computers. Because of its space-optimized indexing schemes, Centrifuge also makes it possible to index the entire NCBI nonredundant nucleotide sequence database (a total of 109 billion bases) with an index size of 69 GB, in contrast to k-mer-based indexing schemes, which require far more extensive space.

Microbes such as archaea and bacteria are found virtually everywhere on earth, from soils and oceans to hot springs and deep mines (Keller and Zengler 2004). They are also abundant on and inside living creatures, including a variety of niches on the human body such as the skin and the intestinal tract (Human Microbiome Project Consortium 2012). These invisible life forms perform a vast range of biological functions; they are indispensable for the survival of many species; and they maintain the ecological balance of the planet. Many millions of prokaryotic species exist (Schloss and Handelsman 2004), although only a small fraction of them (<1% in soil and even fewer in the ocean) can be isolated and cultivated (Amann et al. 1995). High-throughput sequencing of microbial communities, known as metagenomic sequencing, does not require cultivation and therefore has the potential to provide countless insights into the biological functions of microbial species and their effects on the visible world. In 2004, the RefSeq database contained 179 complete prokaryotic genomes, a number that grew to 954 genomes by 2009 and to 4278 by December 2015. Together with advances in sequencing throughput, this ever-increasing number of genomes presents a challenge for computational methods that compare DNA sequences to the full database of microbial genomes. Analysis of metagenomics samples, which contain millions of reads from complex mixtures of species, necessitates a compact and scalable indexing scheme for classifying these sequences quickly and accurately. Most of the current metagenomics classification programs suffer from slow classification speed, a large index size, or both. For example, machine-learning-based approaches such as the Naive Bayes Classifier (NBC) (Rosen et al. 2008) and PhymmBL (Brady and Salzberg 2009, 2011) classify <100 reads per minute, which is too slow for data sets that contain millions of reads. In contrast, the pseudoalignment approach employed in Kraken (Wood and Salzberg 2014) processes reads far more quickly, more than 1 million reads per minute, but its exact k-mer matching algorithm requires a large index. For example, Kraken's 31-mer database requires 93 GB of memory (RAM) for 4278 prokaryotic genomes, considerably more memory than today's desktop computers contain. Fortunately, modern read-mapping algorithms such as Bowtie (Langmead et al.
2009; Langmead and Salzberg 2012) and BWA (Li and Durbin 2009, 2010) have developed a data structure that provides very fast alignment with a relatively small memory footprint. We have adapted this data structure, which is based on the Burrows-Wheeler transform (Burrows and Wheeler 1994) and the Ferragina-Manzini (FM) index (Ferragina and Manzini 2000), to create a metagenomics classifier, Centrifuge, that can efficiently store large numbers of genome sequences, taxonomic mappings of the sequences, and the taxonomic tree.

Database sequence compression

We implemented memory-efficient indexing schemes for the classification of microbial sequences based on the FM-index, which also permits very fast search operations. We further reduced the size of the index by compressing genomic sequences and building a modified version of the FM-index for those compressed genomes, as follows. First, we observed that for some bacterial species, large numbers of closely related strains and isolates have been sequenced, usually because they represent significant human pathogens. Such genomes include Salmonella enterica with 138 genomes, Escherichia coli with 131 genomes, and Helicobacter pylori with 73 genomes available (these figures represent the contents of RefSeq as of December 2015). As expected, the genomic sequences of strains within the same species are likely to be highly similar to one another. We leveraged this fact to remove such redundant genomic sequences, so that the storage size of our index can remain compact even as the number of sequenced isolates for these species increases. Figure 1 illustrates how we compress multiple genomes of the same species by storing near-identical sequences only once. First, we choose the two genomes (G1 and G2 in the figure) that are most similar among all genomes. We define the two most similar genomes as those that share the greatest number of k-mers (using k = 53 for this study) after k-mers are randomly sampled at a rate of 1% from the genomes of the same species. In order to facilitate this selection process, we used Jellyfish (Marçais and Kingsford 2011) to build a table indicating which k-mers belong to which genomes. Using the two most similar genomes allows for better compression, as they tend to share larger chunks of genomic sequence than two randomly selected genomes. We then compared the two most similar genomes using nucmer (Kurtz et al. 2004), which outputs a list of the nearly or completely identical regions in both genomes. When combining the two genomes, we discard those sequences of G2 with ≥99% identity to G1 and retain the remaining sequences to use in our index. We then find the genome that is most similar to the combined sequences from G1 and G2 and combine it in the same manner as just described. This process is repeated for the rest of the genomes. As a result of this concatenation procedure, we obtained dramatic space reductions for many species; e.g., the total sequence was reduced from 661 to 74 Mbp (11% of the original sequence size) in S. enterica and from 655 to 107 Mbp (16%) in E. coli (see Table 1). Overall, the number of base pairs from ~4300 bacterial and archaeal genomes was reduced from 15 to 9.1 billion base pairs (Gbp).

Figure 1. Compression of genome sequences before building the Centrifuge index. All genomes are compared and similarities are computed based on shared 53-mers. In the figure, genomes G1 and G2 are the most similar pair.
Sequences of G2 that are ≥99% identical to G1 are discarded, and the remaining "unique" sequences from G2 are added to genome G1, creating a merged genome, G1+2. Similarity between all genomes is recomputed using the merged genomes. Sequences <99% identical in genome G3 are then added to the merged genome, creating genome G1+2+3. This process repeats for the entire Centrifuge database until each merged genome has no sequences ≥99% identical to any other genome.

The FM-index for these compressed sequences occupies 4.2 GB of memory, which is small enough to fit into the main memory (RAM) of a conventional desktop computer. As we demonstrate in the Supplemental Methods and Supplemental Table S1, this compression operation has only a negligible impact on classification sensitivity and accuracy.

Classification based on the FM-index

The FM-index provides several advantages over k-mer-based indexing schemes that store all k-mers in the target genomes. First, the size of the k-mer table is usually large; for example, Kraken's k-mer table for storing all 31-mers in ~4300 prokaryotic genomes occupies ~100 GB of disk space. Second, using a fixed value for k incurs a tradeoff between sensitivity and precision: classification based on exact matches of large k-mers (e.g., 31 bp) provides higher precision but at the expense of lower sensitivity, especially when the data being analyzed originate from divergent species. To achieve higher sensitivity, smaller k-mer matches (e.g., 20-25 bp) can be used; however, this results in more false-positive matches. The FM-index provides a means to exploit both large and small k-mer matches by enabling rapid search of k-mers of any length, at speeds comparable to those of k-mer table indexing algorithms (see Results). Using this FM-index, Centrifuge classifies DNA sequences as follows. Suppose we are given a 100-bp read (note that Centrifuge can just as easily process very long reads, assembled contigs from a draft genome, or even entire chromosomes). We search both the read (forward) and its reverse complement from right to left (3′ to 5′), as illustrated in Figure 2A. Centrifuge begins with a short exact match (16-bp minimum) and extends the match as far as possible. In the example shown in Figure 2A, the first 40 bp match exactly, with a mismatch at the 41st base from the right. The rightmost 40-bp segment of the read is found in six species (A, B, C, D, E, and F) that had been stored in the Centrifuge database. The algorithm then resumes the search beginning at the 42nd base and stops at the next mismatch, which occurs at the 68th base. The 26-bp segment in the middle of the read is found in species G and H. We then continue to search for mappings in the rest of the read, identifying a 32-bp segment that matches species G. Note that only exact matches are considered throughout this process, which is a key factor in the speed of the algorithm. We perform the same procedure for the reverse complement of the read which, in this example, produces more mappings with smaller lengths (17, 16, 28, 18, and 17) compared to the forward strand. Based on the exact matches found in the read and its reverse complement, Centrifuge then classifies the read using only those mappings with at least one 22-bp match. Figure 2A shows three segment mappings on the forward-strand read and one on the read's reverse complement that meet this length threshold.
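The right-to-left greedy extension just described is easy to prototype. The sketch below is our own simplified illustration, not Centrifuge code: it emulates the index with plain substring lookups against reference strings (a real implementation queries an FM-index and also searches the reverse complement), and it applies the squared-length scoring rule discussed in the next paragraphs.

```python
# Simplified sketch of Centrifuge-style segment search and scoring (illustrative only).
MIN_SEED = 16   # minimum exact match needed to open a segment
MIN_KEEP = 22   # only segments at least this long contribute to the score

def exact_segments(read, genomes):
    """Scan the read right to left, greedily extending exact matches.

    genomes maps species name -> reference sequence (plain strings here).
    Returns a list of (segment_length, set_of_species_containing_the_segment).
    """
    segments = []
    end = len(read)                        # segments end here and grow leftward
    while end >= MIN_SEED:
        best_len, best_hits = 0, set()
        for length in range(MIN_SEED, end + 1):
            segment = read[end - length:end]
            hits = {name for name, ref in genomes.items() if segment in ref}
            if not hits:                   # extension failed; previous length was maximal
                break
            best_len, best_hits = length, hits
        if best_len == 0:
            end -= 1                       # no 16-bp seed at this position; shift by one base
        else:
            segments.append((best_len, best_hits))
            end -= best_len + 1            # skip the mismatched base and continue leftward
    return segments

def score_species(segments):
    """Sum (length - 15)^2 over segments of at least MIN_KEEP bp, per species."""
    scores = {}
    for length, hits in segments:
        if length < MIN_KEEP:
            continue
        for name in hits:
            scores[name] = scores.get(name, 0) + (length - 15) ** 2
    return scores

# Toy check: a 40-bp exact match contributes (40 - 15)^2 = 625, as in Figure 2A.
```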
Centrifuge then scores each species by summing, over that species' matched segments, the squared match lengths after subtracting 15 bp, i.e. score(species) = Σ (segment length − 15)²; this assigns greater weight to the longer segments. After assessing a variety of formulas, we empirically found that the sum of squared lengths of segments provides the best classification precision. Because almost all sequences of 15 bp or shorter occur in the database by chance, we subtract 15 from the match length. Other values such as 0 and 7 bp work almost as well, while higher values such as 21 bp result in slightly lower precision and sensitivity. For the example in Figure 2, species A, B, C, D, E, and F are assigned the highest score (625), based on the relatively long 40-bp exact match. Species G and H get lower scores because they have considerably shorter matches, even though each has two distinct matches. Note that H has mappings on both the read and its reverse complement, and in this case Centrifuge chooses the strand that gives the maximum score, rather than using the summed score of both strands, which might bias it toward palindromic sequences. Centrifuge can assign a sequence to multiple taxonomic categories; by default, it allows up to five labels per sequence. (Note that this strategy differs from Kraken, which always chooses a single taxonomic category, using the lowest common ancestor of all matching species.) In Figure 2, six different species match the read equally well. In order to reduce the number of assignments, Centrifuge traverses up the taxonomic tree. First, it considers the genus that includes the largest number of species, which, in this example (Fig. 2B), is genus I, which covers species A, B, and C. It then replaces these three species with the genus, thereby reducing the number of assignments to four (genus I plus species D, E, and F). If more than five taxonomic labels had remained, Centrifuge would repeat this process for other genera and subsequently for higher taxonomic units until it reduced the number of labels to five or fewer. The user can easily change the default threshold of five labels per sequence; for example, if this threshold is set to one, then Centrifuge will report only the lowest common ancestor as the taxonomic label, mimicking the behavior of Kraken. In the example shown in Figure 2, this label would be at the family level, which would lose some of the more specific information about which genera and species the reads matched best. If the size of the index is not a constraint, then the user can also use Centrifuge with uncompressed indexes, which classify reads using the same algorithm. Although considerably larger, the uncompressed indexes allow Centrifuge to classify reads at the strain or genome level, e.g., as E. coli K12 rather than just E. coli.

Abundance analysis

In addition to per-read classification, Centrifuge performs abundance analysis at any taxonomic rank (e.g., strain, species, genus). Because many genomes share near-identical segments of DNA with other species, reads originating from those segments will be classified as multiple species. Simply counting the number of reads that are uniquely classified to a given genome (ignoring those that match other genomes) will therefore give poor estimates of that species' abundance. To address this problem, we define the following statistical model and use it to find maximum-likelihood estimates of abundance through an Expectation-Maximization (EM) algorithm. Detailed EM solutions to the model have been previously described and implemented in the Cufflinks (Trapnell et al. 2010) and Sailfish (Patro et al.
2014) software packages. Similar to how Cufflinks calculates gene/transcript expression, the likelihood of a specific configuration of species abundances α, given the read assignments C, is built from the following quantities: R, the number of reads; S, the number of species; α_j, the abundance of species j, summing to 1 over all S species; l_j, the average length of the genomes of species j; and C_ij, which is 1 if read i is classified to species j and 0 otherwise. To find the abundances α that maximize the likelihood function L(α|C), Centrifuge repeats the following EM procedure, as also implemented in Cufflinks, until the difference between the previous and the current abundance estimates, summed over species (Σ_j |α_j − α′_j|), is less than 10⁻¹⁰. In the expectation step (E-step), the estimated number of reads n_j assigned to species j is computed from the current abundances; in the maximization step (M-step), the updated abundance α′_j of species j is obtained from these expected counts. α′ is then used in the next iteration as α.

Figure 2. Classification of reads. (A) The figure shows how the score for a candidate at the species level is calculated. Given a 100-bp read, both the read (forward) and its reverse complement are searched from right to left. Centrifuge first identifies a short exact match, then continues until reaching a mismatch: the first 40-bp segment exactly matches six species (A, B, C, D, E, F), followed by a mismatch at the 41st base; the second 26-bp segment matches two species (G and H), followed by a mismatch at the 68th base; and the third 32-bp segment matches only species G. This procedure is repeated for the reverse complement of the read. Centrifuge assigns the highest score (625) to species A, B, C, D, E, and F. (B) Centrifuge then traverses up the taxonomic tree to reduce the number of assignments, first by considering the genus that includes the largest number of species, genus I, which covers species A, B, and C, and then replacing these three species with the genus. This reduces the number of assignments to four (genus I plus species D, E, and F).

Results

We demonstrated the performance of Centrifuge in four different settings involving both real and simulated reads and using several databases of different sizes: one consisting of ~4300 prokaryotic genomes (index name: p, index size: 4.2 GB), another with ~4300 prokaryotic genomes plus human and viral genomes (p + h + v, 6.9 GB), and a third comprised of NCBI nucleotide sequences (nt, 69 GB). We compared the sensitivity and speed of Centrifuge to one of the leading classification programs, Kraken (v0.10.5-beta) (Wood and Salzberg 2014). We also included MegaBLAST (Zhang et al. 2000) in our assessment, as it is a very widely used program that is often used for classification. In terms of both sensitivity and precision of classification, Centrifuge demonstrated accuracy similar to the other programs we tested. Centrifuge's principal advantage is that it provides a combination of fast classification speed and low memory requirements, making it possible to perform large metagenomics analyses on a desktop computer using the p or p + h + v index. For example, Centrifuge took only 47 min on a standard desktop computer to analyze 130 paired-end RNA sequencing runs (a total of 26 GB) from patients infected with Ebola virus (Baize et al. 2014; Gire et al. 2014; Park et al. 2015), as described below.
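The E-step/M-step loop described above can be written compactly. The following is a minimal sketch based on our reading of the model; in particular, the assumption that a read is drawn from species j with probability proportional to α_j·l_j is ours and is not copied from the Centrifuge source.

```python
import numpy as np

def em_abundance(C, genome_lengths, tol=1e-10, max_iter=10_000):
    """Minimal EM sketch for species abundances.

    C: (R x S) 0/1 matrix, C[i, j] = 1 if read i is classified to species j
       (every read is assumed to be classified to at least one species).
    genome_lengths: length-S array of average genome lengths l_j.
    Returns abundances alpha that sum to 1.
    """
    C = np.asarray(C, dtype=float)
    l = np.asarray(genome_lengths, dtype=float)
    R, S = C.shape
    alpha = np.full(S, 1.0 / S)            # start from a uniform guess

    for _ in range(max_iter):
        # E-step: fractionally assign each read among its candidate species,
        # weighting species j by alpha_j * l_j (our modeling assumption).
        w = C * (alpha * l)
        w /= w.sum(axis=1, keepdims=True)   # each read's weights sum to 1
        n = w.sum(axis=0)                   # expected read counts n_j

        # M-step: convert expected counts back to abundances, correcting for length.
        alpha_new = (n / l) / (n / l).sum()

        if np.abs(alpha_new - alpha).sum() < tol:   # convergence test from the text
            return alpha_new
        alpha = alpha_new
    return alpha

# Tiny usage example: three reads, two species; the third read is ambiguous.
C = [[1, 0],
     [0, 1],
     [1, 1]]
print(em_abundance(C, genome_lengths=[2e6, 4e6]))
```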
Centrifuge's efficient indexing scheme makes it possible to index the NCBI nucleotide collection (nt) database, which is a comprehensive set of sequences (>36 million nonredundant sequences, ~110 billion bp) collected from viruses, archaea, bacteria, and eukaryotes, and enables rapid and accurate classification of metagenomic samples.

Comparison of Centrifuge, Kraken, and MegaBLAST on simulated reads from 4278 prokaryotic genomes

We created a simulated read data set from the 4278 complete prokaryotic genomes in RefSeq (Pruitt et al. 2014) that were used to build the database, p. From these genomes, we generated 10 million 100-bp reads with a per-base error rate of 3% using the Mason simulator, v0.1.2 (Luke et al. 2005). We used an error rate higher than that found in Illumina reads (≤0.5%) in order to model the high mutation rates of prokaryotes. Reads were generated randomly from the entire data set; thus, longer genomes had proportionally more reads. The full set of genomes is provided in Supplemental Table S2. We built indexes for each of the respective programs. Kraken and MegaBLAST require 100 GB and 25 GB of space, respectively, for their indexes. In contrast, Centrifuge requires only 4.2 GB to store and index the same genomes. The run-time memory footprint of MegaBLAST is small (Table 2) because it does not read the entire database into memory, in contrast to Kraken and Centrifuge. We classified the reads with Centrifuge, Kraken, and MegaBLAST and calculated sensitivity and precision at the genus and species levels for each program (Table 2). Centrifuge and MegaBLAST often report multiple assignments for a given read, while Kraken instead reports the lowest common ancestor. To make our evaluation consistent across the programs, we only considered uniquely classified reads. Here, we define sensitivity as the number of reads that are correctly classified divided by the total number of reads. Precision (also called positive predictive value) is defined as the number of correctly classified reads divided by the number of predictions made (i.e., reads that have no match and are not classified at a given taxonomic rank or below are not counted). At the species level, MegaBLAST provides the highest sensitivity at 78.8%, followed by Centrifuge (76.9%) and then Kraken (73.9%). Overall sensitivity is relatively low because many reads are assigned to multiple species and are considered unclassified in our evaluation. MegaBLAST provides the highest precision, 99.4%, followed closely by Kraken at 99%, then Centrifuge at 98.4%. At the genus level, MegaBLAST provides the highest sensitivity at 93.4%, followed by Centrifuge (93.1%) and then by Kraken (90.4%) (Supplemental Table S2). All three programs had near-perfect precision at the genus level, from 99.6% to 99.9%. Kraken was the fastest program on these data, classifying about 1,062,000 reads per minute (rpm), followed by Centrifuge, which was approximately one-half as fast at 563,000 rpm. MegaBLAST is far slower, processing only 327 rpm. As a side note, fast alignment programs such as Bowtie 2 (Langmead and Salzberg 2012) and BWA (Li and Durbin 2009) can be used for classifying reads, though they were not designed for that purpose. To explore such repurposing, we built a Bowtie 2 index and used Bowtie 2 on the simulated reads.

Note on Table 2: for Centrifuge, we used only uniquely classified reads to compute accuracy. To measure speed, we used 10 million reads for Centrifuge and Kraken and 100,000 reads for MegaBLAST. We ran all programs on a Linux system with 1 TB of RAM using one CPU (2.1 GHz Intel Xeon).

Bowtie 2 is very fast, processing >56,000 reads/minute, but is still only one-tenth as fast as Centrifuge. Bowtie 2 also requires 21 GB of RAM, five times more than required by Centrifuge. Bowtie 2 has classification sensitivity and precision comparable to Centrifuge (e.g., sensitivity of 96.8% and precision of 99.1% at the genus level). In addition to the above per-read classification, Centrifuge estimates abundance at various taxonomic ranks. Centrifuge's abundance assessment closely matches the true abundance distribution of genomes in the simulated reads (Supplemental Fig. S1) at the species level (Pearson's correlation coefficient of 0.919) and the genus level (correlation of 0.986).

Comparison of Centrifuge and Kraken performance on real data sets from sequencing reads of bacterial genomes

To test our method on real sequencing data sets, we downloaded 530 DNA sequencing data sets from the Sequence Read Archive (SRA). We selected them according to whether the SRA samples had been assigned a taxonomic identifier that belongs to a genus for which we have at least one genome in the database. All of the data sets were generated by whole-genome shotgun projects using recent Illumina platforms; 225 were sequenced on HiSeq and 305 on MiSeq instruments, with mean read lengths of 100 and 218 bp, respectively. Supplemental Table S3 contains a complete list of the SRA identifiers, taxonomy IDs, numbers of reads, and classification results. In total, these data contain over 560 million reads, with an average of 1,061,536 reads per sample. For this experiment, we compared Centrifuge and Kraken but omitted MegaBLAST because it would take far too long to run. Kraken was chosen as the standard for comparison because it demonstrated superior accuracy over multiple other programs in a recent comparison of metagenomic classifiers (Lindgreen et al. 2016). Figure 3 and Supplemental Figure S2 show the results for classification sensitivity, accuracy, speed, and memory usage using the database p of ~4300 prokaryotic genomes. On average, Centrifuge had slightly higher sensitivity (0.6% higher) than Kraken. Perhaps due to its use of a longer exact-match requirement (31 bases), Kraken had slightly higher precision (2%) than Centrifuge. The lower accuracy of both programs on some data sets may be due to: (1) substantial differences between the genome that we have in the database and the strain that was sequenced; (2) numerous contaminating reads from the host or reagents; or (3) a high sequencing error rate for a particular sample. For example, SRR2225903 is labeled as a strain of Acinetobacter, but 85% of the reads are assigned to Escherichia. SRR1656428 is labeled as a clinical isolate of Shigella dysenteriae, but 92% of the reads are classified as Klebsiella (note that for this experiment the taxonomy ID has since been updated by NCBI, but the name has not changed). In other instances (such as SRR1656029 and SRR1655687, labeled as clinical isolates of Ferrimonas balearica and Kytococcus sedentarius, respectively), we could not match taxonomic IDs to a substantial fraction of the reads, even when searching against the nt database. The reads might have come from a species that has no close relative in the database or could not be assigned due to poor quality. Overall accuracy for both programs was very similar.
Kraken was slightly faster, with an average run time of 39.3 sec per genome, while Centrifuge required 50.9 sec per genome (both using eight cores).

Application of Centrifuge for analyzing samples with Ebola virus and GB virus C co-infections on a desktop

To demonstrate the speed, sensitivity, and applicability of Centrifuge on a real data set, we used data from the Ebola virus disease (EVD) outbreak. The 2013-2015 EVD outbreak in West Africa cost the lives of over 11,000 people as of August 26, 2015 (WHO Ebola Situation Report, http://apps.who.int/ebola/ebolasituation-reports). In an international effort to research the disease and stop its spread, several groups sequenced the Ebola virus collected from patients' blood samples and released their data sets online (Baize et al. 2014; Gire et al. 2014; Park et al. 2015). The genomic data were used to trace the disease and mutations in the Ebola genome and to inform further public health and research efforts. Lauck et al. (2015) reanalyzed one of the data sets (Gire et al. 2014) in order to assess the prevalence and effect of GB virus C co-infection on the outcome of EVD. We analyzed 130 paired-end sequencing runs from 49 patients reported in Gire et al. (2014) using Centrifuge to look for further co-infections. This data set has a total of 97,097,119 reads (26 GB of FASTA files). The accession IDs of these data are provided in Supplemental Table S4. For this analysis, we used the database p + h + v containing all prokaryotic genomes (compressed), all viral genomes, and the human genome (total index size: 6.9 GB). Running on a desktop computer (quad-core Intel Core i5-4460 @ 3.2 GHz with 8 GB RAM), Centrifuge completed the analysis of all samples in 47 min with four cores. RNA sequencing (Mortazavi et al. 2008) requires more steps than DNA sequencing, including the reverse transcription of RNA to DNA molecules, which introduces sequencing biases and artifacts. In order to handle these additional sources of errors and remove spurious detections, we filtered the results to include only reads that have a matching length of at least 60 bp on the 2 × 100-bp reads. Figure 4 shows our classification results for the 49 patients. Centrifuge detects between 3853 and 6,781,684 Ebola virus reads per patient. As reported by Lauck et al., we also detected co-infection of Ebola virus and GB virus C in many of the patients. Centrifuge identified at least one read from this virus in 27 of the 49 patients; nine patient samples had 50 or more reads. Nine patients had between one and 10 reads matching the Hepatitis B virus, and in one sample over 1000 reads aligned uniquely to this virus. This Hepatitis B co-infection has not been reported previously, demonstrating the inherent advantage of using a metagenomics classification tool, which can also detect off-target species.

Application of Centrifuge for analyzing Oxford Nanopore MinION reads of fruitshake using the nt database

As a test of Centrifuge's nt database, we used it to analyze sequences from a mixture of common fruits and vegetables sequenced using long-read single-molecule technology. The mixture included more than a dozen common foods: grape, blueberry, yam (sweet potato), asparagus, cranberry, lemon, orange, iceberg lettuce, black pepper, wheat (flour), cherry tomato, pear, bread (wheat plus other ingredients), and coffee (beans). The "fruitshake" mixture was blended together, DNA was extracted, and sequencing was run on an Oxford Nanopore MinION.
The number of reads generated from the fruitshake sample was 20,809, with lengths ranging from 90 to 13,174 bp and a mean length of 893 bp. Although MinION platforms produce much longer reads than Illumina platforms, the MinION's high sequencing error rate (estimated at 15%) (Jain et al. 2015) prevents reads from containing long exact matches and increases the chance of noisy and incorrect matches. We initially labeled 8236 reads using Centrifuge. In order to reduce false-positive assignments for these error-prone reads, we filtered out those reads that scored ≤300 and had match lengths ≤50 bp, resulting in 3617 reads ultimately classified. Table 3 shows 14 species to which at least five reads are uniquely assigned, encompassing many of the species included in the sample, such as wheat, tomato, lettuce, grape, barley, and pear. Note that, as with any real sample, the true composition of the reads is unknown; we present these results here to illustrate (1) the use of the large nt database and (2) the use of Centrifuge on long, high-error-rate reads. Although apple was not known to be present in the sample, the five reads assigned to apple might be due to similarity between the apple and pear genomes. Twenty-six reads were identified as sheep and eight as cow, which were confirmed separately by BLAST searches. These could represent sample contamination or possibly contaminants in the sheep and cow assemblies. Missing species can be explained either by low abundance in the sample or by their genomes being substantially different from those in the Centrifuge nt database.

Figure 4. Heat map of the most abundant species in Ebola samples. The color scale encodes species abundance (the number of unique reads normalized by genome size), ranging from yellow (<0.1% of the normalized read count) to red (100%), with white representing an abundance of zero. All species that have a normalized read count over 1% in any of the samples are shown. Zaire ebolavirus dominates the samples; however, there is also a signal for other viruses in some of the patients, namely GB virus C and Hepatitis B virus.

Discussion

Centrifuge requires a relatively small index for representing and searching ~4300 prokaryotic genomes, only 4.2 GB, lean enough to fit in the memory of a personal desktop. These space-optimized indexing schemes also make it possible to index the NCBI nucleotide sequence database, which includes a comprehensive set of sequences collected from viruses, archaea, bacteria, and eukaryotes. Identical sequences have been removed to make it nonredundant, but even after this reduction the database contains over 36.5 million sequences with a total of ~109 billion base pairs (Gbp). This rapidly growing database, called nt, enables the classification of sequencing data sets from hundreds of plant, animal, and fungal species as well as thousands of viruses and bacteria and many other eukaryotes. Metagenomics projects often include substantial quantities of eukaryotic DNA, and a prokaryotes-only index cannot identify these species. The challenge in using a much larger database is the far greater number of unique k-mers that must be indexed. For example, using Kraken's default k-mer length of 31 bp, the nt database contains ~57 billion distinct k-mers. Although it employs several elegant techniques to minimize space, Kraken still requires 12 bytes per k-mer, which means it would require an index size of 684 GB for the full nt database.
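A quick back-of-the-envelope check of the sizes quoted above (our arithmetic, using decimal gigabytes):

```python
# Sanity check of the index sizes quoted above (decimal GB throughout).
distinct_kmers = 57e9        # ~57 billion distinct 31-mers in nt (as quoted)
bytes_per_kmer = 12          # Kraken's per-k-mer cost (as quoted)
print(distinct_kmers * bytes_per_kmer / 1e9)    # 684.0 -> the 684 GB figure in the text

nt_bases = 109e9             # ~109 Gbp in nt (as quoted)
centrifuge_index_gb = 69     # Centrifuge's FM-index for nt (as quoted)
print(centrifuge_index_gb * 1e9 / nt_bases)     # ~0.63 bytes per input base, below the raw sequence size
```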
Reducing the k-mer size helps only slightly: with k = 22, Kraken would require an index of 520 GB. Either of these indexes would require a specialized computer with very large main memory. Centrifuge's index is based on the space-efficient Burrows-Wheeler transform, and as a result it requires only 69 GB for the nt database, less than the raw sequence itself. BLAST and MegaBLAST are currently the only alternative methods that can classify sequences against the entire nt database; thus, we compared Centrifuge with MegaBLAST using our simulated read data, described above. MegaBLAST uses a larger index, requiring 155 GB on disk, but it does not load the entire index into memory and requires only 16 GB of RAM, while Centrifuge requires 69 GB. However, Centrifuge classified reads at a far higher speed: in our experiments on the Mason simulation data, it processed ~372,000 reads/min, over 3500 times faster than MegaBLAST, which processed only 105 reads/min. Using the much larger nt database instead of the prokaryotic database on the Mason simulated reads (Table 2) does not decrease the classification precision or sensitivity of either program at the genus level, with Centrifuge's sensitivity decreasing only marginally, by 3.2%. As the prokaryotic and nt databases continue to expand rapidly and provide more comprehensive coverage, further difficulties arise in analyzing sequencing data. For example, two major challenges remain to be addressed in the statistical estimation of abundance (Lu et al. 2016). First, the RefSeq database includes many genomes nearly identical to one another, which makes it extremely difficult to distinguish those genomes present in the sample from those that are not. For example, many strains of Chlamydia trachomatis are almost identical (>99.99%) to one another (e.g., Chlamydia trachomatis D/UW-3/CX and Chlamydia trachomatis strain Ia/CS190/96). Second, the microbial taxonomy is sometimes not based on genomic sequence similarity and contains taxonomically misnamed or misplaced species (Federhen 2015). Incorrectly positioned species (or strains) can contribute to inaccurate ancestor assignment (e.g., genus or family) in abundance estimations. For example, a genome initially identified as Anabaena variabilis ATCC 29413 was reassigned to the genus Nostoc, not Anabaena (Thiel et al. 2014). In conclusion, Centrifuge is a rapid and sensitive classifier for microbial sequences with low memory requirements and a speed comparable to the fastest systems. Centrifuge classifies 10 million reads against a database of all complete prokaryotic and viral genomes within 20 min using one CPU core and requiring <8 GB of RAM. Furthermore, Centrifuge can also build an index for NCBI's entire nt database of nonredundant sequences from both prokaryotes and eukaryotes. The search requires a computer system with 128 GB of RAM but runs over 3500 times faster than MegaBLAST.

Data access

Centrifuge is available as free, open-source software from https://github.com/infphilo/centrifuge/archive/centrifuge-genomeresearch.zip and is provided in Supplemental Data S1. The fruitshake sequencing data from this study have been submitted to the NCBI BioProject database (http://www.ncbi.nlm.nih.gov/bioproject/) under accession number PRJNA343503.

Table 3. The table shows 14 genomes to which at least five reads sequenced from the fruitshake sample were uniquely assigned. Common names in bold represent species known to be present in the mixture.
Vinogradov systems with a slice off

In memoriam Klaus Friedrich Roth

Abstract. Let $I_{s,k,r}(X)$ denote the number of integral solutions of the modified Vinogradov system of equations $$x_1^j+\ldots +x_s^j=y_1^j+\ldots +y_s^j\quad (\text{$1\le j\le k$, $j\ne r$}),$$ with $1\le x_i,y_i\le X$ $(1\le i\le s)$. By exploiting sharp estimates for an auxiliary mean value, we obtain bounds for $I_{s,k,r}(X)$ for $1\le r\le k-1$. In particular, when $s,k\in \mathbb N$ satisfy $k\ge 3$ and $1\le s\le (k^2-1)/2$, we establish the essentially diagonal behaviour $I_{s,k,1}(X)\ll X^{s+\epsilon}$.

§1. Introduction. Systems of symmetric diagonal equations are, by orthogonality, intimately connected with mean values of exponential sums, and consequently find numerous applications in the analytic theory of numbers. In this paper we consider the number $I_{s,k,r}(X)$ of integral solutions of the system of equations
$$x_1^j+\ldots+x_s^j=y_1^j+\ldots+y_s^j\quad (1\le j\le k,\ j\ne r),\eqno(1.1)$$
with $1\le x_i,y_i\le X$ $(1\le i\le s)$. This system is related to that of Vinogradov, in which the equations (1.1) are augmented with the additional slice $x_1^r+\ldots+x_s^r=y_1^r+\ldots+y_s^r$, and may be viewed as a testing ground for progress on systems not of Vinogradov type. Relatives of such systems have been employed in work on the existence of rational points on systems of diagonal hypersurfaces as well as cognate paucity problems (see, for example, [2-4]). The main conjecture for the system (1.1) asserts that whenever $r,s,k\in\mathbb N$, $r<k$ and $\epsilon>0$, then
$$I_{s,k,r}(X)\ll X^{s+\epsilon}+X^{2s-(k^2+k-2r)/2}.\eqno(1.2)$$
Here and throughout, the constants implicit in Vinogradov's notation may depend on $s$, $k$, and $\epsilon$. It is an easy exercise to establish a lower bound for $I_{s,k,r}(X)$ that shows the estimate (1.2) to be best possible, save that when $k>2$ one may expect to be able to take $\epsilon$ to be zero. Our focus in this memoir is the diagonal regime $I_{s,k,r}(X)\ll X^{s+\epsilon}$, and this we address with some level of success in the case $r=1$.

THEOREM 1.1. Let $s,k\in\mathbb N$ satisfy $k\ge 3$ and $1\le s\le (k^2-1)/2$. Then for each $\epsilon>0$, one has $I_{s,k,1}(X)\ll X^{s+\epsilon}$.

In view of the main conjecture (1.2), one would expect the conclusion of Theorem 1.1 to hold in the extended range $1\le s\le (k^2+k-2)/2$. Previous work already in the literature falls far short of such ambitious assertions. Work of the second author from the early 1990s shows that $I_{s,k,r}(X)\ll X^{s+\epsilon}$ only for $1\le s\le k$ (see [7, Theorem 1]). Meanwhile, as a consequence of the second author's resolution of the main conjecture in the cubic case of Vinogradov's mean value theorem [9, Theorem 1.1], one has the bound $I_{s,3,1}(X)\ll X^{s+\epsilon}$ for $1\le s\le 4$ (see [8, Theorem 1.3]). This conclusion is matched by that of Theorem 1.1 above in the special case $k=3$. The ideas underlying recent progress on Vinogradov's mean value theorem can, however, be brought to bear on the problem of estimating $I_{s,k,r}(X)$. Thus, it is a consequence of the second author's work on nested efficient congruencing [10, Corollary 1.2] that one has $I_{s,k,r}(X)\ll X^{s+\epsilon}$ for $1\le s\le k(k-1)/2$.
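For readers less familiar with the orthogonality relation invoked above, the mean value in question can be written explicitly (our notation, chosen to match the abstract's conventions, with $e(\theta)=e^{2\pi i\theta}$):
\[
I_{s,k,r}(X)=\int_{[0,1)^{k-1}}\Bigl|\,\sum_{1\le x\le X}e\Bigl(\sum_{\substack{1\le j\le k\\ j\ne r}}\alpha_j x^{j}\Bigr)\Bigr|^{2s}\,\mathrm d\boldsymbol\alpha ,
\]
so that bounds for $I_{s,k,r}(X)$ are exactly moment estimates for the exponential sum with the $r$-th power omitted.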
Such a conclusion could also be established through methods related to those of Bourgain, Demeter and Guth [1], though the necessary details have yet to be elucidated in the published literature. Both the aforementioned estimate I 4,3,1 (X ) X 4+ฮต , and the new bound reported in Theorem 1.1 go well beyond this work based on efficient congruencing and l 2 -decoupling. Indeed, when r = 1 we achieve an estimate tantamount to square-root cancellation in a range of 2s-th moments extending the interval 1 s k(k โˆ’ 1)/2 roughly half way to the full conjectured range 1 s (k 2 + k โˆ’ 2)/2. Our strategy for proving Theorem 1.1 is based on the proof of the estimate I 4,3,1 (X ) X 4+ฮต in [8,Theorem 1.3], though it is flexible enough to deliver estimates for the mean value I s,k,r (X ) with r 1, as we now outline. For each integral solution x, y of the system (1.1) with 1 x, y X , one has the additional equation for some integer h with |h| s X r . We seek to count all such solutions with h thus constrained. For each integer z with 1 z X , we find that whenever x, y, h satisfy (1.1) and (1.3), then one has where ฯ‰ j is 0 for 1 j < r and j r for r j k, and in which we write u i = x i +z and v i = y i +z (1 i s then the overcounting by z may be reversed to show that there is significant cancellation in the system (1.1) underpinning the mean value I s,k,r (X ). This brings us to consider the number of solutions of the system with |h i | s X r and 1 z i X (1 i 2t). This auxiliary mean value may be analysed through the use of multiplicative polynomial identities engineered using ideas related to those employed in [7]. The reader may be interested to learn the consequences of this strategy when r is permitted to exceed 1. The conclusion of Theorem 1.1 is in fact a special case of a more general result which, for r 2, unfortunately fails to deliver diagonal behaviour. THEOREM 1.2. Let r, s, k โˆˆ N satisfy k > r 1 and where ฮบ is an integer satisfying 1 ฮบ (k โˆ’ r + 2)/2. Then for each ฮต > 0, one has I s,k,r (X ) X s+(r โˆ’1)(1โˆ’1/(2ฮบ))+ฮต . When r > 1, although we do not achieve diagonal behaviour, we do improve on the estimate I s,k,r (X ) X s+r +ฮต that follows for 1 s k(k + 1)/2 from the main conjecture in Vinogradov's mean value theorem via the triangle inequality. When r > 2, the bound for I s,k,r (X ) obtained in the conclusion of Theorem 1.2 remains weaker than what could be obtained by interpolating between the aforementioned bounds I s,k,r (X ) X s+ฮต (1 s k(k โˆ’ 1)/2) and I s,k,r (X ) X s+r +ฮต (1 s k(k + 1)/2). The former bound is, however, yet to enter the published literature. In ยง2 we speculate concerning what bounds might hold for a class of mean values associated with the system (1.5). In particular, should a suitable analogue of the main conjecture hold for this auxiliary mean value, then the conclusion of Theorem 1.2 would be valid with a value of ฮบ now permitted to be as large as We refer the reader to Conjecture 2.2 below for precise details, and we note in particular the constraint (2.4). When r = 1 and k โ‰ก 0 or 3 modulo 4, this would conditionally establish the estimate I s,k,1 (X ) X s+ฮต in the range 1 s (k 2 + k โˆ’ 2)/2, and hence the main conjecture (1.2) in full for these cases. When r > 1, this conditional result establishes a bound slightly stronger than I s,k,r (X ) X s+r โˆ’1 when 1 s (k 2 +kโˆ’4)/2, which seems quite respectable. We begin in ยง2 by announcing an auxiliary mean value estimate generalizing that associated with the system (1.5). 
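The displayed equations in the strategy sketch above were lost in extraction. The LaTeX block below is an editorial reconstruction, not a verbatim quotation: it recovers the slice equation (1.3) and the shifted system from the surrounding description (|h| ≤ sX^r; ω_j = 0 for j < r and ω_j = binom(j, r) for j ≥ r) together with the binomial theorem.

```latex
% Editorial reconstruction of the displays lost above.  Every solution of
% (1.1) with 1 \le x_i, y_i \le X satisfies, for some integer h,
\[
  x_1^r + \cdots + x_s^r - y_1^r - \cdots - y_s^r \;=\; h,
  \qquad |h| \le s X^r, \tag{1.3}
\]
% and, writing u_i = x_i + z and v_i = y_i + z for an integer shift
% 1 \le z \le X, the binomial theorem turns (1.1) and (1.3) into
\[
  u_1^j + \cdots + u_s^j - v_1^j - \cdots - v_s^j \;=\; \omega_j\, h\, z^{\,j-r}
  \qquad (1 \le j \le k),
\]
% where \omega_j = 0 for 1 \le j < r and \omega_j = \binom{j}{r} for
% r \le j \le k, as described in the text.
```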
This we establish in ยง ยง3-6, obtaining a polynomial identity in ยง3 of appropriate multiplicative type, establishing a lemma to count integral points on auxiliary equations in ยง4, and classifying solutions according to the vanishing of certain sets of coefficients in ยง5. In ยง6 we combine these ideas with a divisor estimate to complete the proof of this auxiliary estimate. Finally, in ยง7, we provide the details of the argument sketched above which establishes Theorems 1.1 and 1.2. Throughout, the letters r , s and k will denote positive integers with r < k, and ฮต will denote a sufficiently small positive number. We take X to be a large positive number depending at most on s, k and ฮต. The implicit constants in the notations of Landau and Vinogradov will depend at most on s, k, ฮต, and the coefficients of fixed polynomials that we introduce. We adopt the following convention concerning the number ฮต. Whenever ฮต appears in a statement, we assert that the statement holds for each ฮต > 0. Finally, we employ the nonstandard convention that whenever G : Here and elsewhere, we use vector notation liberally in a manner that is easily discerned from the context. ยง2. An auxiliary mean value. Our focus in this section and those following lies on the system of equations (1.5), since this is intimately connected with the Vinogradov system missing the slice of degree r . Since little additional effort is required to proceed in wider generality, we establish a conclusion in which the monomials z jโˆ’r (r j k) in (1.5) are replaced by independent polynomials f j (z). We begin in this section by introducing the notation required to state our main auxiliary result. Let t be a natural number. When 1 j t, consider a non-zero polynomial f j โˆˆ Z[x] of degree k j . We say that f = ( f 1 , . . . , f t ) is well-conditioned when the degrees of the polynomials f j satisfy the condition and there is no positive integer z for which f 1 (z) = ยท ยท ยท = f t (z) = 0. Let X be a positive number sufficiently large in terms of t, k and the coefficients of f . We define the exponential sum g(ฮฑ; X ) by putting g(ฮฑ; X ) = |h| X r 1 z X e(h( f 1 (z)ฮฑ 1 + ยท ยท ยท + f t (z)ฮฑ t )). By orthogonality, the mean value A s,r (X ; f ) counts the number of integral solutions of the system of equations Finally, we define the mean value with |h i | X r and 1 z i X (1 i 2s). The system (2.3) plainly generalizes (1.5). Our immediate goal is to establish the mean value estimate recorded in the following theorem. THEOREM 2.1. Let r , s and t be natural numbers with t 2s โˆ’ 1. Then whenever f is a well-conditioned t-tuple of polynomials having integral coefficients, one has A s,r (X ; f ) X r (2sโˆ’1)+1+ฮต . Note that when r = 1, the conclusion of this theorem is tantamount to exhibiting square-root cancellation in the mean value (2.2), so is essentially best possible. Indeed, even in situations wherein r > 1, the solutions of (2.3) in which z 1 = z 2 = ยท ยท ยท = z 2s make a contribution to A s,r (X ; f ) of order X ยท (X r ) 2sโˆ’1 , and so the conclusion of Theorem 2.1 is again essentially best possible. Henceforth, we restrict our attention to the situation described by the hypotheses of Theorem 2.1. Thus, we may suppose that t 2s โˆ’ 1, and that f is a well-conditioned t-tuple of polynomials f j โˆˆ Z[x] with deg( f j ) = k j 0. It seems not unreasonable to speculate that the estimate claimed in the statement of Theorem 2.1 should remain valid when s is significantly larger than (t + 1)/2. 
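For the same reason, the definitions of §2 are restated below in cleaned-up form: the exponential sum g(α; X) as given in the text, the mean value A_{s,r}(X; f) as its 2s-th moment, and the counting interpretation supplied by orthogonality. The precise shape of the counted system is a reconstruction from the surrounding text rather than a verbatim quotation.

```latex
% Cleaned-up form of the definitions of Section 2 (the counted system is a
% reconstruction from the surrounding text).
\[
  g(\boldsymbol{\alpha}; X) \;=\; \sum_{|h| \le X^r} \; \sum_{1 \le z \le X}
    e\bigl( h \,( f_1(z)\,\alpha_1 + \cdots + f_t(z)\,\alpha_t ) \bigr),
\]
\[
  A_{s,r}(X; \mathbf{f}) \;=\; \int_{[0,1)^t}
    \bigl| g(\boldsymbol{\alpha}; X) \bigr|^{2s} \, d\boldsymbol{\alpha},
\]
% which, by orthogonality, counts the integral solutions of
\[
  \sum_{i=1}^{s} h_i f_j(z_i) \;=\; \sum_{i=s+1}^{2s} h_i f_j(z_i)
  \qquad (1 \le j \le t),
\]
% with |h_i| \le X^r and 1 \le z_i \le X (1 \le i \le 2s).  Theorem 2.1 then
% reads A_{s,r}(X; \mathbf{f}) \ll X^{r(2s-1)+1+\epsilon} whenever t \ge 2s-1.
```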
The total number of choices for the 2s pairs of variables h i , z i occurring in the system (2.3) is of order (X r +1 ) 2s . Meanwhile, the t equations comprising (2.3) involve monomials having typical size of asymptotic order X r +k j (1 j t). Thus, for large s, one should expect that Keeping in mind the diagonal solutions discussed above, one is led to the following conjecture. CONJECTURE 2.2. Let r , s and t be natural numbers, and suppose that f is a well-conditioned t-tuple of polynomials having integral coefficients, with deg( f j ) = k j (1 j t). Then one has In the special case in which t = k โˆ’r + 1 and k j = j โˆ’ 1 (1 j t) relevant to the system (1.5), this conjectural bound reads In such circumstances, one finds that We finish this section by remarking that the estimate A s,r (X ; f ) X 2r s is fairly easily established when t 2s, a stronger condition than that imposed in Theorem 2.1, as we now sketch. We may suppose that t = 2s without loss, and in such circumstances the equations (2.3) may be interpreted as a system of 2s linear equations in 2s variables h i . There are O(X 2s ) choices for the variables z i , By applying the theory of Schur functions (see Macdonald [5, Ch. I]) as in the proof of [6, Lemma 1], one finds that where the polynomial (z; f ) is asymptotically definite, meaning that whenever , then we may fix h i , and interpret the system as a mean value of exponential sums, applying the triangle inequality. An application of Hรถlder's inequality reveals that if such solutions dominate, then and the desired conclusion follows. Meanwhile, if z i is sufficiently large for each index i, then | (z; f )| is strictly positive and hence (2.5) can hold only when z i = z j for some indices i and j with 1 i < j 2s. By symmetry we may suppose that i = 2s โˆ’ 1 and j = 2s, and then we obtain from (2.3) the new system of equations This new system is of similar shape to (2.3), and we may apply an obvious inductive argument to bound the number of its solutions. Here, we keep in mind that given h 2sโˆ’1 , there are O(X r ) possible choices for h 2sโˆ’1 and h 2s . Thus we conclude that if this second class of solutions dominates, then one has This completes our sketch of the proof that when t = 2s, the total number of solutions counted by A s,r (X ; f ) is O(X 2r s ). The reader will likely have no difficulty in refining this argument to deliver the conclusion of Theorem 2.1 when t = 2s. The structure of the polynomials h f j (z) underlying the mean value A s,r (X ; f ) permits polynomial identities to be constructed of utility in constraining solutions of the underlying system of equations (2.3). In this section we construct such identities. For the sake of concision, when n is a natural number and 1 j t, we define the polynomial ฯƒ j,n = ฯƒ j,n (z; h) by putting is a wellconditioned (2n + 1)-tuple of polynomials having integral coefficients. Then there exists a polynomial n (w) โˆˆ Z[w 1 , . . . , w 2n+1 ] whose total degree and coefficients depend at most on n, k and the coefficients of f, having the property that identically in z and h, and yet Proof. We apply an argument similar to that of [7, Lemma 1] based on a consideration of transcendence degrees. Let K = Q(ฯƒ 1,n , . . . , ฯƒ 2n+1,n ). Then K โŠ† Q(z 1 , . . . , z n , h 1 , . . . , h n ), so that K has transcendence degree at most 2n over Q. It follows that the 2n + 1 polynomials ฯƒ 1,n (z; h), . . . , ฯƒ 2n+1,n (z; h) cannot be algebraically independent over Q. 
Consequently, there exists a nonzero polynomial n โˆˆ Z[w 1 , . . . , w 2n+1 ] satisfying the property (3.1). It remains now only to confirm that a choice may be made for this nontrivial polynomial n in such a manner that property (3.2) also holds. In order to establish this claim, we begin by considering any non-zero polynomial n of smallest total degree satisfying (3.1). Suppose, if possible, that n (ฯƒ 1,n+1 , . . . , ฯƒ 2n+1,n+1 ) is also identically zero. Then the polynomials and must also be identically zero for 1 i n + 1. Write in which we evaluate the right-hand side at Then it follows from an application of the chain rule that the vanishing of the polynomials (3.3) and (3.4) implies the relations Notice here that we have deliberately omitted the index i = n + 1 from the relations (3.5), since this is superfluous to our needs. In order to encode the coefficient matrix associated with the system of linear equations in u described by the relations (3.5) and (3.6), we introduce a block matrix as follows. We define the n ร— (2n + 1) matrix , and then define the (2n + 1) ร— (2n + 1) matrix D n via the block decomposition We claim that det(D n ) is not identically zero as a polynomial. The confirmation of this fact we defer to the end of this proof. With the assumption det(D n ) = 0 in hand, one sees that the system of equations (3.5) and (3.6) has only the trivial solution u = 0 over K . However, since n (w) is a non-constant polynomial, at least one of the derivatives โˆ‚ โˆ‚w j n (w 1 , . . . , w 2n+1 ) (1 j 2n + 1) must be non-zero. Suppose that the partial derivative with respect to w J is nonzero. Then there exists a non-constant polynomial * having the property that, since u J = 0, one has * n (ฯƒ 1,n (z; h), . . . , ฯƒ 2n+1,n (z; h)) = 0. But the total degree of * n is strictly smaller than that of n , contradicting our hypothesis that n has minimal total degree. We are therefore forced to conclude that the relation (3.2) does indeed hold. We now turn to the problem of justifying our assumption that det(D n ) = 0. We prove this assertion for any well-conditioned (2n + 1)-tuple of polynomials f by induction on n. Observe first that when n = 0, one has det Equipped with this notation, we define the minors In this way, we discern that for appropriate choices of ฯƒ (a) โˆˆ {1, โˆ’1}, the precise nature of which need not detain us, one has det(D n ) = aโˆˆI ฯƒ (a)U (a)V (a). 2}) is not identically zero. In view of (2.1), moreover, if the leading coefficients of f 1 and f 2 are c 1 and c 2 , respectively, then the leading monomial in U ({1, 2}) is By relabelling indices and then applying the inductive hypothesis for the It follows that U ({1, 2}) is also not identically zero. Also, since no other minor of the shape U (a), with a โˆˆ I and a = {1, 2}, has degree k 1 + k 2 โˆ’ 1 or greater with respect to z 1 , we deduce that det(D n ) is not identically zero. This confirms the inductive hypothesis for the index n and completes the proof of our claim for all n. Henceforth, when n 1, we consider a fixed choice for the polynomials n (w) โˆˆ Z[w 1 , . . . , w 2n+1 ], of minimal total degree, satisfying the conditions (3.1) and (3.2). It is useful to extend this definition by taking 0 (w) = w. We may now establish our fundamental polynomial identity. (3.7) Proof. In the case n = 0, the product over i and j on the right-hand side of (3.7) is empty, and by convention we take this empty product to be 1. In such terms of use, available at https://www.cambridge.org/core/terms. 
https://doi.org/10.1112/S0025579317000134 Downloaded from https://www.cambridge.org/core. University of Bristol Library, on 04 Dec 2017 at 16:20:30, subject to the Cambridge Core circumstances, we see that 0 (ฯƒ 1,1 (z 1 ; h 1 )) = h 1 f 1 (z 1 ), and the conclusion of the lemma is immediate. In light of these observations, it is apparent that The quotient of the former polynomial by the latter cannot be zero, since this former polynomial is non-zero, by virtue of property (3.2) of Lemma 3.1. We therefore conclude that a non-zero polynomial n (z; h) โˆˆ Z[z, h] does indeed exist satisfying (3.7). This completes the proof of the lemma. It seems quite likely that additional potentially useful structure might be extracted from the polynomial identities provided by Lemma 3.2. For example, the relation Proof. We may write ฯˆ(z, h) = a d (z)h d + ยท ยท ยท + a 1 (z)h + a 0 (z), with a i โˆˆ Z[z] of degree at most d for 0 i d. The solutions to be counted are of two types. Firstly, one has solutions (z, h) with |z| X for which a i (z) = 0 for some index i, and secondly one has solutions for which a i (z) = 0 (0 i d). Given any fixed one of the (at most) 2X + 1 possible choices of z in a solution of the first type, one finds that h satisfies a non-trivial polynomial equation of degree at most d, to which there are at most d integral solutions. There are consequently at most d(2X + 1) solutions of this first type. On the other hand, whenever (z, h) is a solution of the second type, then z satisfies some non-trivial polynomial equation a i (z) = 0 of degree at most d. Since this equation has at most d integral solutions and there are at most 2X r + 1 possible choices for h, one has at most d(2X r + 1) solutions of the second type. The conclusion of the lemma now follows. We now announce an initial classification of intermediate coefficients. We define sets T n,m โŠ† Z[z 1 , . . . , z m , h 1 , . . . , h m ] for 0 m n + 1 inductively as follows. First, let T n,n+1 denote the singleton set containing the polynomial n (ฯƒ 1,n+1 (z; h), . . . , ฯƒ 2n+1,n+1 (z; h)). (4.1) Next, suppose that we have already defined the set T n,m+1 , and consider an element ฯˆ โˆˆ T n,m+1 . We may interpret ฯˆ as a polynomial in z m+1 and h m+1 with coefficients ฯ†(z 1 , . . . , z m ; h 1 , . . . , h m ). We now define T n,m to be the set of all non-zero polynomials ฯ† โˆˆ Z[z 1 , . . . , z m , h 1 , . . . , h m ] occurring as coefficients of elements ฯˆ โˆˆ T n,m+1 in this way. Note in particular that since the polynomial (4.1) is not identically zero, it is evident that each set T n,m is non-empty. This classification of coefficients yields a consequence of Lemma 4.1 of utility to us in ยง6. Suppose also that there exists ฯ† โˆˆ T n,m having the property that ฯ†(z 1 , . . . , z m ; h 1 , . . . , h m ) = 0. Proof. It follows from the iterative definition of the sets T n,m that any element ฯ† โˆˆ T n,m occurs as a coefficient polynomial of an element ฯˆ โˆˆ T n,m+1 , when viewed as a polynomial in h m+1 and z m+1 . Fixing any one such polynomial ฯˆ, we find that for the fixed choice of z 1 , . . . , z m , h 1 , . . . , h m presented by the hypotheses of the lemma, the polynomial ฯˆ(z; h) is a non-trivial polynomial in z m+1 , h m+1 . We therefore conclude from Lemma 4.1 that N m (X ) X r . This completes the proof of the lemma. ยง5. Classification of solutions. We now address the classification of the set S of all solutions of the system of equations with 1 z X and |h| X r . This we execute in two stages. 
Our discussion is eased by the use of some non-standard notation. When (i 1 , . . . , i m ) is an m-tuple of positive integers with 1 i 1 < ยท ยท ยท < i m 2s, we abbreviate (z i 1 , . . . , z i m ) to z i and (h i 1 , . . . , h i m ) to h i . In the first stage of our classification, when 0 n < s, we say that (z, h) โˆˆ S is of type S n when: (i) for all (n + 1)-tuples (i 1 , . . . , i n+1 ) with 1 i 1 < ยท ยท ยท < i n+1 2s, one has n (ฯƒ 1,n+1 (z i ; h i ), . . . , ฯƒ 2n+1,n+1 (z i ; h i )) = 0; and (ii) for some n-tuple ( j 1 , . . . , j n ) with 1 j 1 < ยท ยท ยท < j n 2s, one has nโˆ’1 (ฯƒ 1,n (z j ; h j ), . . . , ฯƒ 2nโˆ’1,n (z j ; h j )) = 0. Here, we interpret the condition (ii) to be void when n = 0. Finally, we say that (z, h) โˆˆ S is of type S s when the condition (ii) holds with n = s. It follows that every solution (z, h) โˆˆ S is of type S n for some index n with 0 n s. We denote the set of all solutions of type S n by S n . In the second stage of our classification, when 1 n < s we subdivide the solutions (z, h) โˆˆ S n as follows. When 0 m n, we say that a solution (z, h) โˆˆ S n is of type T n,m when condition (ii) holds for the n-tuple j, and: (iii) for all (m + 1)-tuples (i 1 , . . . , i m+1 ) with 1 i 1 < ยท ยท ยท < i m+1 2s and i l โˆˆ { j 1 , . . . , j n } (1 l m + 1), and for all ฯˆ โˆˆ T n,m+1 , one has ฯˆ(z i ; h i ) = 0; and terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1112/S0025579317000134 Downloaded from https://www.cambridge.org/core. University of Bristol Library, on 04 Dec 2017 at 16:20:30, subject to the Cambridge Core (iv) for some m-tuple (ฮน 1 , . . . , ฮน m ) with 1 ฮน 1 < ยท ยท ยท < ฮน m 2s and ฮน l โˆˆ { j 1 , . . . , j n } (1 l m), and for some ฯ† โˆˆ T n,m , one has ฯ†(z ฮน ; h ฮน ) = 0. Here, we interpret the condition (iv) to be void when m = 0. It follows that whenever (z, h) โˆˆ S n with 1 n < s, then it is of type T n,m for some index m with 0 m n. As before, we introduce the notation S n,m to denote the set of all solutions of type T n,m . We thus have the decomposition Having enunciated our classification of solutions in the previous section, we are equipped to estimate the number of solutions of the system (5.1) with 1 z X and |h| X r . This will establish Theorem 2.1, since by discarding superfluous equations if necessary, we may always suppose that t = 2s โˆ’ 1. Before embarking on the main argument, we establish a simple auxiliary result. is a polynomial of degree k 1. Let u be an integer with 1 u k, and let h i and a i be fixed integers for 1 i u with h = 0 and a i = a j (1 i < j u). Then for any integer n, the equation has at most k solutions in z. Proof. It suffices to show that the polynomial in z on the left-hand side of (6.1) has positive degree. We therefore assume the opposite and seek a contradiction. Suppose that f is given by where c k = 0. The polynomial on the left-hand side of (6.1) takes the shape In particular, we see directly that d k can vanish only if h 1 + ยท ยท ยท + h u = 0. Let i be a positive integer with i < k, and suppose that one has h 1 a kโˆ’ j 1 + ยท ยท ยท + h u a kโˆ’ j u = 0 (6.2) for all integers j with i < j k. Then the vanishing of d i implies that (6.2) holds also for j = i. Proceeding inductively in this way, we deduce that (6.2) is terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1112/S0025579317000134 Downloaded from https://www.cambridge.org/core. 
University of Bristol Library, on 04 Dec 2017 at 16:20:30, subject to the Cambridge Core satisfied for the entire range 1 j k. Restricting attention to the system of equations with indices k โˆ’ u + 1 j k, we find that this system of equations can hold simultaneously only when either h = 0, or else In the latter case, one has a i = a j for some indices i and j with 1 i < j u. Both these cases are excluded by the hypotheses of the statement of the lemma, so the system of equations (6.2) cannot hold for all 1 j k, and hence the polynomial F is non-trivial of positive degree. Consequently, the equation (6.1) has at most deg(F) k solutions in z. The proof of Theorem 2.1. We begin by examining the solutions of (5.1) of type S 0 , recalling that 1 z X and |h| X r . When (z, h) โˆˆ S 0 , one has h i f 1 (z i ) = 0 for 1 i 2s. Suppose that the indices i for which h i = 0 are i 1 , . . . , i a , and the indices j for which h j = 0 are j 1 , . . . , j b . In particular, one has a+b = 2s. By relabelling variables, if necessary, there is no loss of generality in supposing that j = (1, . . . , b) and i = (b + 1, . . . , 2s). There are O(X 2sโˆ’b ) possible choices for h i and z i with b + 1 i 2s, since h i = 0 for these indices i. Meanwhile, for 1 j b, one has f 1 (z j ) = 0, and so there are at most k 1 possible choices for z j . For each fixed such choice, since the polynomials f 1 , . . . , f t are well-conditioned, we find that f l (z j ) = 0 for some index l with 2 l t. Thus, the variables h 1 , . . . , h b satisfy a system of t linear equations in which there are non-vanishing coefficients. We deduce that when b 1, there are O((X r ) bโˆ’1 ) possible choices for h j and z j with 1 j b. Finally, combining these estimates for all possible choices of i and j, we discern that Next we consider the solutions of (5.1) of type S s . When (z, h) โˆˆ S s , there is an s-tuple i with 1 i 1 < ยท ยท ยท < i s 2s for which one has sโˆ’1 (ฯƒ 1,s (z i ; h i ), . . . , ฯƒ 2sโˆ’1,s (z i ; h i )) = 0. Write i for the s-tuple (i 1 , . . . , i s ) with 1 i 1 < ยท ยท ยท < i s 2s for which (6.6) in which h i , a i and n j are all fixed for all indices i and j. Consider the polynomial with index j = 1 of largest degree k 1 2s โˆ’ 2. If a i is zero for any index i, then we have z 1 = z i . Meanwhile, if a i = a j for any indices i and j with 2 i < j s, one sees that z i = z j . Consequently, in either of these scenarios, and also in the situation with h = 0, one finds via (6.5) that N (z i ; h i ) = 0, contradicting our assumption that N (z i ; h i ) = 0. We may thus safely assume that the conditions of Lemma 6.1 are satisfied for the polynomial f 1 with a 1 = 0. By the conclusion of the lemma, it follows that there are at most k 1 choices for z 1 satisfying (6.6), and hence card S s X (r +1)s+ฮต . (6.7) Next we consider the set S n,m for a given pair of indices n and m with 1 n < s and 0 m n. For any (z, h) โˆˆ S n,m , condition (ii) holds for some n-tuple j. By relabelling variables, if necessary, we may suppose that j = (1, . . . , n). Write j for the (2s โˆ’ n)-tuple (n + 1, . . . , 2s). Then given any one fixed choice of the variables z j , h j , we have nโˆ’1 (ฯƒ 1,n (z j ; h j ), . . . , ฯƒ 2nโˆ’1,n (z j ; h j )) = nโˆ’1 (ฯƒ 1,2sโˆ’n (z j ; โˆ’h j ), . . . , ฯƒ 2nโˆ’1,2sโˆ’n (z j ; โˆ’h j )) = 0. Thus, there is a fixed non-zero integer N with the property that nโˆ’1 (ฯƒ 1,n (z j ; h j ), . . . 
, ฯƒ 2nโˆ’1,n (z j ; h j )) = N , and we deduce from Lemma 3.2 that From here, the argument applied above in the case n = s may be employed mutatis mutandis to conclude that there are O(X ฮต ) possible choices for h 1 , . . . , h n , z 1 โˆ’ z 2 , . . . , z 1 โˆ’ z n . If we put a i = z i โˆ’ z 1 (2 i n) and a 1 = 0, then we find just as in our earlier analysis that z 1 satisfies a non-trivial polynomial equation of degree at most k 1 , whence there are at most k 1 choices for z 1 . We therefore conclude that, given any one fixed choice of z j , h j , the number of choices for z j , h j is O(X ฮต ). It thus remains to count the number of choices for z j and h j . Note in particular that, since (z, h) โˆˆ S n,m , we have the additional information that conditions (iii) and (iv) are satisfied. We may therefore suppose that there exists some ฯ† โˆˆ T n,m , and some m-tuple (ฮน 1 , . . . , ฮน m ) with n + 1 ฮน 1 < ยท ยท ยท < ฮน m 2s, for which With a fixed choice of ฮน, we may suppose further that for all i satisfying n + 1 i 2s and i โˆˆ {ฮน 1 , . . . , ฮน m }, and for all ฯˆ โˆˆ T n,m+1 , one has Given any such ฮน and ฯ†, there are O(X (r +1)m ) possible choices for z ฮน , h ฮน , with 1 z ฮน X and |h ฮน | X r , satisfying (6.8). We claim that for any fixed such choice, the number of possible choices for the integers z i and h i with n + 1 i 2s and i โˆˆ {ฮน 1 , . . . , ฮน m } is O((X r ) 2sโˆ’nโˆ’m ). In order to confirm this claim, observe that there is a polynomial ฯˆ โˆˆ T n,m+1 having the property that some coefficient of ฯˆ(z 1 , . . . , z m+1 ; h 1 , . . . , h m+1 ), considered as a polynomial in z m+1 and h m+1 , is equal to ฯ†(z 1 , . . . , z m ; h 1 , . . . , h m ). It then follows from (6.8) that the equation (6.9) is a non-trivial polynomial equation in z i and h i . We therefore deduce from Lemma 4.2 that for each fixed choice of z ฮน and h ฮน under consideration, and for each i with n + 1 i 2s and i โˆˆ {ฮน 1 , . . . , ฮน m }, there are O(X r ) possible choices for z i and h i satisfying (6.9). Thus we infer that there are O(X r (2sโˆ’nโˆ’m) ) possible choices for z i and h i with n + 1 i 2s for each fixed choice of z ฮน , h ฮน . Since the number of choices for ฮน and ฯ† โˆˆ T n,m is O(1), the total number of choices for z j and h j available to us is O(X (r +1)m ยท X r (2sโˆ’nโˆ’m) ). Furthermore, our discussion above showed that for each fixed such choice of z j , h j , the number of possible choices for z j , h j is O(X ฮต ). Thus altogether we conclude that card S n,m X r (2sโˆ’n)+m+ฮต . (6.10) By combining our estimates (6.3), (6.7) and (6.10) via (5.2), we discern that card S X (2sโˆ’1)r +1 + X (r +1)s+ฮต + Our preparations now complete, we establish the mean value estimates recorded in Theorems 1.1 and 1.2. Let X be a large positive number, and suppose that s and k are natural numbers with k 2 and 1 s (k 2 โˆ’ 1)/2. We define the exponential sum g r (ฮฑ; X ) by putting g r (ฮฑ; X ) = |h| s X r 1 z X e r r hฮฑ r + r + 1 r hzฮฑ r +1 + ยท ยท ยท + k r hz kโˆ’r ฮฑ k . (7.1) Also, when 1 d k, we put Then, with the standard notation associated with Vinogradov's mean value theorem in mind, we put We note that the main conjecture in Vinogradov's mean value theorem is now known to hold for all degrees. This is a consequence of work of the second author for degree 3, and for degrees exceeding 3 it follows from the work of Bourgain, Demeter and Guth (see [ In addition, one finds via orthogonality that for each integer ฮบ, one has where f j (z) = z kโˆ’r +1โˆ’ j (1 j k โˆ’ r + 1). 
LEMMA 7.1. When s is a natural number, one has I s,k,r (X ) Proof. Define ฮด j to be 1 when j = r , and 0 otherwise. We start by noting that the mean value I s,k,r (X ) counts the number of integral solutions of the system of equations with 1 x i , y i X (1 i s) and |h| s X r . We remark that the constraint on We next consider the effect of shifting every variable by an integer z with 1 z X . By the binomial theorem, for any shift z, one finds that (x, y) is a solution of (7.4) if and only if it is also a solution of the system where ฯ‰ j is 0 for 1 j < r and j r for r j k. Thus, for each fixed integer z with 1 z X , the mean value I s,k,r (X ) is bounded above by the number of integral solutions of the system with 1 u, v 2X and |h| s X r . On applying orthogonality, we therefore infer that I s,k,r (X ) where f(ฮฑ; z) = |h| s X r e(ฯ‰ r hฮฑ r + ฯ‰ r +1 hzฮฑ r +1 + ยท ยท ยท + ฯ‰ k hz kโˆ’r ฮฑ k ). The proof of the lemma is completed by reference to (7.1). The proof of Theorem 1.2. Let s, k and r be integers with k > r 1. Also, let ฮบ be a positive integer with ฮบ (k โˆ’ r + 2)/2. Observe that it suffices to restrict attention to the special case since one may interpolate via Hรถlder's inequality to recover the conclusion of the theorem for smaller values of s. Put v = r (r โˆ’ 1) 4ฮบ and u = s โˆ’ v. Furthermore, set so that s = v + w . In particular, we have w u. where U 1 = |h k (ฮฑ; 2X )| (u/w)k(k+1) dฮฑ (7.7) and U 2 = |h k (ฮฑ; 2X ) r (r โˆ’1) g r (ฮฑ; X ) 2ฮบ | dฮฑ. (7.8) A comparison of (7.7) with (7.2) leads us via (7.3) to the estimate U 1 X (u/w)k(k+1)/2+ฮต . (7.9) Meanwhile, by orthogonality, we discern from (7.8) that U 2 counts the number of integral solutions of the system of equations with 1 x, y 2X , 1 z X and |h| s X r . By interpreting (7.11) through the prism of orthogonality, it follows from (7.2) that the number of available choices for x and y is bounded above by J r (r โˆ’1)/2,r โˆ’1 (2X ). For each fixed such choice of x and y, it follows from (7.10) via orthogonality and the triangle inequality that the number of available choices for z and h is at most A ฮบ,r (s X ; f ). Thus we deduce from (7.3) and Theorem 2.1 that U 2 J r (r โˆ’1)/2,r โˆ’1 (2X )A ฮบ,r (s X ; f ) X r (r โˆ’1)/2+r (2ฮบโˆ’1)+1+ฮต . (7.12) On substituting (7.9) and (7.12) into (7.6), we infer that I s,k,r (X ) X ฮตโˆ’1 (X (u/w)k(k+1)/2 ) 1โˆ’1/(2ฮบ) (X 2r ฮบ+1+r (r โˆ’3)/2 ) 1/(2ฮบ) X s+ +ฮต , This completes the proof of Theorem The proof of Theorem 1.1. The conclusion of Theorem 1.1 is an immediate consequence of Theorem 1.2 in the special case r = 1. Making use of the notation of the statement of the latter theorem, we note that when k = 2l + 1 is odd, one may take ฮบ = (k +1)/2 = l +1, and we deduce that I s,k,1 (X ) X s+ฮต provided that s is a natural number not exceeding Meanwhile, when k = 2l is even, one may instead take ฮบ = l, and the same conclusion holds provided that s is a natural number not exceeding The desired conclusion therefore follows in both cases, and the proof of Theorem 1.1 is complete.
Psychological impact on healthcare workers, general population and affected individuals of SARS and COVID-19: A systematic review and meta-analysis Background Any infectious disease outbreak may lead to a negative detrimental psychological impact on individuals and the community at large, however; there was no systematic review nor meta-analysis that examined the relationship between the psychological/mental health impact of SARS and COVID-19 outbreak in Asia. Methods and design A systematic search was conducted using PubMed, EMBASE, Medline, PsycINFO, and CINAHL databases from 1/1/2000 to 1/6/2020. In this systematic review and meta-analysis, we analyzed the psychological impact on confirmed/suspected cases, healthcare workers and the general public during the Severe Acute Respiratory Syndrome (SARS) outbreak and Coronavirus disease (COVID-19) epidemics. Primary outcomes included prevalence of depression, anxiety, stress, post-traumatic stress disorder, aggression, sleeping problems and psychological symptoms. Result Twenty-three eligible studies (N = 27,325) were included. Random effect model was used to analyze the data using STATA. Of these studies, 11 were related to the SARS outbreak and 12 related to COVID-19 outbreaks. The overall prevalence rate of anxiety during SARS and COVID-19 was 37.8% (95% CI: 21.1โ€“54.5, P < 0.001, I2 = 96.9%) and 34.8% (95% CI: 29.1โ€“40.4), respectively. For depression, the overall prevalence rate during SARS and COVID-19 was 30.9% (95% CI: 18.6โ€“43.1, P < 0.001, I2 = 97.3%) and 32.4% (95% CI: 19.8โ€“45.0, P < 0.001, I2 = 99.8%), respectively. The overall prevalence rate of stress was 9.4% (95% CI: โˆ’0.4 โˆ’19.2, P = 0.015, I2 = 83.3%) and 54.1% (95% CI: 35.7โ€“72.6, P < 0.001, I2 = 98.8%) during SARS and COVID-19, respectively. The overall prevalence of PTSD was 15.1% (95% CI: 8.2โ€“22.0, P < 0.001) during SARS epidemic, calculated by random-effects model (P < 0.05), with significant between-study heterogeneity (I2 = 93.5%). Conclusion The SARS and COVID-19 epidemics have brought about high levels of psychological distress to individuals. Psychological interventions and contingent digital mental health platform should be promptly established nationwide for continuous surveillance of the increasing prevalence of negative psychological symptoms. Health policymakers and mental health experts should jointly collaborate to provide timely, contingent mental health treatment and psychological support to those in need to reduce the global disease burden. Systematic review registration CRD42020182787, identifier PROSPER. Introduction It is somewhat unsurprising that respiratory infectious diseases epidemics such as Severe Acute Respiratory Syndrome (SARS), Middle-Eastern Respiratory Syndrome (MERS), Ebola and COVID-19 have led to unprecedented global hazards jeopardizing individuals' physical and psychological wellbeing (1). Respiratory infectious diseases refer to virus spreading from person to person directly via aerosols/droplet nuclei, small droplets or virus laden secretions from larger droplets; or indirectly by contact with contaminated surfaces transmitted by airborne and droplet through our daily activities of living (2). The rapid transmission of these respiratory infectious diseases has inevitably triggered public fear of being infected, partly attributed to insufficient supply of personal protective gears and contact with confirmed/suspected cases (3). 
Without effective vaccine to curb the disease, contingent public health preventive measures including social distancing, quarantines, lockdown (4) may indirectly reinforce perceived social isolation, loneliness, anxiety and depression (5). Precisely, we selected SARS and COVID-19 as the primary research focus in this paper. SARS is a viral respiratory disease caused by SARSassociated coronavirus. It was first identified in November 2002 in Guangdong province of southern China and soon after, SARS was also transmitted to Toronto, Hong Kong, Taipei, Singapore, Hanoi and Vietnam. The case fatality for suspected cases of SARS was โˆผ3%. There were 8,098 confirmed cases in total, with 774 deaths during the 2003 SARS epidemic (6). Coronavirus disease is an infectious disease caused by a newly discovered coronavirus which has been declared a pandemic by the World Health Organization in March 2020 (7). Since October 2020, there have been over 40 million confirmed COVID-19 confirmed cases and 1.1 million deaths across the world (8). The case fatality of COVID-19 was โˆผ2.8%. Notwithstanding the soaring number of infected cases, COVID-19 has also triggered great economic recession across different countries. A cross-sectional study conducted during the COVID-19 pandemic in China (n = 1,599) showed that nearly 50% of the respondents rated their psychological beings as "moderately poor" to "severely poor" (9). Other studies also showed that natural disasters and social unrest may induce different levels of psychological distress (10). Respiratory infectious diseases have detrimental negative impact on the psychological wellbeing of the general public, healthcare workers and confirmed/suspected patients, especially at the initial stage of unprecedented outbreak. For instance, prevalence of depression among the general public was 37.4% (11), whilst 38.6 and 51.1% of healthcare workers and confirmed cases, respectively reported anxiety during the COVID-19 pandemic (12, 13). Existing systematic reviews on respiratory infectious disease primarily focused on a specific population, for example, healthcare workers (3); general public (14) during the COVID-19 pandemic or disease patients (15,16) during the SARS epidemic. Nonetheless, there is no systematic review examining the relationship between respiratory infectious disease epidemics outbreaks and mental health in different populations. Thus, this research gap gives us the impetus to conduct this systematic review and meta-analysis. The aims of this systematic review were threefold: first, to provide an integrated picture on how the SARS epidemics and COVID-19 pandemic affect mental wellbeing of confirmed/suspected patients, healthcare workers and the general public; second, to identify psychological impact and psychiatric symptoms on different populations in relation to the SARS and COVID-19 outbreak; third, to provide insights on the mental health needs of those affected individuals during the outbreak. Eligibility criteria The inclusion criteria for this systematic review included English full text observational studies which investigated the psychological impact of respiratory infectious disease outbreak (e.g., COVID-19, SARS). Sampling included confirmed/suspected patients with respiratory infectious diseases, general population, and healthcare workers, who experienced psychological symptoms during and after respiratory infectious diseases outbreak. Studies that included samples with other co-morbidity other than respiratory diseases were excluded. 
Outcome measurements Outcome measurements for this systematic review included the prevalence of depression, anxiety, stress and post-traumatic stress. Study selection The initial search yielded a primary pool of articles. Records were excluded if they did not meet the inclusion criteria. All records were saved in EndNote for removal of duplicates and blinded screening. Title and abstract screening was conducted manually by two independent reviewers to identify potentially eligible studies before full-text screening for eligibility. Any disagreement in the selection of articles was resolved by consensus with a senior researcher in the project team. Data extraction process Data were extracted from the qualified studies after screening. For each study, the following information was retrieved and saved in an Excel file: (1) authors and publication year; (2) study site; (3) study design; (4) sample size; (5) type of infectious disease; (6) target population; (7) demographic characteristics of the participants; (8) data analysis method; (9) measurement tools and cut-off values; and (10) prevalence of psychological symptoms and associated factors. Quality appraisal Quality appraisal of the selected studies was performed using the Joanna Briggs Institute (JBI) Critical Appraisal tools for observational studies, including cohort and cross-sectional studies, from the Faculty of Health Sciences at the University of Adelaide (18). The JBI tools assess study design, recruitment strategy, identification of confounding factors, reliability of outcome measurement and statistical analysis. The quality score of each study was calculated as (number of "Yes" responses / total number of applicable questions) × 100%. A paper was considered "low quality" if its JBI score was <49%, "moderate quality" if it fell between 50 and 69%, and "high quality" if it exceeded 70% (19). Data synthesis/analysis Data obtained from the included articles were stratified into groups according to the type of respiratory infectious disease. The data in each group were used to calculate the pooled prevalence and its 95% confidence interval (95% CI) using STATA statistical software version 11.0. Forest plots were used to display the pooled prevalence and 95% CI for the different groups. Prevalence of psychological symptoms was presented as a frequency (%) with its 95% CI. A generic inverse-variance method with a random-effects model was used to estimate pooled prevalence rates. Random-effects models were deemed appropriate when the number of studies included in a meta-analysis was low (<10). The I2 statistic was used to quantify the percentage of total variation across studies that was due to heterogeneity: I2 values between 25 and 50% were considered "low" heterogeneity, values between 50 and 75% "moderate", and values above 75% "high". A p-value of <0.05 was considered indicative of significant heterogeneity (20). We further performed subgroup analyses to synthesize our data. Tables were compiled for each category of respiratory infectious disease, covering the study population, psychopathological symptoms and associated factors, and measurement tools. Statistical analyses were conducted with STATA software version 11.0. Additionally, meta-regression was performed to investigate the source of heterogeneity.
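The data-synthesis paragraph above describes inverse-variance random-effects pooling of prevalence estimates with heterogeneity quantified by I2. A minimal Python sketch of one common instantiation, DerSimonian-Laird pooling of raw proportions, is given below for illustration only; the review itself used STATA 11.0, the function name pooled_prevalence_dl is ours, and the input numbers are hypothetical.

```python
import numpy as np

def pooled_prevalence_dl(p, n):
    """DerSimonian-Laird random-effects pooling of raw prevalences.

    p : study prevalences (0..1), n : sample sizes.
    Returns the pooled prevalence, a 95% CI and I2 (%).  Illustrative only;
    the review ran comparable routines in STATA.
    """
    p, n = np.asarray(p, float), np.asarray(n, float)
    var = p * (1 - p) / n            # within-study variance of a proportion
    w = 1 / var                      # fixed-effect (inverse-variance) weights
    p_fe = np.sum(w * p) / np.sum(w)
    Q = np.sum(w * (p - p_fe) ** 2)  # Cochran's Q
    k = len(p)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1 / (var + tau2)          # random-effects weights
    p_re = np.sum(w_re * p) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    i2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0
    return p_re, (p_re - 1.96 * se, p_re + 1.96 * se), i2

# Hypothetical prevalences from five studies (not the review's data):
est, ci, i2 = pooled_prevalence_dl([0.30, 0.42, 0.28, 0.51, 0.35],
                                   [220, 510, 180, 950, 400])
print(f"pooled = {est:.1%}, 95% CI = ({ci[0]:.1%}, {ci[1]:.1%}), I2 = {i2:.1f}%")
```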
Publication bias was assessed visually using funnel plots. Egger's test was also conducted to formally test for publication bias, since visual assessment of funnel-plot asymmetry alone can be unreliable. A p-value of <0.05 was considered statistically significant evidence of publication bias (21). Search result A total of 10,550 publications were identified, of which 4,344 duplicates were removed. Another 6,075 studies were excluded after title and abstract screening as they did not meet our inclusion criteria. This left 131 full-text studies to be assessed for eligibility. We excluded a further 108 articles, leaving 23 articles eligible for this systematic review and meta-analysis (Figure 1). Study characteristics Study characteristics and key findings are summarized in Tables 1, 2. The sample sizes of these 23 studies (N = 27,325, 59.3% female) ranged from 65 to 8,079 participants. Of these studies, 11 (47.8%) were related to the SARS outbreak and 12 (52.2%) to the COVID-19 outbreak. All study participants were 18 years old or above. Only two studies used a cohort design; all the remaining studies adopted a cross-sectional design. With the exception of one study from Canada, all study sites were in Asia [Asia (n = 22), China (n = 9), Hong Kong (n = 5), ...]. Quality appraisal results The JBI Critical Appraisal Checklist for Cross-Sectional Studies was used to assess the 20 cross-sectional studies; 17 articles were ranked as "High Quality" and 3 as "Low Quality" (Table 3). The JBI Critical Appraisal Checklist for Cohort Studies was used to assess the 2 cohort studies; one was ranked as "Moderate Quality" and the other as "Low Quality" (Table 4). Overall pooled prevalence of anxiety, depression and stress during the SARS epidemic and the COVID-19 pandemic (Table 5). Heterogeneity investigation The level of heterogeneity remained high after subgroup analysis (I2 = 98.1%). We did not perform meta-regression to investigate the source of heterogeneity due to collinearity among the studies. Publication bias A funnel plot and Egger's test were computed to examine publication bias. Each study's effect size was plotted against its standard error. Visual inspection revealed a symmetrical funnel plot, and no significant evidence of publication bias was detected (P-value = 0.80). Publication bias A funnel plot and Egger's test were computed to examine publication bias. Each study's effect size was plotted against its standard error. An asymmetrical funnel plot was observed on visual inspection, as one study lay on the left side whilst eight studies lay on the right side of the line representing the pooled prevalence (Figure 5). Additionally, Egger's test showed significant evidence of publication bias (P-value = 0.04). Lastly, we performed a trim-and-fill analysis to estimate the number of missing studies, which helped to adjust for publication bias (Figure 6). Publication bias A funnel plot and Egger's test were computed to examine publication bias. Each study's effect size was plotted against its standard error. An asymmetrical funnel plot was observed on visual inspection, as one study lay on the left side and nine studies on the right side of the line representing the pooled prevalence (Figure 8).
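The publication-bias assessment above combines funnel-plot inspection with Egger's regression test. The sketch below shows the standard form of Egger's test (regressing the standardized effect on precision and testing the intercept) using statsmodels; the helper eggers_test and the input values are hypothetical, and the review's own computations were carried out in STATA.

```python
import numpy as np
import statsmodels.api as sm

def eggers_test(effects, ses):
    """Egger's regression asymmetry test.

    Regress the standardized effect (effect / SE) on precision (1 / SE);
    a non-zero intercept suggests funnel-plot asymmetry.
    """
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    y = effects / ses
    X = sm.add_constant(1.0 / ses)
    fit = sm.OLS(y, X).fit()
    return fit.params[0], fit.pvalues[0]   # intercept and its p-value

# Hypothetical study effects and standard errors, for illustration only:
intercept, p = eggers_test([0.30, 0.42, 0.28, 0.51, 0.35, 0.44],
                           [0.031, 0.022, 0.034, 0.016, 0.024, 0.020])
print(f"Egger intercept = {intercept:.2f}, p = {p:.3f}  (p < 0.05 suggests asymmetry)")
```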
We performed a trim-and-fill analysis to estimate the number of missing studies, which helped to adjust for publication bias (Figure 9). Stress A total of 5 studies reported stress as a psychological impact of the respiratory disease outbreaks. All of them were conducted among medical staff: 2 during the SARS epidemic and 3 during the COVID-19 pandemic. The studies utilized different validated scales to measure stress, including the Depression Anxiety and Stress Scales (DASS-21), Impact of Event Scale-Revised (IES-R), Perceived Stress Scale (PSS-14) and Symptom Checklist-90-Revised (SCL-90-R). Prevalence of stress during the COVID-19 pandemic The prevalence rate of stress was reported in three studies and ranged from 32 ... (Figure 11). Prevalence of PTSD, distress and sleep problems during the SARS epidemic and the COVID-19 pandemic Apart from anxiety, depression and stress, PTSD and other psychological impacts such as distress and sleeping problems were reported in 8 studies. Of these, 6 studies investigated the prevalence of PTSD in healthcare workers (13,16,26,28) during the SARS epidemic, and it ranged from 2.0 to 41.7%. The analytic pooling of these rates generated an overall prevalence of 15.1% (95% CI: 8.2-22.0), P < 0.001, calculated by a random-effects model (P < 0.05), with significant between-study heterogeneity (I2 = 93.5%). Another 2 studies investigated PTSD in affected individuals (16,25). [Figure: funnel plot testing publication bias across the nine studies contributing to the pooled prevalence of depression during the SARS epidemic. Figure: trim-and-fill analysis for the pooled prevalence of depression during the SARS epidemic.] The prevalence of PTSD was higher among affected individuals [23.4% (95% CI: -11.6-58.3)] than among healthcare workers [12.7% (95% CI: 4.6-20.7)]. Nevertheless, affected individuals could not be compared with the general population owing to the unavailability of data in the meta-analysis. Moreover, the prevalence of distress among affected individuals was 68%, higher than that of healthcare workers (23.4%) during the SARS period. In contrast, the prevalence of sleeping problems among healthcare workers was 36.1% during the COVID-19 pandemic, higher than during SARS (28.4%) (Figure 12). Discussion In this systematic review and meta-analysis, we aimed to critically examine how the SARS and COVID-19 outbreaks affected the mental wellbeing of different populations (i.e., the general public, healthcare workers, and affected individuals) during the initial stage of each unprecedented outbreak. In our study, the pooled prevalence of anxiety during SARS and COVID-19 was 37.8 and 34.8%, respectively. The pooled prevalence of depression during SARS and COVID-19 was 30.9 and 32.4%, respectively. According to a recent report published by the World Health Organization (36), the global prevalence of anxiety and depression in 2015 was 3.6 and 4.4%, respectively, both lower than our findings. It is evident that infectious disease outbreaks have caused detrimental psychological impacts on different populations. The severity of the psychological impact of SARS and COVID-19 was broadly similar, in that the prevalence of anxiety in both outbreaks was slightly higher than that of depression. Our findings, however, contradict those of (36), whose global prevalence of anxiety was lower than that of depression.
Nonetheless, our findings were in line with a recent research conducted by (37) that the prevalence of anxiety and depression were 12.1 and 5.3%, respectively, despite our prevalence of anxiety during SARS and COVID-19 was more than 3-fold than that of (37). Regarding the healthcare workers, the psychological impact of COVID-19 was greater than SARS. For example, the pooled prevalence of stress during COVID-19 was higher compared to SARS. It was somewhat unsurprising as the state government and institutional support were protective factors to maintain good team spirit and resilience to combat any infectious disease outbreak (26). The sudden surge of COVID-19 pandemic with its rapid rate of transmission and high contagion in the globe, coupled with insufficient personal protective equipment and shortage of manpower were significant risk factors jeopardizing the mental health of frontline healthcare workers (38). As a matter of fact, the infection rates of COVID-19 among healthcare workers were three times more than that of SARS in China. By March 2020, there were more than 3,000 healthcare . FIGURE The prevalence of depression in the general population and among healthcare workers during COVID-pandemic. FIGURE The funnel plot to test publication bias of ten studies of pooled prevalence of depression during COVID-pandemic, . workers infected with COVID-19 in China (11) compared to only 1,000 infected healthcare workers infected with SARS in China (39). Besides, the psychological impact on affected individuals was more severe than that of healthcare workers. It was evident that the mortality and morbidity rate was high in SARS and that increased the perceived risk of different populations during COVID-19 pandemic (5). Perceived risk may also vary depending on job nature and educational attainment. Healthcare workers presumably had lower perceived risk as they were professionally trained in the management of public health crisis (40). According to past research that investigated the impact of SARS on SARS survivors, over 60% rated their perceived life threat as "moderately to extremely serious" (16). The traumatic experience of those SARS survivors may put them in a more vulnerable position when they were confronted with another public health crisis. Lastly, the psychological impact on healthcare workers was more severe than the general public in COVID-19. Healthcare workers had a much higher chance of exposure and susceptibility to this new virus compared to the general public as the former had direct patient care to confirmed/suspected COVID-19 patients (41). Due to shortage of manpower, some frontline healthcare workers had to work long hours shifts without decent supply of personal protective equipment in the clinical settings. As such, the risk of infection and perceived stress level was higher among healthcare workers. Due to high contagion nature of COVID-19, healthcare workers may have persistent fear of transmitting the virus to their families and friends and thus, they tended to self-isolate themselves or in quarantines when they were off work. Prolonged self-isolation without social support may worsen their mental wellbeing leading to increased level of stress and depression during the COVID-19 pandemic (42). Implications The psychological impact brought by infectious disease outbreaks should not be under-estimated. 
Public health policymakers may consider developing a surveillance and monitoring system worldwide to continuously monitor the situation of an infectious disease outbreak (43). With the development of surveillance systems, stakeholders are more capable to detect and tackle public health emergency globally. Insufficient knowledge and unclear information of any disease epidemic may exacerbate anxiety and depression in the general public (44, 45). Thus, the general public should be wellinformed about the etiology, symptoms of the respiratory infectious disease, preventive measures (e.g., social distancing, face masks wearing, proper handwashing) and treatment of any infectious diseases outbreaks to reduce their level of anxiety, stress and depression (46). Myths and misconceptions should be promptly clarified by the health authority to reduce the anxiety level of the public. Psychological intervention such as remote counseling, telecare and effective online stressreduction strategies should be promoted during the pandemic era to maintain the mental wellbeing of different populations (14). Health authority should increase the transparency of professional mental health seeking online platform via digital Limitations There were several limitations needed to be addressed. At the time of reporting, COVID-19 pandemic still exists and thus, we cannot include the latest publications in our systematic review and meta-analysis beyond June 2020 (our cut-off period registered in PROSPER). Nevertheless, we used PubMed and the same search terms to identify the latest publication from 1 June 2020 and 30 July 2021. A total of 14 articles were identified (N = 9,706). Of which, 4 papers were on affected individuals (47-50) (n = 811) and another 4 [(51-54)] on healthcare workers (n = 2,298); 6 on general public (55-60) (n = 6,597) across Asia (Taiwan & Australia), Europe (Italy, Poland & Turkey) and other countries (USA, Brazil, & Saudi Arabia). Prevalence of anxiety ranged from 8.1 to 92.1% while prevalence of depression ranged from 2.1 to 50%. Prevalence of stress ranged from 6.84 to 48.3%. Prevalence of PTSD ranged from 11.0 to 40.3% across these extracted studies (please refer to Supplementary Tables 1-3). There seems to be a huge variation regarding the prevalence of depression, anxiety, stress and PTSD, this phenomenon is likely to be attributed by the number of infected suspected COVID-19 cases during the study period. Of particular note is that there is only 1 cross-sectional study conducted on healthcare workers in Taiwan (51) which compared perceived stress between COVID-19 and SARS. All the other 13 selected studies were all focused on COVID-19. It is noteworthy that these recent studies utilized various psychological measurement tools which makes metaanalysis impossible. Second, we encountered difficulty in comparing affected individuals and general population between COVID-19 and SARS due to unavailability of data. Third, there was a high heterogeneity of results attributed to the use of different measurement tools and variables in selected articles. Fourth, almost all selected studies in this review used cross-sectional design and thus, the long-term psychological impact on different populations cannot be examined. Lastly, there was only one study originated from Canada, and the remaining 22 papers were sourced from Asia. Results from our systematic review and meta-analysis could be biased and thus, needed to be interpreted with caution. 
Majority of studies were Asian oriented, where the quarantine measures adopted were somewhat similar, such as compulsory facemask wearing, social distancing, and stay home advice. All these measures, collectively, influenced the negative mental wellbeing of studied population. As a result, independent effect of individual countries' precautionary measure were unable to be totally reflected in the selected studies and hence, the variation in psychological wellbeing among individuals residing in different countries was not compared. Conclusion The epidemics of SARS and COVID-19 has brought about high levels of negative detrimental impact to individuals and the community at large. Psychological interventions and contingent digital mental health platform should be promptly established nationally for continuous surveillance of the increasing prevalence of negative psychological symptoms. Health policymakers and mental health experts should jointly collaborate to provide timely, contingent psychiatric and psychological support to those in need to reduce the global disease burden.
Building a completely positive factorization A symmetric matrix of order n is called completely positive if it has a symmetric factorization by means of a rectangular matrix with n columns and no negative entries (a so-called cp factorization), i.e., if it can be interpreted as a Gram matrix of n directions in the positive orthant of another Euclidean space of possibly different dimension. Finding this factor therefore amounts to angle packing and finding an appropriate embedding dimension. Neither the embedding dimension nor the directions may be unique, and so many cp factorizations of the same given matrix may coexist. Using a bordering approach, and building upon an already known cp factorization of a principal block, we establish sufficient conditions under which we can extend this cp factorization to the full matrix. Simulations show that the approach is promising also in higher dimensions. Introduction A symmetric matrix is called completely positive, if it admits a symmetric rectangular matrix factorization with no negative entries; Berman and Shaked-Monderer (2003) is a monograph which focuses on linear-algebraic and graph theoretic properties of this matrix class. The concept-the notion was probably coined by Hall (1963), see also Diananda (1962)-had its origins from applications in combinatorics Dedicated to Walter J. Gutjahr Hall 1963). Further fields of application include physics, biology and statistics (Markovian models of DNA evolution, Kelly 1994), project management (stochastic and robust optimization, Natarajan 2011), and economic modeling (see Gray and Wilson 1980), and in recent years optimization applications became increasingly important. We need some notation. By R n we denote n-dimensional Euclidean space, by R n + its positive orthant, (column) vectors v โˆˆ R n are always in boldface while v denotes their transpose (rows). The zero matrix of appropriate size is always denoted by O, and A โ‰ค O for a matrix A of the same size means that no entry A i j is positive, while A โ‰ค B means A โˆ’ B โ‰ค O. I d denotes the d ร— d identity matrix, with its ith column e i โˆˆ R d . For a scalar t, we denote by t + = max {0, t} while for a matrix A we put A + = [(A i j ) + ] i, j . Further, we designate by A โ€ข2 = [(A i j ) 2 ] i, j the Hadamard square of A. The cone of all symmetric positive-semidefinite matrices of some fixed order is denoted by P, and the cone of all symmetric matrices with no negative entries by N . Finally, let X 1/2 โˆˆ P denote the symmetric square-root of a matrix X โˆˆ P. Note that even if X โ‰ฅ O, we may have negative entries in X 1/2 . If however X 1/2 โ‰ฅ O, then X belongs to the the cone C of all (symmetric) completely positive matrices C = X โˆˆ P : X = F F for some possibly rectangular matrix F โ‰ฅ O . We call X = F F a completely positive (cp) factorization. One immediate geometric interpretation is as follows: write F = [f 1 . . . , f n ] with f i โˆˆ R m + . Then X i j = f i f j for all i, j, so X is the Gram matrix of the directions f i . In other words, finding F (and m) amounts to find a space R m and directions in its positive orthant R m + such that X describes (length and) angles of this direction, i.e. solving an angle packing problem. The minimal number of rows in F yielding a cp factorization of X is called the cprank of X . In light of above interpretation, determining the cp-rank means to find the smallest embedding dimension such that the angle packing problem has a solution. 
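Since the definition of complete positivity is constructive (X = FᵀF with F having no negative entries), a small Python sketch may help fix ideas: it builds a cp matrix from a random nonnegative factor and checks the easy necessary conditions (symmetric, positive semidefinite, entrywise nonnegative, i.e. doubly nonnegative). The helper names are ours; note that for n ≥ 5 double nonnegativity is no longer sufficient for complete positivity, which is precisely what makes cp factorization hard.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_cp_matrix(n, m):
    """Build X = F^T F from a random nonnegative m x n factor F,
    so X is completely positive by construction (cp-rank <= m)."""
    F = rng.random((m, n))
    return F.T @ F, F

def is_doubly_nonnegative(X, tol=1e-10):
    """Necessary conditions for complete positivity: X is symmetric,
    positive semidefinite and has no negative entries.  For n >= 5 these
    conditions are not sufficient."""
    sym = np.allclose(X, X.T, atol=tol)
    psd = np.min(np.linalg.eigvalsh((X + X.T) / 2)) >= -tol
    nonneg = X.min() >= -tol
    return sym and psd and nonneg

X, F = random_cp_matrix(n=4, m=6)
print(is_doubly_nonnegative(X))   # True
print(np.allclose(X, F.T @ F))    # the cp factorization we built
```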
This embedding dimension, i.e., the cp-rank can exceed the order n of X (i.e., the number of directions). But the cp-rank is bounded by n+1 2 โˆ’ 4 โˆผ n 2 2 , and this bound is asymptotically tight Shaked-Monderer et al. 2015), refuting a conjecture suggesting n 2 /4 published 20 years ago (Drew et al. 1994). An alternative format of cp factorization can be obtained using F rather than F, i.e., writing Thus, searching for a minimal cp factorization would amount searching for the shortest sum in above additive decomposition into rank-one matrices x i x i built upon non-negative vectors x i โˆˆ R n + . With this algebraic approach, we may see why having such a cp factorization is important: suppose X * emerges as the solution of a copositive optimization problem (see below) which is a conic approximation or reformulation of, say, a non-convex quadratic optimization prob-lem z * = min xโˆˆR n + x Qx : Ax = b over a polyhedron. This is an NP-hard problem class. It turns out that under weak assumptions (Burer 2009), any of the vectors x i from a rank-one summand x i x i occurring in a cp factorization of X * will be an optimal (or an approximate) solution to z * . So both representations have their advantages and can easily be transformed into each other. In the sequel, we will adhere to the format X = F F suggested by the angle packing interpretation. As indicated above, finding a cp factorization of a given matrix can yield good or even optimal solutions to hard optimization problems. Moreover, characteristics like embedding dimension for the angle packing problem will give important information on the geometry of the related conic problem. Recall that in any linear optimization problem over a convex set, the solution (if it exists) is attained at the boundary of the feasible set, and indeed all the complexity of the reformulated hard problems is shifted to the analysis of that boundary. However, unlike the boundary of the feasible sets for LPs and SDPs (both problem classes solvable in polynomial time to arbitrary accuracy), this boundary can contain matrices X of full rank and those with all entries strictly positive. The cp-rank and more general, any cp factorization of X , can give more information on X with respect to this geometrical structure and at the same time provide alternative (approximate) solutions. More detail will be provided in Sect. 2 below. In this paper we aim at obtaining a cp factorization of a symmetric (n + 1) ร— (n + 1) matrix Y = H H by a bordering approach: we assume that we know a cp factorization for a principal n ร— n submatrix X = F F of Y , and derive sufficient conditions under which we can specify a suitable factor H . This is the content of Sects. 3 and 7. These sufficient conditions generalize and complement previous investigations of the same kind in Salce and Zanardo (1993), leading to a structurally different cp factorization (essentially, the role of a lower block-triangular factor in Salce and Zanardo (1993) is now played by an upper block-triangular one). In Sect. 4, we take an optimizationinspired approach, leading to LP-or QP-based relaxations of the main property, in order to systematically find out whether or not this new sufficient condition is satisfied. This approach also may enable us to efficiently search for constellations (i.e., selecting the bordering row) where the condition is met. A small empirical study provided in Sect. 5 shows that our approach is more promising. In Sect. 
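The equivalence between the two formats just discussed can be seen directly: each row of F contributes one rank-one term built on a nonnegative vector, so the length of the additive decomposition equals the number of rows of F, and a minimal cp factorization corresponds to the shortest such sum. A small sketch with illustrative data (not from the paper):

```python
import numpy as np

F = np.array([[1.0, 0.5, 0.0],     # illustrative nonnegative r x n factor, r = 4, n = 3
              [0.0, 0.5, 1.0],
              [0.2, 0.1, 0.3],
              [0.4, 0.0, 0.2]])
X = F.T @ F

# Alternative format: X as a sum of rank-one matrices x_i x_i^T with x_i >= 0
# in R^n_+, one term per row of F; the cp-rank is the length of the shortest
# such sum over all cp factorizations of X.
rank_one_sum = sum(np.outer(row, row) for row in F)
assert np.allclose(rank_one_sum, X)
print("number of rank-one terms (rows of F):", F.shape[0])
```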
6 we show that our approach indeed suffices to obtain a cp factorization for all completely positive 3 ร— 3 matrices, establishing in an elementary way the well-known fact that the cp-rank of these does not exceed three. This has been known before, but our approach seems less involved than the previous arguments. Inspired by this, we move on in Sect. 7 to discuss extensions in higher dimensions. Motivation and preprocessing Since the explicit introduction of copositive optimization (or copositive programming) by Bomze et al. (2000), Quist et al. (1998), we observe a rapid evolution of this field. One reason for the success is culminating in the important paper (Burer 2009) where it is shown that every mixed-binary (fractional) quadratic optimization problem can be written as a copositive optimization problem, which is a linear optimization problem over the cone C subject to linear constraints, see Amaral and Bomze (2015), Amaral et al. (2014), Bomze and Jarre (2010) and Burer (2009), and recently many similar copositive representation results followed. For some surveys, we refer to Bomze (2012) Bomze et al. (2012), Burer (2012) and Dรผr (2010). The terminology copositive optimization has its justification as the dual cone of C coincides with the cone of copositive matrices of the same order. Recall that a symmetric n ร— n matrix is said to be copositive if it generates a quadratic form taking no negative values over the positive orthant R n + . The usual conic approximation algorithms for solving a copositive (or completely positive) optimization problem use (subsets of) the outer approximation P โˆฉ N of C. However, often copositive optimization problems are used to reformulate hard (mixed-binary) quadratic optimization problems which in turn may encode combinatorial optimization problems (see Bomze et al. 2000;Burer 2009;Natarajan 2011;Quist et al. 1998 and references therein). The optimal solution of the latter is encoded by an r ร— n matrix F with no negative entries, in a cp factorization X = F F โˆˆ C. Once we arrive at a solution X โˆˆ P โˆฉ N of the relaxation, we should try to find this F, not only to show that in this instance, the relaxation is exact, but also to retrieve the solution of the original problem (the so-called rounding procedure). Very few recent papers deal with approximating C from within: a theoretical characterization of interior points of C is presented in Dickinson (2010) and Dรผr and Still (2008), while algorithmic aspects of factorization are the focus of Jarre and Schmallowsky (2009). Suppose that we want to find an explicit factorization of an (n + 1) ร— (n + 1) matrix Y which we scale such that building upon an already known cp factorization of the principal block X . Note that as C โŠ† P, we can immediately spare our efforts if one diagonal entry of Y is negative. Similarly, positive-semidefiniteness of Y implies that a zero diagonal entry forces the whole row and column to be zero, in which case we can remove it and continue with a smaller principal submatrix with strictly positive diagonal elements. In the end, we just have to enlarge the factor H by suitably adding zero columns, to obtain the original Y . Hence we may and do assume Y ii > 0 for all i. Next, we observe that with any positive-definite diagonal matrix and any factorization Y = H H , we get another one of Y = (H ) (H ) of the same size. As also โˆ’1 is positive-definite, we may use ii = Y โˆ’1/2 ii > 0 and concentrate on the case where the diagonal of Y contains only unity entries. 
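A minimal sketch of this preprocessing, on illustrative data: reject a negative diagonal entry, drop indices whose diagonal entry is zero (their rows and columns must vanish by positive semidefiniteness and can be restored later by inserting zero columns into the factor), and rescale with the positive-definite diagonal matrix with entries Y_ii^(-1/2) so that the remaining matrix has unit diagonal.

```python
import numpy as np

def preprocess(Y, tol=1e-12):
    """Diagonal-based preprocessing sketch for a symmetric matrix Y."""
    d = np.diag(Y).copy()
    if np.any(d < -tol):
        raise ValueError("negative diagonal entry: Y cannot be completely positive")
    # A zero diagonal entry in a PSD matrix forces its whole row and column to vanish.
    keep = d > tol
    Y_red = Y[np.ix_(keep, keep)]
    # Rescale with Delta = diag(Y_ii^{-1/2}); the rescaled matrix has all-ones diagonal.
    # Any factor H of the rescaled matrix transforms back to a factor of Y_red via H @ Delta^{-1}.
    delta = 1.0 / np.sqrt(np.diag(Y_red))
    Y_unit = delta[:, None] * Y_red * delta[None, :]
    return Y_unit, keep, delta

Y = np.array([[4.0, 2.0, 0.0],
              [2.0, 9.0, 0.0],
              [0.0, 0.0, 0.0]])
Y_unit, keep, delta = preprocess(Y)
print(np.round(Y_unit, 3))   # unit diagonal, index with zero diagonal removed
```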
This is the starting point of (1). Next we proceed as in Dickinson and Dรผr (2012) which offers an algorithmic procedure to obtain a minimal cp factorization, applied to Y with a special sparsity pattern. As a preliminary step, we check the necessary condition Y โˆˆ P โˆฉ N ; since X โˆˆ C โŠ† P โˆฉ N , we merely must check x โˆˆ R n + to ensure Y โˆˆ N ; and X โˆ’ xx โˆˆ P to ensure Y โˆˆ P, by Schur complementation. Note that positive-semidefiniteness of Y โ‰ฅ O and Y ii = 1 implies Y i j โˆˆ [0, 1] for all i, j, therefore any possible cp factorization matrix H (i.e. satisfying H H = Y ) also must have entries between zero and one. So we may and do assume in the sequel that x โˆˆ (ker X ) โŠฅ or equivalently, that where X + denotes the Moore/Penrose generalized inverse (MPGI) of any matrix. Various cp factorization strategies The cp factorization problem has received considerable attention as a special (symmetric) variant of the nowadays heavily researched nonnegative matrix factorization (NMF) problem. For a recent survey on NMF see, e.g. Wang and Zhang (2013). In this context, the cp factorization problem is also addressed as Symmetric NMF, e.g. in He et al. (2011) where a simple parallelizable iterative procedure is proposed which is shown to converge to a stationary point (not the global solution) of the (nonconvex) least squares approximation problem min H Y โˆ’ H H , with application to probabilistic clustering in large instances. This article fits into the tradition of convergence analysis in Matheuristics, as performed masterly in Gutjahr (1995); see also Gutjahr (2010). In contrast to these approaches, we focus on finite, not on iterative methods, although possibly employing iterative solutions to (easy) subproblems. For many other approaches, we refer to Anstreicher and Burer (2010), Berman and Hershkowitz (1987), Berman and Xu (2004), Berman and Rothblum (2006), Dickinson and Dรผr (2012), Shaked-Monderer (2009), Shaked-Monderer (2013) and Sponsel and Dรผr (2014) and the recent review , to cite just a few. Another recent paper (Zhou and Fan 2015) deals with algorithmic strategies for factorization based upon conic optimization, for random instances of relatively moderate order n โ‰ฅ 10. The original cp factorization problem can also be seen directly: to obtain Y = H H with an s ร— (n + 1) matrix H โ‰ฅ O, solve a system of n+2 2 quadratic equations in s(n + 1) nonnegative variables. As detailed above, we can bound only s โ‰ค n+2 2 โˆ’ 4 and hence need in the worst case 1 2 (n 3 + 4n 2 โˆ’ 3n + 6) nonnegative variables, which from an algorithmic perspective is practically prohibitive even for small instances. As already announced, we here assume that we know the cp factorization Since X i j โˆˆ [0, 1], also F i j โˆˆ [0, 1] for all entries. As mentioned earlier, cp-rank(X ) is defined as the smallest r such that (3) holds. Hence given any F satisfying (3), we always have cp-rank(X ) โ‰ค r , and since the cp-rank may also be of order n 2 , we can have r > n if n > 4. See Bomze et al. (2014) for recent bounds in the range n โ‰ค 11 and Bomze et al. (2015) for all larger n. Recall that for the n ร— r matrix F , the MPGI is given by This concept enables us to study the linear equation system F y = x in y. It always has a solution since F (F ) + x = F F X + x = X X + x = x by (2), and the general solution is of the form where the latter inequality follows from X โˆ’ xx โˆˆ P. Hence there is always a solution y to F y = x with y y โ‰ค 1. 
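A sketch of these preliminary checks and of the least-squares solution, on illustrative data (not normalized to unit diagonal): membership of the bordered matrix Y in N reduces to x ≥ 0, membership in P reduces to positive semidefiniteness of the Schur complement X − xxᵀ, and p = F X⁺ x is the least-squares solution of Fᵀ y = x, whose general solution is p + Pu with P the orthoprojector onto ker(Fᵀ).

```python
import numpy as np

# Illustrative data: a known cp factorization X = F^T F and a bordering vector x.
F = np.array([[0.9, 0.3, 0.1],
              [0.2, 0.8, 0.4],
              [0.1, 0.2, 0.7]])
X = F.T @ F
x = np.array([0.3, 0.4, 0.2])

# Necessary conditions for Y = [[1, x^T], [x, X]] to be completely positive:
assert np.all(x >= 0)                                  # Y in N (X is already in N)
S = X - np.outer(x, x)                                 # Schur complement of the (1,1) entry
assert np.all(np.linalg.eigvalsh(S) >= -1e-10)         # Y in P

# Least-squares solution of F^T y = x via the Moore-Penrose generalized inverse:
# p = (F^T)^+ x = F X^+ x, well defined since x lies in (ker X)^perp here.
p = F @ np.linalg.pinv(X) @ x
assert np.allclose(F.T @ p, x)
print("p =", np.round(p, 4), " p^T p =", round(float(p @ p), 4))   # p^T p <= 1 since X - xx^T is PSD
```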
This was proved, e.g., in Salce and Zanardo (1993, Lem.1.1,Cor.1.3). Sometimes y = p is the only choice, but for r > n there could be better choices, see below. If p happens to have no negative entries, Y is said to have the property of positivity of least squares solution (PLSS) in Berman and Shaked-Monderer (2003, pp.98ff). PLSS ensures that the lower triangular block factorization reviewed in Sect. 3.1 below works, but this property is quite restrictive as will be documented by our empirical study. Anyhow, if X is diagonal as in a related article on cp factorization (Kalofolias and Gallopoulos 2012), PLSS holds: Lower triangular blocks Complete positivity of Y as in (1) is characterized in (Salce and Zanardo 1993, Prop.1.4) as follows (with a slight change of notation): there is an r ร— n matrix F 0 with no negative entries, and a vector y 0 โˆˆ R r + with y 0 y 0 = 1 such that F 0 y 0 = x and X = F 0 F 0 . Since completely positive factorizations are by no means unique, knowledge of F in (3) does not imply that the above F 0 and y 0 are known. In particular, it is not guaranteed that F = F 0 or y = y 0 . They can have even different sizes. If we would like to search for (F 0 , y 0 ) directly, this amounts to solving a system of, again, n+1 2 + n + 1 = n+2 2 quadratic equations in now (r + 1)n โ‰ค 1 2 (n 3 + n 2 โˆ’ 6n) nonnegative variables, slightly less than the original problem but still prohibitively demanding. Anyhow, assume now that there is a nonnegative solution y โˆˆ R r + to F y = x with y y โ‰ค 1. Then we can use factors H with lower block triangular structure as follows: This can be checked by straightforward calculation. From an algorithmic perspective it could pay to first try (5), e.g. by solving the linear optimization problem or even the convex QP variant with an objective y y. Since H has one more row than F, the cp-rank increment from X to Y cannot exceed one, if F provides a minimal cp factorization. Moreover, if in this situation y y = 1, then the cp-rank of Y is even equal to that of X , as observed in Berman and Shaked-Monderer (2003, Exerc. 3.7, p.146). Upper triangular blocks Unfortunately, unless F = F 0 by chance, problem (6) can be infeasible or its feasible set can have empty intersection with the unit ball. So we will propose an alternative approach where we can allow for factorizations X = F F and vectors y โˆˆ R r such that F y = x, where some entries of y may be negative. In this case, there are always solutions inside the unit ball, as detailed above after (4). where โ‰ค is understood entrywise. Then gives an explicit completely positive factorization of Y . Proof It is easy to verify that I r โˆ’ yy โˆˆ P and that So ฯ•(y y)G = ฯ•(y y)F โˆ’y(F y) = ฯ•(y y)F โˆ’yx , and (a) follows. The matrix product in (b) equals 1 x x xx + G G , and the lower right block is xx (7) is trivially satisfied, and (8) and (5) coincide. But only (7) also still holds for small max j x j > 0 and fixed positive F, whereas (6) could be immediately rendered infeasible even for small positive departures of x from o, because feasibility of (6) is a homogeneous property; notice that from the 1600 cases generated in Sect. 5, only 9 satisfied y โ‰ฅ o. This would not hurt for other purposes, e.g. Natarajan and Teo (2017) aiming at a more general factorization, but for a lower triangular factorization (5) it is essential. So only Theorem 3.1 may be viewed as a quantitative perturbation result, dealing with departures from the trivial block-diagonal case x = o. 
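The displayed formulas (5), (7) and (8) are not legible in this extract, so the sketch below reconstructs the upper block-triangular construction only from the proof outline above, under the following assumptions: φ(t) = 1 + √(1 − t) (consistent with φ(0) = 2 and φ(1) = 1 stated later), G = F − y xᵀ/φ(yᵀy), H obtained by stacking the row [1, xᵀ] above [0 | G], and condition (7) read as the entrywise inequality y xᵀ ≤ φ(yᵀy) F. These are reconstructions for illustration, not the paper's verbatim formulas.

```python
import numpy as np

def phi(t):
    # Assumed explicit form of phi: decreasing, concave, with phi(0) = 2 and phi(1) = 1.
    return 1.0 + np.sqrt(max(1.0 - t, 0.0))

def upper_block_factor(F, x, y, tol=1e-10):
    """Reconstructed upper block-triangular construction: given X = F^T F and a
    solution y of F^T y = x with y^T y <= 1, set G = F - y x^T / phi(y^T y);
    if G has no negative entries, then H = [[1, x^T], [0, G]] is a nonnegative
    factor with H^T H = [[1, x^T], [x, X]]."""
    t = float(y @ y)
    G = F - np.outer(y, x) / phi(t)
    if np.any(G < -tol):
        return None                        # entrywise sufficient condition not met
    r, n = F.shape
    H = np.zeros((r + 1, n + 1))
    H[0, 0] = 1.0
    H[0, 1:] = x
    H[1:, 1:] = G
    return H

# Illustrative data (shapes as in the text; the numbers are not from the paper).
F = np.array([[0.9, 0.3, 0.1],
              [0.2, 0.8, 0.4],
              [0.1, 0.2, 0.7]])
X = F.T @ F
x = np.array([0.3, 0.4, 0.2])
y = np.linalg.solve(F.T, x)                # unique solution of F^T y = x (F square, nonsingular)

H = upper_block_factor(F, x, y)
if H is not None:
    Y = np.zeros((4, 4))
    Y[0, 0], Y[0, 1:], Y[1:, 0], Y[1:, 1:] = 1.0, x, x, X
    assert np.all(H >= 0) and np.allclose(H.T @ H, Y)
    print("nonnegative factor H with", H.shape[0], "rows found")
```

The key algebraic fact behind the check is that (I − yyᵀ/φ(yᵀy))² = I − yyᵀ whenever yᵀy ≤ 1, so GᵀG = Fᵀ(I − yyᵀ)F = X − xxᵀ and the lower-right block of HᵀH indeed equals xxᵀ + GᵀG = X.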
As mentioned, cp factorizations need not be unique, and with respect to some criteria, the one proposed above need not be optimal. However, if the cp factorization of X was already (close to) minimal, then also above factorization is (close to) minimal, because the increment of embedding dimension for the angle packing problem is one. Section 7 below presents strategies to increase this increment, but for staying close to minimal, above strategy should be tried first. The next section deals with condition (7). We will specify algorithmic approaches to satisfying this condition, and also show how to obtain an explicit factorization for all completely 3 ร— 3 matrices in this way. En route to satisfying (7) The function ฯ• as defined in Theorem 3.1 is decreasing, concave, and satisfies ฯ•(0) = 2 as well as ฯ•(1) = 1. Therefore we have the following estimates The approximation of lowest order uses the constant underestimation in (10), which results in linear constraints: so, any y โˆˆ R m with satisfies (7). The first-order approximation uses the linear underestimation in (10). This yields (inhomogeneous) convex quadratic constraints: Likewise, (12) implies (7). Finally, we can rewrite (7) without square roots: it is evident by elementary calculations that is equivalent to (7). As mentioned above, only the last conditions in (11), (12) or (13) can be violated by y = p. To increase our chances of satisfying (7), we therefore employ an optimization approach in that we allow for y = p + Pu as in (4) with the orthoprojector onto ker (F ) = (imF) โŠฅ . Now y y โ‰ค 1 is no longer guaranteed, but we know by p = F X + x โŠฅ Pu and P P = P that and (Salce and Zanardo 1993, Lem. 1.1,Cor. 1.3) guarantees that at least p p โ‰ค 1. Hence we consider the optimization problems min y y : y โˆˆ R r , y satisfies ( ) , where ( ) stands for (11), (12) or (13). Problem (18) can be rewritten as a convex quadratically constrained QP, introducing more variables and more (linear) constraints: Some empirical evidence If F is square (implying cp-rank(X ) โ‰ค n) and nonsingular, y = F โˆ’ x is the unique solution to F y = x (here F โˆ’ = (F ) โˆ’1 ). Then the problems (16), (17) and (18) are of no use, and one can check directly whether or not y = p satisfies (11), (12), or (13). In a small simulation study, 1600 such square F matrices with positive entries were generated randomly and X = F F. For a parameter ฯƒ โˆˆ {1, 1.1, 2, 100}, a vector x โˆˆ R n + was drawn at random and rescaled such that x X โˆ’1 x = 1 ฯƒ . Obviously ฮป min (X โˆ’ xx ) increases with ฯƒ while ฯƒ = 1 means that the generated matrix Y โˆˆ P โˆฉN is singular. It turns out that large values of ฯƒ favour the proposed approach even for relatively large instances, but even in singular cases a success is often encountered, particularly for moderate dimensions (n + 1 โˆˆ {5, 20, 100} was chosen). Overall we observe successes in more than 63% of the cases, which increases to 77% if ฯƒ โ‰ฅ 2. The details are given in Table 1. The four numbers in every cell count how often the conditions y โ‰ฅ o, (11), (12), and (13) are met for y = p. The last column gives the success rate across all n, cumulated over all cases with x X โˆ’1 x โ‰ฅ 1 ฯƒ . The figures reported in Table 1 are quite encouraging. Remember that we generated matrices Y โˆˆ P โˆฉ N which have a completely positive block X with cp-rank not exceeding n, but we are not sure that Y is completely positive. 
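The instance generation used for this empirical study can be sketched as follows (dimension, seed and the uniform distribution are illustrative choices not stated in the extract): draw a square F with positive entries, set X = FᵀF, draw x ≥ 0 and rescale it so that xᵀX⁻¹x = 1/σ, so that λ_min(X − xxᵀ) grows with σ and σ = 1 yields a singular Y.

```python
import numpy as np

def generate_instance(n, sigma, rng):
    """One test instance as described: square positive F, X = F^T F,
    and x in R^n_+ rescaled so that x^T X^{-1} x = 1/sigma."""
    F = rng.uniform(0.0, 1.0, size=(n, n))          # positive entries (almost surely)
    X = F.T @ F
    x = rng.uniform(0.0, 1.0, size=n)
    scale = np.sqrt(sigma * float(x @ np.linalg.solve(X, x)))
    x = x / scale                                   # now x^T X^{-1} x = 1/sigma
    return F, X, x

rng = np.random.default_rng(0)
for sigma in (1.0, 1.1, 2.0, 100.0):                # the four values used in the study
    F, X, x = generate_instance(4, sigma, rng)      # n + 1 = 5 as in the smallest setting
    y = np.linalg.solve(F.T, x)                     # unique y with F^T y = x (F nonsingular)
    lam_min = np.linalg.eigvalsh(X - np.outer(x, x)).min()
    print(f"sigma={sigma:6.1f}  lambda_min(X - xx^T)={lam_min: .4f}  y >= 0: {bool(np.all(y >= 0))}")
```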
So the decrease of success rates with increasing dimension have to be discounted by the probability that matrices Y of this kind are completely positive with cp-rank not exceeding n + 1. In the generation process, we did not use the construction as in Salce and Zanardo (1993) with nonnegative s ร— n matrix F 0 and y 0 โˆˆ R s + with y 0 y 0 = 1, generating X = F 0 F 0 and x = F 0 y 0 to ensure complete positivity of Y , for the following reason: then we either would have a trivial success for the simulation (if we choose F such Table 2 Success percentages for the problems (6)|(16) with x X + x = 1 ฯƒ . For every cell 100 random (n + 1) ร— (n + 1)-instances were generated ฯƒ n = 4, m = 9 n = 19, m = 99 n = 99, m = 199 succ.for โ‰ฅ ฯƒ 1. or else we would have to pick an essentially different F such that F F = X , which is not obvious at all. Now we turn to cp-ranks possibly exceeding dimension. Here, we must use the problem (6) and one of the problems (16), (17), or (18). For simplicity, and because the differences in Table 1 were not that pronounced with the different approaches, we chose the convex, linearly constrained quadratic problem (16). The MatLab solvers linprog and quadprog were used. We basically follow the same experimental scheme, but for numerical reasons we restrict attention to the non-singular cases ฯƒ โ‰ฅ 1.1. For dimension n + 1 = 5, we allow for m + 1 = n+1 2 = 10 even larger than the maximal cp-rank, for dimension n + 1 = 20 we use (m + 1) = (n + 1) 2 /4 = 100 suggested by Drew et al. (1994), and for dimension n + 1 = 100 we simply double m + 1 = 2(n + 1). For smaller n, we also observed that increasing m in the range between 2n and n 2 /2 increases success probability, so that the generated instances indeed are a priori not too easy for the proposed method. Nevertheless, the previously observed patterns persist and possibly get even more pronounced. Details can be found in Table 2. Finally it is worth mentioning that Kaykobad's sufficient condition (Kaykobad 1987) of diagonal dominance was almost never met in both experiments; this diagonal dominance criterion follows from the results in Salce and Zanardo (1993), in a straightforward way, but appears to be too stringent a sufficient condition on average even for small dimensions. A similar statement holds for the PLSS property discussed shortly before Sect. 3.1. The case of n = 4 is particularly interesting, as we have C = P โˆฉ N in dimension four, but strict inclusion in dimension 5. The papers (Berman and Xu 2004;Burer et al. 2009;Dong and Anstreicher 2010;Loewy and Tam 2003) discuss the 5 ร— 5 case. For illustration, we now specify five instances generated by above construction with ฯƒ = 2. Recall in these cases always necessarily y = p, as F is square nonsingular. illustrates that the condition p โ‰ฅ o from Salce and Zanardo (1993) can be satisfied even if our condition (13) is violated, so that it really pays to combine all methods (although in higher dimensions, importance of the lower-triangular factorization decreases). Explicit factorization of low order matrices We start by discussing the (doubly nonnegative) 2 ร— 2 case. If a diagonal entry is zero, there is only one positive factorization (of rank zero or one). Else, we may and do rescale as before so that that X = 1 a a 1 โˆˆ P โˆฉ N . We may assume 0 โ‰ค a < 1, because otherwise (a = 1 and) X = ee with e = [1, 1]. In this case, there is a factorization Now let us proceed to the three-dimensional case. 
We basically show that either of the factorizations (8) or (5) apply with the same F. So we again consider which is equivalent to stipulate X โˆˆ P โˆฉ N ; x โˆˆ R 2 + ; and X โˆ’ xx โˆˆ P. Again we may and do assume diag X = e = [1, 1] , and X = F F with F as in (19). If X is singular, then X = ee ; the condition ee โˆ’ xx โˆˆ P implies e = ฮฑx for some ฮฑ โ‰ฅ 1 and is the cp factorization of the form (8). However, if X is nonsingular, so is F and can have either sign. Next we distinguish cases according to the sign of y 2 . If y 2 โ‰ฅ 0, then y โˆˆ R 2 + and we arrive at the factorization of the form (5): If, however, y 2 < 0, then the matrix I 2 โˆ’ yy โˆˆ P โˆฉ N (and this is true only if y has at most two nonzero entries of opposite sign, which is guaranteed only for n = 2!), so we may factorize by taking square roots. Indeed, from (9) we have Q = [I 2 โˆ’ yy ] 1/2 = I n โˆ’ ฮฒyy with ฮฒ = 1 ฯ•(y y) โ‰ค 1 which again has no negative entry in this particular case of order two when y 1 y 2 โ‰ค 0. By consequence, and this time straighforwardly as both factors of G are nonnegative, So we obtain the desired factorization of the form (8). This establishes also the well-known fact that the cp-rank of any completely positive 3 ร— 3 matrix is at most three. The elementary argument here differs from the more involved one in Berman and Shaked-Monderer (2003, Cor. 2.13, p. 126) which however establishes factorizations of triangular type whereas the above F is usually more dense, e.g. for the data of Berman and Shaked-Monderer (2003, Ex.2.19,p.126). For alternative, still different factorization results for all matrices of rank 3 (which can result in r โ‰ฅ 4 if n โ‰ฅ 4, cf. Berman and Shaked-Monderer (2003, Ex. 3.1, p. 140)) see Barioli and Berman (2003), Brandts and Kล™รญลพek (2016). Towards larger cp-rank increments Now, if we start with a cp-rank not exceeding n, as for n = 2 or n = 3, and apply the construction of Theorem 3.1 in a recursive way to building completely positive factorization, we can reach only matrices with the same property (cp-rank less or equal order). Indeed, H has r + 1 rows if G has r , like F. The same is intrinsically true if we aim for the factorization (5). Nevertheless, this still has some justification as the solutions of copositive optimization problems arising from most applications are expected to have a low cp-rank. An admittedly heuristic argument goes as follows: imitating the proof of Shaked-Monderer et al. (2015, Thm.3.4), one can show that for every matrix Y โˆˆ C one can construct a matrix Y โˆˆ โˆ‚C with the same or larger cp-rank. But for a boundary point Y = H H we always can find a copositive matrix S โˆˆ C * \ {O} which is orthogonal to it, i.e., trace( Y S) = 0. It follows that all columns of H must be global minimizers (with optimal value zero) of the quadratic form x Sx over the standard simplex x โˆˆ R n + : i x i = 1 . A recent study on this problem (Bomze 2017), corroborated by asymptotic probabilistic results (Chen and Peng 2015;Chen et al. 2013;Kontogiannis and Spirakis 2006 shows that very few S have many global minimizers (although in the worst case there can even coexist an exponential number of them). Anyhow, we can extend the strategy of (22) towards larger cp-rank increments as follows if X โˆ’ xx is completely positive. Note that below arguments do not rely on knowledge of the cp-rank. Indeed, instead of a minimal cp factorization of X we can start with any one. 
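The square-root step used in the 3 × 3 argument above can be verified in one line, assuming (as the stated values φ(0) = 2 and φ(1) = 1 suggest) the explicit form φ(t) = 1 + √(1 − t):

```latex
% Assumed explicit form: \varphi(t) = 1 + \sqrt{1-t}.
\[
t=\mathbf{y}^{\top}\mathbf{y}\le 1,\qquad s=\sqrt{1-t},\qquad
\beta=\frac{1}{\varphi(t)}=\frac{1}{1+s},
\]
\[
\bigl(I-\beta\,\mathbf{y}\mathbf{y}^{\top}\bigr)^{2}
  = I-\bigl(2\beta-\beta^{2}t\bigr)\,\mathbf{y}\mathbf{y}^{\top},
\qquad
2\beta-\beta^{2}t=\frac{2-(1-s)}{1+s}=\frac{1+s}{1+s}=1,
\]
\[
\text{hence}\quad
\bigl[I-\mathbf{y}\mathbf{y}^{\top}\bigr]^{1/2}
  = I-\frac{1}{\varphi(\mathbf{y}^{\top}\mathbf{y})}\,\mathbf{y}\mathbf{y}^{\top}.
\]
```

In the order-two case with y₁y₂ ≤ 0 this square root has no negative entry, as used above: the off-diagonal entries −y₁y₂/φ(yᵀy) are nonnegative, and the diagonal entries 1 − yᵢ²/φ(yᵀy) are nonnegative because φ ≥ 1 and |yᵢ| ≤ 1.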
We will aim at enlarging the number of rows of G which will play the same role in H as in (8). As an aside, we note that by this construction, we get two alternative factorizations for X : the starting one, X = F F and the resulting one, X = xx + G G =F F whereF = [x | G ] has one more column than G , so the latter won't be the minimal one if we succeed with our strategy. The same was already true for Theorem 3.1 where H had one more row than F. First we explain why we can assume without loss of generality that for a solution y โˆˆ R n (with some negative entries) to F y = x, we have equality y = 1: Proposition 7.1 Consider a solution y to F y = x with ฮฝ := y < 1 and definฤ“ y := 1 ฮฝ y as well asx := 1 โˆš ฮฝ x,X := ฮฝ X . Then observing in addition the alternative representation which can be used to build H when starting from the original factorization X = F F, butF seems more natural as it occurs anyhow when buildingH . So let us assume in the sequel that y = 1. The next step will imitate the case y 1 y 2 < 0 in the previous section: decompose y = [u | โˆ’v ] such that u โˆˆ R k + \ {o} and v โˆˆ R m + \ {o} with k + m = r if y โˆˆ R r , i.e., if F is an r ร— n matrix. As u u + v v = y y = 1 we thus have 0 < u < 1 and 0 < v < 1, hence both I k โˆ’ uu and I m โˆ’ vv are positive-definite (but their square roots will have negative entries unless k = 1 or m = 1). We rescale u and v and imitate the high-cp-rank construction by Shaked-Monderer et al. (2013, Prop. 2 .1): consider a matrix with km โ‰ค r 2 /4 rows and k + m = r columns. So, if the signs of entries of y are well balanced, this leaves a chance for a larger increment in cp-rank. Theorem 7.2 With above assumptions and notations, suppose that F F = X and y โˆˆ R r solves F y = x with y = 1. To obtain Q as in (23), decompose where S is a k ร— n matrix and T is an m ร— n matrix. Denote by C := I k โˆ’ uu Proof First recall (9) and that ฯ•(u u) = 1 + v due to u u + v v = y 2 = 1, which yields C 2 = I k โˆ’ uu and likewise D 2 = I m โˆ’ vv . Next abbreviate by f = v u and by g = u v . Then Q = f โŠ— C |D โŠ— g as defined in (23), and so that we have g = C โˆ’1 u and likewise f = D โˆ’1 v. To establish (a), tedious but straightforward calculations yield Q โˆ’1 F = 1 ฮณ f โŠ— (C S) + ฮณ (DT ) โŠ— g, and therefore we arrive at (25). For (b), first note that by definition f f = ฮณ 2 and g g = 1 ฮณ 2 . Next recall that C g = Cg = u and D f = Df = v by construction, so that we obtain Hence G G = F (I r โˆ’ yy )F = X โˆ’ xx . (c) follows as before. Similar strategies as in Sect. 4 can be employed to satisfy (25) which can be rephrased (withลซ = u u In case of square nonsingular F, the only choice for y = p, so the decomposition (24) is predetermined and above sufficient condition can be checked easily. Take for example the data [F 4 , x 4 ] for Y 4 at the end of Sect. 5. After rescaling by ฮฝ = y 4 = 0.7070 to [F 4 ,x 4 ] as in Proposition 7.1, we havฤ“ which not necessarily is completely positive. Observe that the choice [p|R ] = F would lead back to solving F y = x with y = [ฮฑ, q ] โˆˆ R r + . The difference is that, on one hand now y 2 = ฮฑ 2 + q q = 1 automatically by construction (but the previous y โ‰ค 1 posed no restriction as we have seen in Proposition 7.1). On the other hand, now the north-west corner of H can be less than one, which opens more possibilities to proceed similarly as above. Given above empirical evidence, this may be a promising avenue of future research.
Classification of age-related macular degeneration using convolutional-neural-network-based transfer learning Background To diagnose key pathologies of age-related macular degeneration (AMD) and diabetic macular edema (DME) quickly and accurately, researchers attempted to develop effective artificial intelligence methods by using medical images. Results A convolutional neural network (CNN) with transfer learning capability is proposed and appropriate hyperparameters are selected for classifying optical coherence tomography (OCT) images of AMD and DME. To perform transfer learning, a pre-trained CNN model is used as the starting point for a new CNN model for solving related problems. The hyperparameters (parameters that have set values before the learning process begins) in this study were algorithm hyperparameters that affect learning speed and quality. During training, different CNN-based models require different algorithm hyperparameters (e.g., optimizer, learning rate, and mini-batch size). Experiments showed that, after transfer learning, the CNN models (8-layer Alexnet, 22-layer Googlenet, 16-layer VGG, 19-layer VGG, 18-layer Resnet, 50-layer Resnet, and a 101-layer Resnet) successfully classified OCT images of AMD and DME. Conclusions The experimental results further showed that, after transfer learning, the VGG19, Resnet101, and Resnet50 models with appropriate algorithm hyperparameters had excellent capability and performance in classifying OCT images of AMD and DME. . Additionally, nearly 750,000 individuals aged 40 or older suffer from diabetic macular edema (DME) [3], a vision-threatening form of diabetic retinopathy that causes fluid accumulation in the central retina. Many researchers have attempted to develop effective artificial intelligence algorithms by using medical images to diagnose key pathologies of AMD and DME quickly and accurately. Naz et al. [4] addressed the problem of automatically classifying optical coherence tomography (OCT) images to identify DME. They proposed a practical and relatively simple approach to using OCT image information and coherent tensors for robust classification of DME. The features extracted from thickness profiles and cysts were tested using 55 diseased and 53 normal OCT scans in the Duke Dataset. Comparisons revealed that the support vector machine with leave-one-out had the highest accuracy of 79.65%. For identifying DME, however, acceptable accuracy (78.7%) was achieved by using a simple threshold based on the variation in OCT layer thickness. Najeeb et al. [5] used a computationally inexpensive single layer convolutional neural network (CNN) structure to classify retinal abnormalities in retinal OCT scans. After training using an open-source retinal OCT dataset containing 83,484 images from patients, the model achieved acceptable classification accuracy. In a multi-class comparison (choroidal neovascularization (CNV), DME, Drusen, and Normal), the model achieved 95.66% accuracy. Nugroho [6] used various methods, including histogram of oriented gradient (HOG), local binary pattern (LBP), DenseNet-169, and ResNet-50, to extract features from OCT images and compared the effectiveness of handcrafted and deep neural network features. The evaluated dataset contained 32,339 instances distributed in four classes (CNV, DME, Drusen, and Normal). 
The accuracy values for the deep neural network-based methods (88% and 89% for DenseNet-169 and ResNet-50, respectively) were superior to those for the nonautomatic feature models (50% and 42% for HOG and LBP, respectively). The deep neural network-based methods also obtained better results in the underrepresented class. In Kermany et al. [7], a diagnostic tool based on a deep-learning framework was used to screen patients with common treatable blinding retinal diseases. By using transfer learning, the deep-learning framework could train a neural network with a fraction of the data required in conventional approaches. When an OCT image dataset was used to train the neural network, accuracy in classifying AMD and DME was comparable to that of human experts. In a multi-class comparison among CNV, DME, Drusen, and Normal, the framework achieved 96.1% accuracy. In Perdomo et al. [8], an OCT-NET model based on CNN was used for automatically classifying OCT volumes. The OCT-NET model was evaluated using a dataset of OCT volumes for DME diagnosis using a leave-one-out cross-validation strategy. Accuracy, sensitivity, and specificity all equaled 93.75%. The above results of research in AMD indicate that automatic classification accuracy needs further improvement. Therefore, the motivation of this study was to find CNN-based models and their appropriate hyperparameters that use transfer learning to classify OCT images of AMD and DME. The CNN-based models were used for transfer learning included an 8-layer Alexnet model [9], a 22-layer Googlenet model [10], 16-and 19-layer VGG models (VGG16 and VGG19, respectively; [11]), and 18-, 50-and 101-layer Resnet models (Resnet18, Resnet50, and Resnet101, respectively; [12]). The algorithm hyperparameters included optimizer, mini-batch size, max-epochs, and initial learning rate. The experiments showed that, after transfer learning, the VGG19, Resnet101, and Resnet50 models with their appropriate algorithm hyperparameters had excellent performance and capability in classifying OCT images of AMD and DME. This paper is organized as follows. The research problem is described in Sect. 2. Section 3 describes the research methods and steps. Section 4 presents and discusses the results of experiments performed to evaluate performance in classifying OCT images of AMD and DME. Finally, Sect. 5 concludes the study. AMD and DME The macula, which is located in the center of the retina, is essential for clear visualization of nearby objects such as faces and text. Various eye problems can degrade the macula and, if left untreated, can even cause loss of vision. Age-related macular degeneration is a medical condition that can cause blurred vision or loss of vision in the center of the visual field. Early stages of AMD are often asymptomatic. Over time, however, gradual loss of vision in one or both eyes may occur. Loss of central vision does not cause complete blindness but can impair performance of daily life activities such as recognizing faces, driving, and reading. Macular degeneration typically occurs in older people. The classifications of AMD are early, intermediate, and late. The late type is further classified as "dry" and "wet" [13]. In the "dry" type, which comprises 90% of AMD cases, retinal deterioration is associated with formation of small yellow deposits, known as Drusen, under the macula. In the "wet" AMD type, abnormal blood vessel growth (i.e., CNV) occurs under the retina and macula. 
Bleeding and fluid leakage from these new blood vessels can then cause the macula to bulge or lift up from its normally flat position, thus distorting or destroying central vision. Under these circumstances, vision loss may be rapid and severe. A DME is characterized by breakdown of blood vessel walls in the retina resulting in accumulation of fluid and proteins in the retina. The resulting distortion of the macula then causes visual impairment or loss of visual acuity. One precursor of DME is diabetic retinopathy, in which blood vessel damage in the retina causes visual impairment [5]. OCT images of AMD and DME In this study, all OCT images of AMD and DME used in the experiments were obtained from Kermany et al. [14]. The images were divided into four classes: CNV, DME, Drusen, and Normal. Figure 1 shows representative images of the four OCT classes. Considered problem The considered problem was how to classify large numbers of different OCT images of CNV, DME, Drusen, and Normal efficiently and accurately. Since OCT images of CNV, DME, Drusen, and Normal can differ even for the same illness, a specialist or machine learning is needed to assist the physician in classifying the images. Methods The research methods and steps were collecting data, processing OCT images of AMD and DME, selecting a pre-trained network for transfer learning, classifying OCT images of AMD and DME by CNN-based transfer learning, comparing performance among different CNN-based transfer learning approaches, and comparing performance with other approaches in classifying OCT images of AMD and DME. The detailed steps were as follows. Collecting data and processing OCT images of AMD and DME The OCT images of AMD and DME in Kermany et al. [14] were split into a training set and a testing set of images. The training set had 83,484 images, including 37,205 CNV images, 11,348 DME images, 8,616 Drusen images, and 26,315 images of a normal eye condition. The testing set used for network performance benchmarking contained 968 images, 242 images per class. To maintain compatibility with the CNN-based architecture, each OCT image was processed as a 224 ร— 224 ร— 3 image, where 3 is the number of color channels. Selecting pre-trained network for transfer learning Transfer learning is a machine learning method in which a model developed for a task is reused as the starting point for a model developed for another task. In transfer learning, pre-trained models are used as the starting point for performing computer vision and natural language processing tasks. Transfer learning is widely used because it reduces the computation time, the computational resources, and the expertise needed to develop neural network models for solving these problems [15]. In his NeurIPS 2016 tutorial, Ng [16] highlighted the potential uses of transfer learning and predicted that, after supervised learning, transfer learning will be the next major commercial application of machine learning. In transfer learning, a pre-trained model is used to construct a predictive model. Thus, the first step is to select a pre-trained CNV DME Drusen Normal Fig. 1 Representative optical coherence tomography images of the CNV, DME, Drusen, and Normal classes. CNV choroidal neovascularization, DME diabetic macular edema source model from available models. The pool of candidate models may include models developed by research institutions and trained using large and complex datasets. The second step is to reuse the model. 
The pre-trained model can then be used as the starting point for a model used to perform the second task of interest. This may involve using all or parts of the model, depending on the modeling technique used. The third step is to tune the model. Depending on the input-output pair data available for the task of interest, the user may consider further modification or refinement of the model. The widely used commercial software program Matlab R2019 by MathWorks has been validated as effective for pre-training neural networks for deep learning. The starting point for learning a new task was pretraining, in which the image classification network was pretrained to extract powerful and informative features from natural images. Most pre-trained networks were trained with a subset of the ImageNet database [17] used in the ImageNet Large-Scale Visual Recognition Challenge [18]. After training on more than 1 million images, the networks could classify images into 1000 object categories, e.g., keyboard, coffee mug, pencil, and various animals. Transfer learning in a network with pre-training is typically much faster compared to a network without pre-training. Classifying OCT images of AMD and DME by CNN-based transfer learning Fine-tuning a pre-trained CNN with transfer learning is often faster and easier than constructing and training a new CNN. Although a pre-trained CNN has already learned a rich set of image features, it can be fine-tuned to learn features specific to a new dataset, in this case, OCT images of AMD and DME. Fine-tuning a network is slower and requires more effort than simple feature extraction. However, since the network can learn to extract a different feature set, the final network is often more accurate. The starting point for fine tuning deeper layers of the pre-trained CNNs for transfer learning (i.e., Alexnet, Googlenet, VGG16, VGG19, Resnet18, Resnet50, and Resnet101) was training the networks with a new data set of OCT images of AMD and DME. Figure 2 is a flowchart of the CNN-based transfer learning procedure. Classification performance in comparison with other approaches The accuracy, precision, recall (i.e., sensitivity), specificity, and F 1 -score values were used to compare performance with other approaches. Precision was assessed by positive predictive value (number of true positives over number of true positives plus number of false positives). Recall (sensitivity) was assessed by true positive rate (number of true positives over the number of true positives plus the number of false negatives). Specificity was measured by true negative rate (number of true negatives over the number of false positives plus the number of true negatives). The F 1 -score, a function of precision and recall, was used to measure prediction accuracy when classes were very imbalanced. In information retrieval, precision is a measure of the relevance of results while recall is a measure of the number of truly relevant results returned. The formula for F 1 -score is Results The proposed CNN-based transfer learning method with appropriate hyperparameters was experimentally used to classify OCT images of AMD and DME. The OCT images in Kermany et al. [14] were used to train models and to test their performance. The experimental environment was Matlab R2019 and its toolboxes developed by The MathWorks. 
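The study itself performed this fine-tuning in Matlab R2019. Purely as an illustration of the workflow just described (resize each OCT image to 224 × 224 × 3, start from an ImageNet-pretrained network, replace the classification head with a four-class output for CNV, DME, Drusen and Normal, then retrain with the reported hyperparameters), a rough PyTorch equivalent might look as follows; the folder path, the torchvision weight identifiers and the momentum value are assumptions, not part of the original work.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Pre-process each OCT image to 224 x 224 x 3, as described in the text.
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("OCT/train", transform=tfm)          # hypothetical folder layout
loader = torch.utils.data.DataLoader(train_set, batch_size=40,        # mini-batch size 40
                                     shuffle=True)

# Start from a network pre-trained on ImageNet and replace the classifier head
# so it outputs the four classes CNV, DME, Drusen, Normal (transfer learning).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 4)

# Stochastic gradient descent with momentum ('sgdm') and initial learning rate 1e-4,
# mirroring the reported settings; the momentum value 0.9 is an assumption.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                       # MaxEpochs = 5
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```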
The network training options were the options available in the Matlab toolbox for CNNbased transfer learning with algorithm hyperparameters, i.e., 'Optimizer' , 'MiniBatch-Size' , 'MaxEpochs' (maximum number of epochs), and 'InitialLearnRate' . The experimental data for OCT images of AMD and DME included a training set and a testing set. To maintain compatibility with the CNN-based architecture, each OCT image was processed as a 224 ร— 224 ร— 3 image, where 3 is the number of color channels. Table 1 shows the training and testing sets of OCT images of AMD and DME. For training, different CNN-based models require different algorithm hyperparameters. The hyperparameter values are set before the learning process begins. Table 2 shows the selected CNN-based models with algorithm hyperparameters. The training option was use of 'sgdm' , a stochastic gradient descent with a momentum optimizer. MiniBatchSize used a mini-batch with 40 observations at each iteration. MaxEpochs set the maximum number of epochs for training. InitialLearnRate was an option for dropping the learning rate during training. For each CNN-based model, Table 3 shows the accuracy in each experiment, the average accuracy for all experiments, and the standard deviation (SD) in accuracy in classifying OCT images of AMD and DME. Data are shown for five independent runs of the experiments performed in the training set and in the testing set. Table 3 shows that the average accuracy in the testing set ranged from 0.9750 to 0.9942 when using the CNN-based models with appropriate hyperparameters for transfer learning. For the testing set, the VGG19, Resnet101, and Resnet50 models had average accuracies of 0.9942, 0.9919, and 0.9909, respectively, which were all very high (all exceeded 0.99). Moreover, the SDs in accuracy obtained by VGG19 and Resnet101 were all 0.0005. That is, the VGG19 and Resnet101 had the most robust performance in classifying OCT images of AMD and DME. accuracy for the training set, and the black line shows the progressive improvement in accuracy for the testing set. Figures 4 and 5 show how model training progressively improved accuracy in Resnet101 and Resnet50, respectively. The training option was sgdm optimizer. Mini-BatchSize used 40 observations at each iteration. Iterations per epoch were 2087. Max-Epochs were set to 5. Therefore, the maximum iterations were 10,435(= 2087 ร— 5). The blue line shows the progressive improvement in accuracy when using the training set, and the black line shows the progressive improvement in accuracy when using the testing set. The accuracy metric was used to measure the transfer learning performance of the CNN-based models. Precision, recall, specificity, and F 1 -score were further used to validate classification performance. The results were depicted by creating a confusion matrix of the predicted labels versus the true labels for the respective disease classes. Tables 4, 5 and 6 show the confusion matrices used in multi-class comparisons of Normal, CNV, DME, and Drusen for VGG19, Resnet101, and Resnet50 for the testing data. Table 4 shows that, in Experiment #5, VGG19 achieved an accuracy of 0.9948 with an average precision of 0.9949, an average recall of 0.9948, an average specificity of 0.9983, and an average F 1 -score of 0.9948. Table 5 shows that, in Experiment #5, Resnet101 achieved an accuracy of 0.9928 with an average precision of 0.9928, an average recall of 0.9928, an average specificity of 0.9976, and an average F 1 -score of 0.9928. 
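Because the per-class results above are reported through confusion matrices, the following small sketch (with a made-up 4 × 4 matrix, not the paper's data) shows how accuracy, precision, recall/sensitivity, specificity and the F1-score are derived from such a matrix and macro-averaged over the four classes, exactly as in the tables.

```python
import numpy as np

# Hypothetical confusion matrix: rows = true class, columns = predicted class,
# classes ordered CNV, DME, Drusen, Normal; the counts are illustrative only.
cm = np.array([[240,   1,   1,   0],
               [  2, 239,   0,   1],
               [  1,   0, 240,   1],
               [  0,   1,   0, 241]])

total = cm.sum()
tp = np.diag(cm).astype(float)
fp = cm.sum(axis=0) - tp          # predicted as the class but actually another class
fn = cm.sum(axis=1) - tp          # belonging to the class but predicted as another class
tn = total - tp - fp - fn

accuracy    = tp.sum() / total
precision   = tp / (tp + fp)                         # positive predictive value
recall      = tp / (tp + fn)                         # sensitivity / true positive rate
specificity = tn / (tn + fp)                         # true negative rate
f1          = 2 * precision * recall / (precision + recall)

print(f"accuracy            {accuracy:.4f}")
print(f"average precision   {precision.mean():.4f}")
print(f"average recall      {recall.mean():.4f}")
print(f"average specificity {specificity.mean():.4f}")
print(f"average F1-score    {f1.mean():.4f}")
```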
Table 6 indicates that, in Experiment #5, Resnet50 achieved an accuracy of 0.9917 with an average precision of 0.9918, an average recall of 0.9917, an average specificity of 0.9972, and an average F 1 -score of 0.9917. Next, the performance of the proposed CNN-based transfer learning approach in classifying OCT images of AMD and DME was compared with the results reported in Kermany et al. [7], Najeeb et al. [5], and Nugroho [6]. Table 7 shows the confusion matrix for Normal, CNV, DME, and Drusen obtained by Kermany et al. [7]. The model in Kermany et al. [7] achieved an accuracy of 0.9610 with an average precision of 0.9610, an average recall of 0.9613, an average specificity of 0.9870, and an average F 1 -score of 0.9610. Table 8 shows the confusion matrix for Normal, CNV, DME, and Drusen obtained by Najeeb et al. [5]. The model in Najeeb et al. [5] achieved an accuracy of 0.9566 with an average precision of 0.9592, an average recall of 0.9566, an average specificity of 0.9855, and an average F 1 -score of 0.9563. For the testing set, Table 9 shows the classifier accuracy, average precision,average recall/sensitivity, average specificity, and average F 1 -score obtained by the different CNNbased models. When the testing set was used in Experiment #5, the accuracies obtained by VGG19, Resnet101, and Resnet50 were 0.9948, 0.9928, and 0.9917, respectively, which are all very high and were superior to the accuracies obtained by the models in Kermany et al. [7], Najeeb et al. [5], and Nugroho [6]. In Experiment #5, other measures (i.e., average precision, average recall/sensitivity, average specificity, and average F 1 -score) obtained byVGG19, Resnet101, and Resnet50 were higher than those obtained by the models in Kermany et al. [7], Najeeb et al. [5], and Nugroho [6]. That is, by using transfer learning with Discussions In this study, the appropriate algorithm hyperparameters for CNN-based transfer learning were very important for classifying OCT images of AMD and DME. This phenomenon was demonstrated by experiments in which the VGG19, Resnet50, and Resnet101 models achieved a classification accuracy exceeding 99%. If an inappropriate combination of algorithm hyperparameters is used, the classification accuracy will be reduced. For example, the algorithm hyperparameters for Googlenet transfer learning and the results in Table 10 indicates that an appropriate set of hyperparameters can provide good performance for transfer learning, where Optimizer of sgdm and InitialLearnRate of 10 -4 are identical. Therefore, the combination of algorithm hyperparameters of the third case (i.e., Optimizer of sgdm, MiniBatch-Size of 40, MaxEpochs of 5, and InitialLearnRate of 10 -4 ) was selected for the study because it achieved high accuracy in the training and testing sets. Tables 11 and 12 show the algorithm hyperparameters for Resnet50 and Resnet101 transfer learning and their respective results. Tables 11 and 12 show that, if all other hyperparameter are identical (Optimizer of sgdm, MiniBatchSize of 40, and InitialLearnRate of 10 -4 ), changing MaxEpochs from 3 to 5 improves accuracy for the test set by more than 0.99. Therefore, this combination of algorithm hyperparameters (i.e., Optimizer of sgdm, MiniBatchSize of 40, MaxEpochs of 5, and InitialLearnRate of 10 -4 ) was selected for Resnet50 and Resnet101 transfer learning in classifying OCT images of AMD and DME. Figure 6 displays four sample images with predicted labels and the predicted probabilities of images with those labels. 
For the four randomly selected sample images, the predicted labels agreed with the expected categories and the predicted probabilities approached 100%, indicating that the model established by CNN-based transfer learning had high classification ability. At present, CNN-based transfer learning is very efficient and stable [19,20]. The key to successful image classification is ensuring that the original images are correctly labelled. This was demonstrated by the experiments in this study, in which the CNN-based models achieved a classification accuracy exceeding 99%. Therefore, CNN-based transfer learning with appropriate hyperparameters performed best in classifying OCT images of AMD and DME. Conclusions This study used CNN-based transfer learning with appropriate algorithm hyperparameters to classify OCT images of AMD and DME effectively. The main contribution of this study is the confirmation that suitable CNN-based models, with appropriate algorithm hyperparameters, can use transfer learning to classify OCT images of AMD and DME.
Socioโ€economic disadvantage is associated with heavier drinking in high but not middleโ€income countries participating in the International Alcohol Control Study Abstract Introduction and Aims To investigate if socioโ€economic disadvantage, at the individualโ€ and countryโ€level, is associated with heavier drinking in some middleโ€ and highโ€income countries. Design and Methods Surveys of drinkers were undertaken in some highโ€ and middleโ€income countries. Participating countries were Australia, England, New Zealand, Scotland (highโ€income) and Peru, Thailand and Vietnam (middleโ€income). Disadvantage at the countryโ€level was defined as per World Bank (categorised as middleโ€or highโ€income); individualโ€level measures were (i) years of education and (ii) whether and individual was under or over the poverty line in each country. Measures of heavier drinking were (i) proportion of drinkers that consumed 8+ drinks and (ii) three drinking risk groups (lower, increasing and higher). Multiโ€level logistic regression models were used. Results Individualโ€level measures of disadvantage, lower education and living in poverty, were associated with heavier drinking, consuming 8+ drinks on a typical occasion or drinking at the higher risk level, when all countries were considered together. Drinkers in the middleโ€income countries had a higher probability of consuming 8+ drinks on a typical occasion relative to drinkers in the highโ€income countries. Interactions between countryโ€level income and individualโ€level disadvantage were undertaken: disadvantaged drinkers in the middleโ€income countries were less likely to be heavier drinkers relative to those with less disadvantage in the highโ€income countries. Discussion and Conclusions Associations between socioโ€economic disadvantage and heavier drinking vary depending on countryโ€level income. These findings highlight the value of exploring crossโ€country differences in heavier drinking and disadvantage and the importance of including countryโ€level measurements to better elucidate relationships. Introduction Several studies have been undertaken within countries to understand how socio-economic status is related to heavier alcohol consumption, for example, [1]. Although study methods and measures are continually being refined, no clear picture has yet emerged. The most common pattern seen in high-income countries is that those of higher socio-economic status are more likely to consume alcohol more frequently than those of lower status, but those of lower status consume more alcohol in total (and more on a typical occasion) [1][2][3]. A recent study conducted in two countries; a high-income and an upper-middle income country, found no inequalities in heavy episodic drinking in Chile (upper-middle income), but in Finland heavy episodic drinking was more prevalent among those with lower education, however, women of higher education were also more likely to consume heavily [1]. There is some evidence that in middle-income countries (e.g. Brazil and Russia) high socio-economic status is associated with heavier consumption [4,5]. However, a different study from Russia found higher odds of hazardous drinking among those who were least educated and were not in employment [6]. 
One study assessed the impact of educational level in 15 countries, of which 13 were high-income and two were middle-income countries, and found within each of the two middle-income countries, those in the higher educated groups were more likely to consume alcohol in a risky manner [2]. These studies provide limited evidence that patterns of heavier drinking may differ by level of income in countries. To the best of our knowledge, no studies have utilised multi-level modelling to measure how countrylevel factors may interact with individual-level measures of socio-economic status and heavier drinking. Grittner et al. [7], although not directly assessing drinking patterns, conducted a cross-country study of 25 countries comprised of high-, middle-and lowincome to understand how social inequalities and gender differences affected the experience of self-reported alcohol-related problems. Multi-level modelling allowed for assessment of country-level indicators of inequality along with individual-level education measures. The findings showed men in lower income countries were more likely to report alcohol-related social problems [7]. This study suggests that taking account of country-level factors, along with individuallevel variables, in understanding impacts of socioeconomic status is important. Previous cross-country studies to date have tended to use years of education as a measure of socioeconomic status [1,7]. Measures of education status have advantages in that they tend to represent the construct of socio-economic status quite well and are less likely to change over time relative to other measures such as income [8]. In the current study we use years of education grouped into low, medium and high. Income is used less often in relevant cross-country studies. Household income, while a more inclusive measure of socio-economic status than personal income, cannot be adequately determined as lower or higher unless equivalised to yield a representative income. In this current study, we use equivalised household income to first determine income and then to assign respondents to being above or below the poverty line in their respective countries as a way to conceptualise those who are disadvantaged versus not disadvantaged. We also include at the country-level whether the country is classified as a middle-or highincome country [9] to conceptualise disadvantage at the country-level. The countries included in the current study differ in terms of prevalence of alcohol use and estimated per capita levels of consumption (per capita higher in middle-income countries for drinkers [10]). Highincome countries had higher prevalence levels (84% in Australia and UK, New Zealand 79.5%). A lower level of prevalence was apparent in the middle-income countries (Thailand 29.7%, Peru 55.4%, Vietnam 38.3% [11]). As previous studies, for example, Probst, Manthey and Rehm [12], have shown that lifetime abstention is associated with lower country-level income relative to high-income and given the stark variation in abstention rates, a country-level measure of abstention for each country was included in the current study as a potential explanatory variable. To the best of our knowledge, no cross-country study has assessed relationships between disadvantage and heavier drinking using both country-level and individual-level measures. This study will therefore assess if socio-economic disadvantage, at the individuallevel and country-level, is associated with heavier drinking in some middle-and high-income countries. 
Methods The following countries were included in the current study: Australia, England, Scotland, New Zealand (high-income), Peru, Thailand and Vietnam (middleincome). Inclusion in the study depended on the availability of household composition data to allow for equalisation of income. Sampling methods were designed to obtain a random representative sample and each country utilised the sampling frame that was most appropriate in their context. Either multi-stage sampling of geographical units or telephone samples were used to represent the countries (although the samples in Vietnam and Peru were sub-national). For further details on sampling please see Huckle et al. 2018 [13]. Interviews were conducted via computer-assisted interviewing either over the phone or face-to-face using android tablets. A screening interview established eligibility for participation (drinking in the last 6 months and age 16-65 years) and one respondent was selected at random from the household. Additional screening criteria for Australia meant that a larger proportion of risky drinkers, defined as consuming more than five drinks at least once a month, were included than would otherwise be obtained in a random sample. This has been accounted for with weighting in the current paper. Response rates were calculated using American Association for Public Opinion Research formula #3 (or more stringent formulas) [14]. The years in which data collection occurred in each Sample sizes of drinkers included for the analyses for each country can be found in Table 1. Drinkers who were not within the age range 18-65 years or had missing income data were excluded from the samples. Country-level measures High-and middle-income. Countries were categorised into high-or middle-income based on World Bank categories. During the period of the current study highincome countries had a gross national income per capita > US$12 615 (approximately, the thresholds differ by year) and middle-income countries had a gross national income per capita below this but above US$1025. For the purposes of this analysis, the upper-and lower middle-income were grouped as middle-income [9,15]. Country-level prevalence of alcohol consumption. Abstention rates in the past 12 months for each country were obtained from the Global Information System on Alcohol and Health 2010 [16], as the IAC study samples included in this study comprised drinkers only. Individual-level measures All individual-level survey measures had a reference period of the past 6 months. Alcohol consumption outcome measures. Consumption data were collected using a beverage-and locationspecific measure. Respondents reported on their drinking in a number of specified locations plus any additional locations they drank at. For each place, they were asked how often they drank there and what they would drink on a typical occasion at that location [17]. The locations asked about in each country were adapted to the context and reflected the full range of drinking locations in that context as were the beverages that also included unrecorded beverages. This information was then used to calculate the typical occasion quantity and frequency of drinking (please see Huckle et al. [13] for further details). Measures for analysis were then derived as: 1. Heavier drinking: the proportion of respondents consuming 8+ drinks on a typical occasion within the previous 6 months versus not (a drink was defined as 15 mL absolute alcohol in each country). 2. 
Measures for analysis were then derived as:
1. Heavier drinking: the proportion of respondents consuming 8+ drinks on a typical occasion within the previous 6 months versus not (a drink was defined as 15 mL of absolute alcohol in each country).
2. Risk categories: the risk categories used in the analysis were designed to reflect the evidence presented in Rehm et al. [18] and Ref. [19]:
• Low risk: up to four drinks on an occasion, OR 4-6 drinks on an occasion less than once a week.
• Increased risk: 4-6 drinks on an occasion at least once a week, OR 6+ drinks on an occasion less than once a week.
• Higher risk: 6+ drinks on an occasion at least once a week.

Disadvantage measures. Education: education in years for each respondent was grouped as <10 years (low), 11-12 years (medium) and 13+ years (high) [as per 7]. Poverty line: respondents in each country were categorised as either below or above the poverty line (based on equivalised household income).

Equivalised household income. In order to determine which drinkers in each country were below or above the poverty line, we first 'equivalised' household income to account for the fact that households contain different numbers of individuals. The number and ages of individuals in each household were available from a separate survey question in each country. In New Zealand, household composition data were not complete. Some data were taken from the 2013 follow-up IAC survey and, for remaining missing data, imputation was used to assign the average number of adults and children in that household based on 2013 census data (according to the number of eligible adults between 16 and 65 years of age living in the household in 2011). Seventeen percent of respondents had missing income data after this process. Household income was then equivalised by dividing total household income by the square root of the total number of household members. This is a method used by the Organisation for Economic Co-operation and Development for comparing income across countries [20]. Respondents were assigned as above or below the poverty line by obtaining the poverty line in each country, from different sources and with the assistance of the participating countries. The poverty line was expressed as the income required to keep an adult out of poverty (for the high-income countries poverty is defined relatively, whereas for the lower-income countries it is usually expressed as the cost of a basket of essential goods). Where the poverty line referred to a year other than the survey year, it was adjusted for the local rate of consumer price inflation. A respondent was assigned as being below the poverty line if they belonged to a household whose equivalised income was less than this hurdle income. Poverty was therefore measured as absolute poverty within each respective country. Missing income data ranged across countries: Australia 33%, England 27%, Scotland 29%, New Zealand 33% (with the additional 17% of respondents for whom household size could not be determined, 50% of the New Zealand data were missing for income), Thailand 3%, Peru 7%, Vietnam 23%.
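The equivalisation, poverty-line and risk-category rules described above can be summarised in a short sketch. It is illustrative only: the function names, the single-adult poverty threshold argument and the handling of the category boundaries are assumptions, not the study's implementation.

```python
# Illustrative sketch of the OECD square-root equivalisation, the poverty-line assignment
# and the drinking risk categories described above. Thresholds, argument names and the
# treatment of category boundaries are assumptions for illustration.
import math

def equivalised_income(household_income: float, household_size: int) -> float:
    # OECD square-root scale: divide total household income by sqrt(number of members)
    return household_income / math.sqrt(household_size)

def below_poverty_line(household_income: float, household_size: int,
                       poverty_line_single_adult: float) -> bool:
    # "Below" means the equivalised household income falls under the (inflation-adjusted)
    # income needed to keep one adult out of poverty in that country.
    return equivalised_income(household_income, household_size) < poverty_line_single_adult

def risk_category(drinks_per_occasion: float, occasions_per_week: float) -> str:
    # Risk groups reflecting Rehm et al. [18]: occasion quantity combined with frequency.
    # Handling of the exact 4- and 6-drink boundaries is a simplification.
    if drinks_per_occasion >= 6 and occasions_per_week >= 1:
        return "higher"
    if (4 <= drinks_per_occasion < 6 and occasions_per_week >= 1) or \
       (drinks_per_occasion >= 6 and occasions_per_week < 1):
        return "increased"
    return "low"
```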
Statistical modelling

SAS 9.3 was used both to compute descriptive statistics and to fit multi-level logistic regression models. For the country-grouped data, two different models were fitted. The heavier drinking dichotomous outcome was analysed assuming a Bernoulli distribution with a logit link function; here the probability of being a heavier drinker depends on gender, age, level of education, poverty line and high- or middle-income country level. Level of education and gender were considered as random effects. The three-level drinking risk group outcome was analysed by fitting a multinomial distribution with a logit link function and the same covariate specification. In particular, a polytomous logistic regression model was used, since the proportional odds assumption required for ordinal logistic regression was rejected. Gender was included as a random effect. Age was centred about the mean to allow interpretation against the intercept. In the multi-level models, the inclusion of varying intercepts and varying slopes was considered for all the covariates (for example, gender and age). After assessing the statistical significance of the variance associated with each random effect, the best-fitting models were reported; model assumptions and potential outliers were checked, and Wald and likelihood ratio tests were used jointly with standard model selection criteria (likelihood-based measures such as the Akaike Information Criterion and Bayesian Information Criterion) to discriminate among models. We also considered the country-level measure of abstention in the modelling; however, it was removed since it was positively correlated with the country-level income variable. Interactions between country-level and individual-level variables were also tested in both models. Given the small number of countries, we also fitted the same models in a Bayesian framework with non-informative prior distributions for the parameters. The estimates obtained were very similar, reflecting no influence of the chosen priors on the posterior distribution and leading to the same inferential conclusions, and as such are not reported here [21,22]. Analyses presented were run on individuals with complete data only. While missing data for most variables were minimal, there was considerable missing income data in some countries. As such, the heavier drinking model (8+ drinks) was first run excluding the individual-level poverty line (based on income), which provided a more complete dataset, and then with the individual-level poverty line included. The addition of the poverty line did not change the findings (not reported here).
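As a rough illustration of the model structure, the specification for the dichotomous outcome can be written as a single-level logistic regression with the same covariates and cross-level interactions. This is not the SAS multi-level model used in the study: the random effects for education and gender, the multinomial model for the risk groups and the survey weights are all omitted, and the variable names are assumptions.

```python
# Simplified, single-level approximation of the heavier-drinking model structure
# (fixed effects and cross-level interactions only; the study's random effects,
# multinomial risk-group model and weights are omitted). Column names are assumed.
import pandas as pd
import statsmodels.formula.api as smf

def fit_heavier_drinking_model(df: pd.DataFrame):
    """df columns assumed: heavier_drinker (0/1), age_centred, male (0/1),
    education ('low'/'medium'/'high'), below_poverty (0/1),
    country_income ('middle'/'high')."""
    formula = (
        "heavier_drinker ~ age_centred + male"
        " + C(education, Treatment('high')) * C(country_income, Treatment('high'))"
        " + below_poverty * C(country_income, Treatment('high'))"
    )
    return smf.logit(formula, data=df).fit()
```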
Results

In the high-income countries, the proportions of males and females were roughly equal. In two of the middle-income countries, males comprised the majority of drinkers (Thailand and Vietnam); in Peru, more drinkers were female. The most populated age groups for drinkers, as documented by the surveys, were 25-34, 35-44 and 45-54 years in all countries except Peru, where 18-24, 25-34 and 45-54 years were most populated. In Vietnam, the 55-65 years age group was among the most populated (Table 1). The percentage of those with low education varied across countries. The countries with the greatest percentages of drinkers with low education were Peru (55%), Thailand (52%) and Vietnam (71%). In Australia, England and Scotland the majority of drinkers were highly educated (Table 1). The percentage of drinkers living below the poverty line ranged from 5% in Vietnam to 14% in New Zealand (Table 1). The percentage of drinkers consuming eight or more drinks on a typical occasion ranged from 8% in New Zealand to 16% in Thailand and Vietnam (Table 1). The percentage of drinkers in the higher risk group ranged from 2% in Peru (due to lower frequency of drinking) to 28% in Scotland (Table 1).

8+ drinks on a typical occasion

Table 2 shows the results of the multi-level model assessing consumption of 8+ drinks on a typical occasion, including all countries. Being of lower age and being male (compared to female) were associated with a greater likelihood of consuming 8+ drinks on a typical occasion (Table 2). Drinkers with low education had a greater likelihood of consuming 8+ drinks on a typical occasion compared to drinkers with high education; the same result was found for drinkers with medium education, although the magnitude of the effect was smaller (Table 2). Drinkers living under the poverty line had a greater likelihood of consuming 8+ drinks on a typical occasion compared to drinkers above the poverty line (Table 2). A significant interaction was found between country-level income and education: the probability of being a heavier drinker was lower for drinkers with low education living in the middle-income countries compared to drinkers with high education in the high-income countries (Table 2). A significant interaction was also found between country-level income and poverty line: the probability of being a heavier drinker was lower for drinkers living under the poverty line in the middle-income countries compared to drinkers above the poverty line in the high-income countries (Table 2).

Risk categories (low, increased and higher)

Table 3 shows the results of the multi-level model assessing risk categories, including all countries. Drinkers of a lower age were more likely to be in the increased and higher risk categories than those of older age (Table 3). The probability of being in the increased risk group compared to the low risk group was higher for male drinkers than for female drinkers; the same result was found for the higher risk group, but the magnitude of the effect was larger (Table 3). The probability of those with low education being in the higher risk group compared to the low risk group was higher relative to those with high education. For those with a medium level of education, the probability of being in the increased and higher risk groups compared to low risk was also higher (compared to those with high education) (Table 3). The likelihood of being in the increased or higher risk groups compared to low risk was lower for drinkers in the middle-income countries than for those in the high-income countries (Table 3). A significant interaction was found between education and country-level income: the probability of higher risk group membership (compared to low risk) was lower for drinkers with low education living in the middle-income countries compared to drinkers with high education in the high-income countries; the same interaction effect was found for medium education (Table 3). A significant interaction was also found between country-level income and poverty line: the likelihood of higher risk group membership (compared to low risk) was lower for drinkers living under the poverty line in the middle-income countries compared to drinkers above the poverty line in the high-income countries (Table 3).

Discussion

Individual-level measures: education and poverty line

Several key findings emerged from this study, the first being that individual-level disadvantage as measured by education was associated with heavier drinking. Drinkers with low or medium education were more likely to be heavier consumers of alcohol (8+ drinks), with the magnitude of the effect being larger for drinkers with low education. When frequency was considered along with higher typical occasion quantity, as measured by the drinking risk groups, low education was related to higher risk group membership, as was medium education.
These individual-level education findings confirm what is commonly known from the literature with respect to high-income countries: that lower education is generally associated with heavier drinking, e.g. greater quantity and heavy episodic drinking [1-3]. We also found that drinkers living below the poverty line across countries had a greater probability of consuming 8+ drinks on a typical occasion or of being in the higher risk group (over and above the effect of education). This suggests that the burden of heavier alcohol consumption is falling on drinkers at the most vulnerable end of the socio-economic gradient. Those living in poverty are likely to experience compounding associations, such as exposure to more adverse environmental settings related to alcohol, for example the higher density of alcohol outlets found in areas of high deprivation (e.g. [23,24]), which likely also brings exposure to more advertising via shop fronts, as well as exposure to adverse household-level conditions of stress [25,26]. It is also likely that those living in poverty have fewer resources to protect against the adverse impacts of alcohol consumption [26].

Country-level income

Country-level income had independent associations with heavier drinking patterns. Drinkers in the middle-income countries had a higher probability of consuming 8+ drinks on a typical occasion relative to drinkers in the high-income countries. However, for the risk groups based on both quantity and frequency, the likelihood of being in the increased or higher risk groups was higher for drinkers in the high-income countries. This could be because higher frequency of drinking is more common in the participating high-income countries [27].

Interactions between country-level income and individual-level disadvantage measures

An important part of the current study was to assess how including country-level income affected the relationship between the individual-level measures of disadvantage and alcohol consumption. Interactions between country-level income (middle vs. high) and measures of disadvantage (low education and being under the poverty line) revealed that drinkers with greater disadvantage in the middle-income countries were less likely to be heavier drinkers than those with less disadvantage in the high-income countries. In other words, of two people with the same low level of education, the person in the high-income country has a higher probability of being a heavier drinker than the person in the middle-income country. This was found for both outcome measures, 8+ drinks on a typical drinking occasion and the drinking risk groups. This is similar to findings from the limited previous studies showing that higher socio-economic status is associated with heavier drinking in some middle-income countries [2,4,5]. It also suggests that differences in country-level factors could be contributing to mixed findings in the literature about how socio-economic status relates to heavier consumption. The result in our middle-income countries may relate to the affordability of alcohol, with alcohol being less affordable in several of the participating middle-income countries relative to the high-income countries [29]. There may also be different cultural factors at play; for example, in Vietnam, higher education is associated with consuming more alcohol, as people with higher education tend to have more prominent roles in society and are susceptible to the social norms encouraging drinking within this group [30].
In addition, commercial alcohol is more expensive in Vietnam, and it is more strongly related to heavier drinking than informal alcohol [31].

Limitations

Missing income data are common in alcohol surveys and could have biased the results. In all the high-income countries, around one third of income data were missing, and a higher proportion was missing for New Zealand due to the additional 17% of missing household size data (needed to calculate equivalised income). However, adding income (in this case as it related to the poverty line) as the last variable in a step-wise process in the modelling did not change the findings. This not only provides confidence in the results but also suggests that education by itself can likely do a suitable job in future cross-country analyses, given both the complexities of generating comparable income data across countries and the fact that the magnitude of effect contributed by the individual-level income data, over and above education and the country-level income variable, was relatively small. In some countries, districts or municipalities rather than nationwide areas were sampled, and this needs to be taken into account when interpreting the results. Response rates were high in all countries except Australia, England and Scotland (although the Australian response rate was in the normal range for telephone surveys in Australia) [32]. Post-stratification weights were calculated and applied in these countries to correct for response bias to the extent possible. However, given the low response rates, heavier drinking and other measurements, such as the proportion of people in the low socio-economic category, may have been underestimated.

Conclusions

Disadvantaged drinkers in the participating middle-income countries were less likely to be heavier drinkers than less disadvantaged drinkers in the high-income countries. This suggests that socio-economic disadvantage operates differently in relation to heavier drinking patterns depending on country-level income. This study highlights the value of exploring cross-country differences in relation to socio-economic disadvantage and heavier drinking, and the importance of including country-level factors to better elucidate these relationships.
Neutrino and/or etherino?

We review the insufficiencies of the hypothesis that neutrinos and quarks are physical particles in our spacetime; we introduce the hypothesis that the energy and spin needed for the synthesis of the neutron inside stars originate either from the environment or from the ether conceived as a universal medium with very high energy density, via an entity here called the {\it etherino}, denoted with the letter "$a$" (from the Latin aether), carrying mass and charge $0$, spin $1/2$ and $0.78\,MeV$ energy according to the synthesis $p^+ + a + e^- \to n$; we identify compatibilities and incompatibilities of the neutrino and etherino hypotheses; we review the new structure model of the neutron and of hadrons at large, with massive physical constituents produced free in the spontaneous decays as permitted by the covering hadronic mechanics; and we conclude with the proposal of new resolutory experiments.

1. Historical notes.

In 1920, Rutherford [1a] submitted the hypothesis that hydrogen atoms in the core of stars are compressed into new particles having the size of the proton, which he called neutrons, according to the synthesis
$p^+ + e^- \to n$. (1.1)
The existence of the neutron was confirmed in 1932 by Chadwick [1b]. However, Pauli [1c] noted that the spin 1/2 of the neutron cannot be represented via a quantum state of two particles each having spin 1/2, and conjectured the possible emission of a new neutral massless particle with spin 1/2. Fermi [1d] adopted Pauli's conjecture, coined the name neutrino (meaning "little neutron" in Italian) with symbol $\nu$ for the particle and $\bar\nu$ for the antiparticle, and developed the theory of weak interactions, according to which the synthesis of the neutron is given by
$p^+ + e^- \to n + \nu$, (1.2)
with inverse reaction, the spontaneous decay of the neutron,
$n \to p^+ + e^- + \bar\nu$. (1.3)
The above hypothesis was more recently incorporated into the so-called standard model (see, e.g., [1e]), in which the original neutrino was extended to three different particles, the electron, muon and tau neutrinos and their antiparticles; neutrinos were then assumed to have masses, then to have different masses, and then to "oscillate" (namely, to change "flavor", transforming from one type into another).

2. Insufficiencies of the neutrino hypothesis.

Despite historical advances, the neutrino hypothesis has remained afflicted by a number of basic, although generally unspoken, insufficiencies or sheer inconsistencies that can be summarized as follows:

1) According to the standard model, a neutral particle carrying mass and energy in our spacetime is predicted to cross very large hyperdense media, such as those inside stars, without any collision. Such a view is outside scientific reason, and was already questionable when neutrinos were assumed to be massless. The recent assumption that neutrinos have mass has rendered the view beyond the limit of plausibility, because a massive particle carrying energy in our spacetime simply cannot propagate within the hyperdense media inside large collections of hadrons without any collision.

Figure 1: A conceptual illustration of the dependence of the kinetic energy of the electron in nuclear beta decays on the direction of emission, due to the strongly attractive Coulomb interaction between the positively charged nucleus and the negatively charged electron.
2) The fundamental reaction for the production of the (electron) neutrino, Eq. (1.2), violates the principle of conservation of energy unless the proton and the electron have a kinetic energy of at least 0.78 MeV, in which case there is no energy available for the neutrino. In fact, the sum of the rest energies of the proton and the electron (938.78 MeV) is 0.78 MeV smaller than the neutron rest energy (939.56 MeV).

3) As reported in nuclear physics textbooks, the energy measured as being carried by the electron in beta decays follows a bell-shaped curve with a maximum value of the order of 0.782 MeV (depending on nuclear data). The "missing energy" has been assumed throughout the 20th century to be carried by the hypothetical neutrino. However, in view of the strongly attractive Coulomb interaction between the nucleus and the electron, explicit calculations show that the energy carried by the electron depends on the direction of emission, with a maximal value for radial emission and a minimal value for tangential emission (Figure 1). In this case, the "missing energy" is absorbed by the nucleus, again without any energy left for the neutrino.

4) The claims of "experimental detection" of neutrinos are perhaps more controversial than the theoretical aspects, for numerous reasons, such as: the lack of established validity of the scattering theory (see Figure 2); the elaboration of the data via a theory centrally dependent on the neutrino hypothesis; the presence in recent "neutrino detectors" of radioactive sources that could themselves account for the extremely few events recorded over an enormous number of total events; the lack of uniqueness of the neutrino interpretation of the experimental data, due to the existence of alternative interpretations without the neutrino hypothesis; and other aspects.

5) Numerous additional insufficiencies exist, such as: the absence of well-identified physical differentiations between the electron, muon and tau neutrinos; the excessive number of parameters in the theory, essentially capable of achieving any desired fit; the virtually null probability of the synthesis of the neutron according to Eq. (1.2) when the proton and the electron have the needed threshold energy of 0.78 MeV, due to their very small scattering cross section (about $10^{-20}$ barns) at the indicated energy; the characterization of particles by positive energies and of antiparticles by negative energies, with the consequent lack of plausibility of the conjugation from neutrino to antineutrino in the transition from Eq. (1.2) to Eq. (1.3), since the same conjugation does not exist for the proton and the electron, a compatibility condition that would instead require the decay $n \to p^+ + e^- + \nu$ (rather than $\bar\nu$); and other insufficiencies.

For additional studies on the insufficiencies or sheer inconsistencies of the neutrino hypothesis, one may consult Bagge [2a] and Franklin [2b] for alternative theories without the neutrino hypothesis; Wilhelm [2c] for additional problematic aspects; Moessbauer [2d] for problems in neutrino oscillations; Fanchi [2e] for serious biases in "neutrino experiments"; and the literature quoted therein. On historical grounds it should be noted that the original calculations in beta decays were done in the 1940s via the abstraction of nuclei as massive points, as mandated by the axioms of quantum mechanics, in which case there is indeed no dependence of the electron-nucleus Coulomb interaction on the direction of beta emission, and the neutrino hypothesis becomes necessary.
However, the abstraction of nuclei as massive points nowadays implies a violation of scientific ethics and accountability since nuclei are very large objects for particle standards. The representation of nuclei as they actually are in the physical reality then requires the abandonment of the neutrino conjectures in favor of more adequate vistas. 3. Insufficiencies of quark hypothesis. Some of the fundamental, yet generally unspoken insufficiencies or sheer inconsistencies of the assumption that quarks are physical particles in our spacetime are the following: 1) According to the standard model [5], at the time of the synthesis of the neutron according to Eq. (1.2), the proton and the electron literally "disappear" from the universe to be replaced by hypothetical quarks as neutron constituents. Moreover, at the time of the neutron spontaneous decay, Eq. (1.3), the proton and the electron literally "reappear" again. This view is beyond scientific reason, because the proton and the electron are the only permanently stable massive particles clearly established so far and, as such, they simply cannot "disappear" and then "reappear" because so desired by quark supporters. The only plausible hypothesis under Eqs. (1.2) and (1.3) is that the proton and the electron are actual physical constituents of the neutron as originally conjectured by Rutherford, although the latter view requires the adaptation of the theory to physical reality, rather than the opposite attitude implemented by quark theories. 2) When interpreted as physical particles in our spacetime, quarks cannot experience any gravity. As clearly stated by Albert Einstein in his limpid writings, gravity can only be defined in spacetime, while quarks can only be defined in the mathematical, internal, complex valued unitary space with no possible connection to our spacetime (because prohibited by the O'Rafearthaigh's theorem). Consequently, physicists who support the hypothesis that quarks are the physical constituents of protons and neutrons, thus of all nuclei, should see their body levitate due to the absence of gravity. 3) When, again, interpreted as physical particles in our spacetime, quarks cannot have any inertia. In fact, inertia can only be rigorously admitted for the eigenvalues of the second order Casimir invariant of the Poincarรฉ symmetry, while quarks cannot be defined via such a basic spacetime symmetry, as expected to be known by experts to qualify as such. Consequently, "quark masses" are purely mathematical parameters deprived of technical characterization as masses in our spacetime. 4) Even assuming that, with unknown scientific manipulations, the above inconsistencies are resolved, it is known by experts that quark theories have failed to achieve a representation of all characteristics of hadrons, with catastrophic insufficiencies in the representation of spin, magnetic moment, mean lives, charge radii and other basic features of hadrons. 5) It is also known by experts that the application of quark conjectures to the structure of nuclei has multiplied the controversies, while resolving none of them. As an example, the assumption that quarks are the constituents of protons and neutrons in nuclei has failed to achieve a representation of the main characteristics of the simplest possible nucleus, the deuteron. 
In fact, quark conjectures are unable to represent the spin 1 of the deuteron (since they predict spin zero in the ground state of two particles each having spin 1 2 ), they are unable to represent the anomalous magnetic moment of the deuteron despite all possible relativistic corrections attempted for decades (because the presumed quark orbits are too small to fit data following polarizations or deformations), they are unable to represent the stability of the neujtron when a deuteron constituent, they are unable to represent the charge radius of the deuteron, and when passing to larger nuclei, such as the zirconium, the catastrophic inconsistencies of quark conjectures can only be defined as being embarrassing. For additional references, one may consult Ref. [3a] on historical reasons preventing quarks to be physical particles in our spacetime; Ref. [3b] on a technical treatment of the impossibility for quarks to have gravity or inertia; Ref. [3c] on a more detailed presentation on the topic of this section; and Refs. [7,9g,9h] for general studies. The position adopted by the author since the birth of quark theories (see the memoir [3a] of 1981), that appears to be even more valid nowadays, is that the unitary, Mendeleev-type, SU(3)color classification of particles into families has a final character. Quarks are purely mathematical representation of a purely mathematical internal symmetry solely definable on a purely mathematical, complex-valued unitary space. As such, the use of quarks is indeed necessary for the elaboration of the theory, as historically suggested by the originator Murray Gell-Mann, but quarks are not physical particles in our spacetime. Consequently, the identification of the hadronic constituents with physical particles truly existing in our spacetime is more open than ever and carries ever increasing societal implications since the assumption that quarks are physical constituents of hadrons prevents due scientific process in the search for new clean energies so much needed by mankind, as illustrated later on. Needless to say, all alternative structure models, including those without neutrino and quark conjectures must achieve full compatibility with said unitary models of classification, in essentially the same way according to which quantum structures of atoms achieved full compatibility with their Mendeleev classification. On historical grounds, the classification of nuclei, atoms and molecules required two different models, one for the classification and a separate one for the structure of the individual element of a given family. Quark theories depart from this historical teaching because of their conception of representing with one single theory both the classification and the structure of hadrons. The view advocated in this paper is that, quite likely, history will repeat itself. The transition from the Mendeleev classification of atoms to the atomic structure required a basically new theory, quantum mechanics. Similarly, the transition from the Mendeleev-type classification of hadrons to the structure of individual hadrons will require a broadening of the basic theory, this time a generalization of quantum mechanics due tg the truly dramatic differences of the dynamics of particles moving in vacuum, as in the atomic structure, to the dynamics of particles moving within hyperdense media as in the hadronic structure. 4. Inapplicability of quantum mechanics for the synthesis and structure of hadrons. 
Pauli, Fermi, Schrödinger and other founders of quantum mechanics pointed out that the synthesis of the neutron according to Rutherford is impossible for the following reasons:

1) As indicated in Section 1, quantum mechanics cannot represent the spin 1/2 of the neutron according to Rutherford's conception, because the total angular momentum of the ground state of two particles with spin 1/2, such as the proton and the electron, must be 0.

2) The representation of synthesis (1.1) via quantum mechanics is impossible because it would require a "positive" binding-like energy, in violation of basic quantum laws requiring that all binding energies be negative, as fully established for nuclei, atoms and molecules. This is due to the fact, indicated in Section 2, that the sum of the masses of the proton and of the electron,
$m_p + m_e = 938.272\,MeV + 0.511\,MeV = 938.783\,MeV$, (4.1)
is smaller than the mass of the neutron, $m_n = 939.565\,MeV$, with a "positive" mass defect of about $0.78\,MeV$. Under these conditions all quantum equations become physically inconsistent, in the sense that mathematical solutions are indeed admitted, but the indicial equation of Schrödinger's equation no longer admits the representation of the total energy and other physical quantities with real numbers (readers seriously interested in the synthesis of hadrons are strongly encouraged to attempt the solution of any quantum bound state in which the conventional negative binding energy is turned into a positive value).

3) Via the use of the magnetic moment of the proton, $\mu_p = 2.792\,\mu_N$, and of the electron, $\mu_e = 1.001\,\mu_B$, it is impossible to reach the magnetic moment of the neutron, $\mu_n = -1.913\,\mu_N$.

4) When the neutron is interpreted as a bound state of one proton and one electron, it is impossible to reach the neutron mean life $\tau_n = 918\,s$, which is quite large by particle standards, since quantum mechanics would predict the expulsion of the electron in nanoseconds.

5) There is no possibility for quantum mechanics to represent the neutron charge radius of about $1\,F = 10^{-13}\,cm$, since the smallest predicted radius is that of the hydrogen atom, of the order of $10^{-8}\,cm$, namely some five orders of magnitude bigger than that of the neutron.

Figure 3: A schematic view of the transformation of linear into angular motion and vice versa, which could play a crucial role in the synthesis of the neutron and its stimulated decay. Note that such a transformation is outside the capabilities of the Poincaré symmetry due to its sole validity for Keplerian systems, that is, for massive points orbiting around a heavier nucleus without collisions. By comparison, the transformation of this figure requires the presence of subsidiary constraints altering the conservation laws, thus altering the very generator of the applicable symmetry.

It should be noted that the above insufficiencies of quantum mechanics generally apply to the synthesis of all hadrons, beginning with that of the neutral pion, for which the "positive binding energy" is now 133.95 MeV. The view advocated in this paper is that, rather than denying the synthesis of hadrons just because it is not permitted by quantum mechanics, a covering mechanics permitting quantitative studies of said synthesis should be built. The most visible evidence indicating the lack of exact character of quantum mechanics for the synthesis and structure of hadrons is that, unlike atoms, hadrons do not have nuclei.
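The figures quoted in points 2) and 3) can be checked with elementary arithmetic. The sketch below uses the rest energies quoted in the text and the standard conversion mu_B/mu_N = m_p/m_e (about 1836); it reproduces the 0.78 MeV deficit and shows the scale mismatch between the electron and neutron magnetic moments. It is offered only as a numerical companion to the argument, not as part of the cited references.

```python
# Elementary checks of the figures quoted in points 2) and 3) (MeV and magnetons).
m_p, m_e, m_n = 938.272, 0.511, 939.565          # rest energies used in Eq. (4.1)
print(f"m_p + m_e = {m_p + m_e:.3f} MeV")        # 938.783 MeV
print(f"'positive' mass defect = {m_n - (m_p + m_e):.3f} MeV")   # about 0.782 MeV

# Magnetic-moment scales: express the electron moment in nuclear magnetons using
# mu_B / mu_N = m_p / m_e (about 1836.15); it is roughly three orders of magnitude
# larger in magnitude than the neutron moment quoted in point 3).
mu_B_in_mu_N = 1836.15
mu_e_in_mu_N = 1.001 * mu_B_in_mu_N
mu_p, mu_n = 2.792, -1.913                       # in nuclear magnetons
print(f"mu_e ~ {mu_e_in_mu_N:.0f} mu_N vs mu_p = {mu_p} mu_N, mu_n = {mu_n} mu_N")
```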
Consequently, a mechanics that is exact for the atomic structure cannot be exact for the hadronic structure, due to the lack of a Keplerian structure which, in turn, requires the breaking of the fundamental Galilean and Lorentzian symmetries. Quantum mechanics was conceived and constructed for the representation of the trajectories of electrons moving in vacuum in atomic orbits (the so-called exterior dynamical problem), in which case the theory received historical verifications. The same mechanics cannot possibly be exact for the description of the dramatically different physical conditions of the same electron moving within the hyperdense medium inside a proton (the so-called interior dynamical problem). Such an assumption literally implies a belief in perpetual motion within a physical medium, since it implies that an electron must orbit in the core of a star with a conserved angular momentum, as requested by the quantum axiom of rotational symmetry and the angular momentum conservation law. In the final analysis, it has been established by scientific history that the validity of any given theory within given conditions is set by the results. Quantum mechanics has represented all features of the hydrogen atom in a majestic way and, therefore, the theory is exactly valid for the indicated conditions. By contrast, when extended to the structure of particles, quantum mechanics has only produced an interlocked chain of individually implausible and unverifiable conjectures on neutrinos and quarks, while having dramatic insufficiencies in the representation of particle data, besides failing to achieve final results in various other branches of science, such as nuclear physics, chemistry and astrophysics. After controversies protracted for such a long period of time, there comes a time at which the serious conduct of science requires a re-examination of the foundational theory.

5. The etherino hypothesis.

As clearly shown by the preceding analysis, the synthesis of the neutron according to Rutherford [1a] not only misses spin 1/2, as historically pointed out by Pauli [1c] and Fermi [1d], but also misses 0.78 MeV of energy. Moreover, these quantities must be acquired by the proton and electron for the synthesis to occur, rather than being "released" as in Eq. (1.2). Consequently, a central open problem in the synthesis of the neutron is the identification of where these quantities originate. The first evident answer is that the missing quantities originate from the environment in the interior of stars in which the neutron is synthesized. In fact, there is no doubt that the interior of stars can indeed supply spin 1/2 and 0.78 MeV of energy. However, due to the fundamental character of the neutron synthesis for the entire universe, serious studies should not be restricted solely to the most obvious possibility, and should instead consider all plausible alternatives, no matter how speculative they may appear at this time. Along the latter lines, we recall the hypothesis of the continuous creation of matter in the universe that has been voiced repeatedly during the 20th century. In this paper we point out that the best possible mechanism for continuous creation is precisely the synthesis of neutrons inside stars, under the assumption that the missing energy and spin originate from the ether, conceived as a universal medium with an extremely large energy density.
Far from being farfetched, the hypothesis is supported by predictably insufficient, yet significant, evidence, such as the fact that stars initiate their lives as being solely composed of hydrogen atoms, which miss the energy and spin needed for the first synthesis, that of the neutron, after which all conventional nuclear syntheses follow. Additionally, explicit calculations indicate that the immense energy needed for a supernova explosion simply cannot be explained via the sole use of conventional nuclear syntheses, thus again suggesting the possible existence of a mechanism extracting energy from the ether and transforming it into a form existing in our spacetime. It is important to point out that the notion of the ether as a universal substratum appears to be necessary not only for the characterization and propagation of electromagnetic waves, but also for the characterization and propagation of all elementary particles and, therefore, for all matter existing in the universe. The need for a universal medium for the characterization and propagation of electromagnetic "waves" is so strong as to require no study here, e.g., for waves with 1 m wavelength, for which the reduction to photons for the purpose of eliminating the ether loses credibility. The same notion of ether appears necessary also for the characterization and propagation of the electron, due to its structure as a "pure oscillation" of the ether, namely, an oscillation of one of its points without any oscillating mass as conventionally understood. Similar structures are expected for all other truly elementary particles. It should be indicated that the above conception implies that, contrary to our sensory perception, matter is totally empty and space is totally full, as suggested by the author since his high school studies [4a]. This conception is necessary to avoid the "ethereal wind" [4b] that delayed studies on the ether for at least one century, since the motion of matter would merely require the transfer of the characteristic oscillations from given points of the ether to others. Mass is then characterized by the known equivalence of the energy of the characteristic oscillations, and inertia is the resistance provided by the ether against changes of motion. For additional recent views on the ether we refer interested readers to Ref. [4c]. In order to conduct quantitative studies of the above alternatives, in this note we submit the hypothesis, apparently for the first time, that the synthesis of the neutron from protons and electrons occurs via the absorption, either from the environment inside stars or from the ether, of an entity here called the etherino (meaning "little ether" in Italian) and indicated with the symbol "a" (from the Latin aether), having mass and charge 0, spin 1/2 and a minimum of 0.78 MeV of energy. We reach in this way the following etherino hypothesis on the neutron synthesis:
$p^+ + a_n + e^- \to n$, (5.1)
where $a_n$ denotes the neutron etherino (see below for other cases), and the energy 0.78 MeV is assumed to be "minimal" because of the conventional "negative" binding energy due to the attractive Coulomb interaction between the proton and the electron at short distances.
The apparent necessity of the etherino hypothesis is due to the fact that the use of an antineutrino in lieu of the etherino,
$p^+ + \bar\nu + e^- \to n$, (5.2)
would have no known physical value for various reasons, such as: 1) the proton and/or the electron cannot possibly absorb 0.78 MeV of energy and spin 1/2 from the antineutrino due to the virtually null value of their cross section; 2) being an antiparticle, the antineutrino has to carry negative energy [7c], while the synthesis of the neutron requires positive energy; and others. In the author's view, a compelling aspect supporting the etherino hypothesis is the fact that the synthesis of the neutron has the highest probability when the proton and the electron are at relative rest, while the same probability becomes essentially null when the proton and the electron have the (relative) missing energy of 0.78 MeV, since in that case their cross section becomes very small (about $10^{-20}$ barns). Another argument supporting the etherino over the neutrino hypothesis is that the former permits quantitative studies of the synthesis of the neutron, as we shall see in subsequent sections, while the latter provides none, as shown in the preceding section. Still another supporting argument is that the etherino hypothesis eliminates the implausible belief that massive particles carrying energy in our spacetime can traverse enormous hyperdense media without collisions, since the corresponding etherino event could occur in the ether as a universal substratum, without any propagation of mass and energy in our spacetime. In order to prevent the invention of additional hypothetical particles on top of an already excessive number of undetected particles in contemporary physics, the author would like to stress that the etherino is not intended to be a conventional particle existing in our spacetime, but an entity representing the transfer of the missing quantities from the environment or the ether to the neutron. The lack of characterization as a conventional physical particle will be made mathematically clear in the next sections. It is evident that the etherino hypothesis requires a reinspection of the spontaneous decay of the neutron. To conduct a truly scientific analysis, rather than adopt a scientific religion, it is necessary to identify all plausible alternatives and then reach a final selection via experiments. We reach in this way the following three possible alternatives:
First hypothesis on the neutron decay: the etherino hypothesis for the neutron "synthesis" can indeed be fully compatible with the neutrino hypothesis for the neutron "decay";
Second hypothesis on the neutron decay: the missing energy and spin return to the environment or the ether;
Third hypothesis on the neutron decay: no neutrino or etherino is emitted.
Note that the latter case is strictly prohibited by quantum mechanics because of known conservation laws and related symmetries. However, it should not be dismissed superficially, because it is indeed admitted by the covering hadronic mechanics via the conversion of orbital into kinetic motion as in Fig. 3. The synthesis of the antineutron in the interior of antimatter stars is evidently given by
$p^- + \bar a_n + e^+ \to \bar n$, (5.6)
where $\bar a_n$ is the antineutron antietherino, namely, an entity carrying negative energy, as apparently necessary for antimatter [7c].
This would imply that the ether is constituted by a superposition of very large but equal densities of positive and negative energies existing in different yet coexisting spacetimes, a concept with even deeper cosmological and epistemological implications, since their total null value would avoid discontinuities at creation. For the synthesis of the neutral pion we have the corresponding hypothesis (5.7), where $a_{\pi^0}$ is the $\pi^0$-etherino, namely, an entity carrying mass, charge and spin 0 and a minimal energy of 133.95 MeV transferred from the ether to our spacetime. Additional forms of the etherino can be formulated depending on the synthesis at hand. The understanding of synthesis (5.7) requires advanced knowledge of the modern classical and operator theories of antimatter (see monograph [7c]), because $a_{\pi^0}$ must be iso-self-dual, namely, it must coincide with its antiparticle, as is the case for the $\pi^0$. In more understandable terms, $a_{\pi^0}$ represents an equal amount of positive and negative energy, since only the former (latter) can be acquired by the electron (positron), the sign of the total energy for iso-self-dual states being that of the observer [loc. cit.]. Intriguingly, the etherino hypothesis for the neutron decay, Eq. (5.4), is not necessarily in conflict with available data from neutrino experiments, because it could provide their mere reinterpretation as a new form of communication through the ether. Moreover, in the event that the propagation of the latter event turns out to be longitudinal, as expected, it could be much faster than the speed of conventional (transversal) electromagnetic waves. In the final analysis, the reader should not forget that, when inspected at interstellar or intergalactic distances, communications via electromagnetic waves should be compared to communication by smoke signals. The search for basically new communications much faster than those via electromagnetic waves is then mandatory for serious astrophysical advances. In turn, such a search can best be done via longitudinal signals propagating through the ether. The possibility of new communications being triggered by the etherino reinterpretation of the neutrino should therefore not be aprioristically dismissed without serious study.

6. Rudiments of the covering hadronic mechanics.

While at the Department of Mathematics of Harvard University in the late 1970s, R. M. Santilli [5a] proposed the construction of a new, broader realization of the axioms of quantum mechanics under the name of hadronic mechanics, intended for the solution of the insufficiencies of conventional theories outlined in the preceding sections. The name "hadronic mechanics" was selected to emphasize the primary applicability of the new mechanics at the range of the strong interactions, since the validity of quantum mechanics at larger distances was assumed a priori. The central problem was to identify a broadening-generalization of quantum mechanics able to represent linear, local and potential interactions as well as additional contact, nonlinear, nonlocal-integral and nonpotential interactions, as expected in the neutron synthesis as well as in the deep mutual penetration and overlapping of hadrons (Figure 2). Since the Hamiltonian can only represent conventional local-potential interactions, the above condition requested the identification of a new quantity capable of representing interactions that, by conception, are outside the capability of a Hamiltonian.
Another necessary condition was the exit from the class of unitary equivalence of quantum mechanics, as a consequence of which the broader theory had to be nonunitary, namely, its time evolution had to violate the unitarity condition. The third and most insidious condition was time invariance, namely, the broader mechanics had to predict the same numerical values under the same conditions at different times. It was evident that a solution verifying the above conditions required new mathematics, e.g. new numbers, new spaces, new geometries, new symmetries, etc. A detailed search in the advanced mathematical libraries of the Cantabridgean area revealed that the needed new mathematics simply did not exist and, therefore, had to be built. Following a number of (unpublished) trials and errors, Santilli [5a] proposed a solution consisting in the representation of contact, nonlinear, nonlocal and nonpotential interactions via a generalization (called lifting) of the basic unit +1 of quantum mechanics into a function, a matrix or an operator $\hat I$ that is positive-definite like +1, but otherwise has an arbitrary functional dependence on all needed quantities, such as time $t$, coordinates $r$, momenta $p$, density $\mu$, frequency $\omega$, wavefunctions $\psi$, their derivatives $\partial\psi$, etc.,
$+1 > 0 \to \hat I(t, r, p, \mu, \omega, \psi, \partial\psi, ...) = \hat I^\dagger = 1/\hat T > 0$, (6.1)
while jointly lifting the conventional associative product $\times$ between two generic quantities $A$, $B$ (numbers, vector fields, matrices, operators, etc.) into a form admitting $\hat I$, and no longer +1, as the correct left and right unit,
$A \hat\times B = A \times \hat T \times B$, $\quad \hat I \hat\times A = A \hat\times \hat I = A$, (6.2)
for all elements $A$, $B$ of the set considered. The selection of the basic unit turned out to be unique for the verification of the above three conditions. As an illustration, whether generalized or not, the unit is the basic invariant of any theory, and the representation of non-Hamiltonian interactions within the basic unit permitted the crucial by-passing of the theorems of catastrophic inconsistencies of nonunitary theories (for a review one may inspect Section 5, Chapter 1 of Ref. [7a]). Since the unit is the ultimate pillar of all mathematical and physical formulations, liftings (6.1) and (6.2) require the corresponding lifting of the totality of the quantum formalism, yielding the dynamical equations referred to below as Eqs. (6.3)-(6.8), where: Eq. (6.3b) represents the crucial nonunitarity-isounitary property, namely, the violation of unitarity on conventional Hilbert spaces over a field and its reconstruction on iso-Hilbert spaces over isofields with inner product $<\hat\psi|\hat\times|\hat\psi>$; Eqs. (6.3b) use the notion of isoexponentiation (see Eq. (6.12d)); all quantities with a "hat" are formulated on isospaces over isofields with isocomplex numbers $\hat c = c \times \hat I$, $c \in C$; and one should note the isodifferential calculus, with isoderivatives proportional to the isounit and the corresponding isocanonical commutation rules. A few comments are now in order. In honor of Einstein's vision of the lack of completion of quantum mechanics, Santilli submitted the original Eqs. (6.1)-(6.8) under the name of isotopies, a word used in its Greek meaning of "preserving the original axioms." In fact, $\hat I$ preserves all topological properties of +1, $A \hat\times B$ is as associative as the conventional product $A \times B$, and the preservation of the original axioms holds at all subsequent levels, to such an extent that, in the event any original axiom is not preserved, the lifting is not isotopic. Nowadays, the resulting new mathematics is known as Santilli isomathematics, $\hat I$ is called Santilli's isounit, $A \hat\times B$ is called the isoproduct, etc. (see the General Bibliography of Ref. [7a] and monograph [8]).
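The algebraic content of liftings (6.1)-(6.2) can be illustrated numerically. The sketch below is not taken from the cited references; the matrices are toy values chosen only to show that, once the product is deformed by a positive-definite isotopic element T, its inverse (the isounit) plays the role of the left and right unit and associativity is preserved.

```python
# Numerical illustration of the lifting (6.1)-(6.2): with a positive-definite isotopic
# element T and isounit I_hat = T^{-1}, the isoproduct A x_hat B = A @ T @ B admits
# I_hat, and no longer the identity matrix, as its left and right unit. Toy values only.
import numpy as np

T = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # positive-definite isotopic element (assumed toy value)
I_hat = np.linalg.inv(T)            # isounit I_hat = 1 / T

def isoproduct(A, B):
    return A @ T @ B                # A x_hat B = A x T x B

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.eye(2)
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])

assert np.allclose(isoproduct(I_hat, A), A)   # I_hat x_hat A = A
assert np.allclose(isoproduct(A, I_hat), A)   # A x_hat I_hat = A
# The deformed product remains associative: (A x_hat B) x_hat C == A x_hat (B x_hat C)
assert np.allclose(isoproduct(isoproduct(A, B), C), isoproduct(A, isoproduct(B, C)))
```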
Note the identity of Hermiticity and its isotopic image, $(<\hat\psi|\hat\times\hat H^{\hat\dagger})\hat\times|\hat\psi> \equiv <\hat\psi|\hat\times(\hat H\hat\times|\hat\psi>)$, $\hat H^{\hat\dagger} \equiv \hat H^\dagger$, thus implying that all quantities that are observable for quantum mechanics remain observable for hadronic mechanics; the new mechanics is indeed isounitary, thus avoiding the theorems of catastrophic inconsistencies of nonunitary theories; hadronic mechanics preserves all conventional quantum laws, such as Heisenberg's uncertainties, Pauli's exclusion principle, etc.; dynamical equations (6.3)-(6.8) have been proved to be "directly universal" for all possible theories with conserved total energy, that is, capable of representing all infinitely possible systems of the class admitted (universality) directly in the frame of the observer without the use of transformations (direct universality); and numerous other features can be studied in Refs. [6-8]. Also, one should note that hadronic mechanics verifies the abstract axioms of quantum mechanics to such an extent that the two mechanics coincide at the abstract, realization-free level. In reality, hadronic mechanics provides an explicit and concrete realization of the theory of "hidden variables" $\lambda$, as one can see from the abstract identity of the isoeigenvalue equation $\hat H\hat\times|\hat\psi> = \hat E\hat\times|\hat\psi>$ and the conventional equation $H\times|\psi> = E\times|\psi>$, providing in this way an operator realization of hidden variables, $\lambda = \hat T$ (for detailed studies on these aspects, including the inapplicability of Bell's inequality, see Ref. [6g]). We should also indicate that the birth of hadronic mechanics can be seen in a new isosymmetry, namely the invariance of the conventional inner product under the joint lifting of the unit and of the product, here expressed for a constant $K$ for simplicity. The reader should not be surprised that this isosymmetry remained unknown throughout the 20th century: its identification required the prior discovery of new numbers, Santilli's isonumbers with arbitrary units [5b]. Compatibility between hadronic and quantum mechanics is reached via the condition
$\lim_{r \gg 10^{-13}\,cm} \hat I \equiv 1$, (6.10)
under which hadronic mechanics recovers quantum mechanics uniquely and identically at all levels. Therefore, hadronic mechanics coincides with quantum mechanics everywhere except in the interior of the so-called hadronic horizon (a sphere of radius $1\,F = 10^{-13}\,cm$), in which the new mechanics admits non-Hamiltonian realizations of the strong interactions. A simple method has been identified in Refs. [5d] for the construction of hadronic mechanics and all its underlying new mathematics, consisting of: (i) representing all conventional interactions with a Hamiltonian $H$ and all non-Hamiltonian interactions and effects with the isounit $\hat I$; (ii) identifying the latter interactions with a nonunitary transform
$U \times U^\dagger = \hat I \neq I$; (6.11)
(iii) subjecting the totality of conventional mathematical, physical and chemical quantities and all their operations to the above nonunitary transform, resulting in expressions of the type of the isounit, isonumbers and isoproduct recalled above (Eqs. (6.12)). Note that catastrophic inconsistencies emerge in the event that even one single quantity or operation is not subjected to the isotopy. In the absence of comprehensive liftings, we would have a situation equivalent to the elaboration of the quantum spectral data of the hydrogen atom with isomathematics, resulting in dramatic deviations from reality. It is easy to see that the application of an additional nonunitary transform to expressions (6.12) causes the loss of invariance, Eq. (6.13), with consequential activation of the theorems of catastrophic inconsistencies [7a].
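The construction rule (i)-(iii) can likewise be checked numerically. In the sketch below (toy matrices, not from the cited references), a single nonunitary transform U defines the isounit and the isotopic element, and mapping both quantities and products through it reproduces the image of the conventional product; applying a second, different nonunitary transform would change the numerical values, which is the lack of invariance referred to in Eq. (6.13).

```python
# Sketch of the construction rule (i)-(iii): pick a nonunitary U, set I_hat = U U^dagger
# and T_hat = I_hat^{-1}; then lifting every quantity as A -> A_hat = U A U^dagger and
# every product as A_hat x_hat B_hat = A_hat T_hat B_hat reproduces U (A B) U^dagger.
import numpy as np

rng = np.random.default_rng(0)
U = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # generic nonunitary transform
I_hat = U @ U.conj().T                                       # isounit as in (6.11), != identity
T_hat = np.linalg.inv(I_hat)                                 # isotopic element

A = rng.normal(size=(2, 2))
B = rng.normal(size=(2, 2))
A_hat = U @ A @ U.conj().T
B_hat = U @ B @ U.conj().T

lhs = A_hat @ T_hat @ B_hat          # isoproduct of the lifted quantities
rhs = U @ (A @ B) @ U.conj().T       # lifted image of the conventional product
assert np.allclose(lhs, rhs)
```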
However, any given nonunitary transform can be identically rewritten in an isounitary form, under which hadronic mechanics is indeed isoinvariant. Note that the invariance is ensured by the numerically invariant values of the isounit and of the isotopic element under nonunitary-isounitary transforms, $\hat I \to \hat I' \equiv \hat I$ and $\hat T \to \hat T' \equiv \hat T$, in a way fully equivalent to the invariance of quantum mechanics, as is necessarily the case due to the preservation of the abstract axioms under isotopies. The resolution of the catastrophic inconsistencies of noninvariant theories is then consequential. Hadronic mechanics nowadays has clear experimental verifications in particle physics, nuclear physics, superconductivity, chemistry, astrophysics, cosmology and biology (see monographs [7,8,9h] for details), verifications that cannot possibly be reviewed here. We merely mention, for subsequent need, the following realization of the isounit for two particles in conditions of mutual penetration,
$\hat I = \mathrm{Diag}(n_{11}^2, n_{12}^2, n_{13}^2, n_{14}^2) \times \mathrm{Diag}(n_{21}^2, n_{22}^2, n_{23}^2, n_{24}^2) \times e^{N \times \int dr^3\, \hat\psi^\dagger(r) \times \psi(r)}$, (6.16)
where $n_{ak}^2$, $a = 1, 2$, $k = 1, 2, 3$, are the semiaxes of the ellipsoids representing the two particles, $n_{a4}$, $a = 1, 2$, represents their density, $\hat\psi$ represents the isowavefunction, $\psi$ represents the conventional wavefunction (that for $\hat I = 1$), and $N$ is a positive constant. Note the clearly nonlinear, nonlocal-integral and nonpotential character of the interactions represented by isounit (6.16). The use of the above isounit permitted R. M. Santilli and D. Shillady to reach the first exact and invariant representation of the main characteristics of the hydrogen, water and other molecules, said representation being achieved directly from first axiomatic principles without ad hoc parameters or adulterations via screenings of the Coulomb law, under which the notion of quantum loses any physical or mathematical meaning, thus rendering questionable the very name of "quantum chemistry" (see [7b] for details). In reality, due to its nonunitary structure, hadronic chemistry contains as particular cases all infinitely possible screenings of the Coulomb law. To understand these results, one should note that quantum mechanics is indeed exact for the structure of one hydrogen atom, but the same mechanics is no longer exact for two hydrogen atoms combined into the hydrogen molecule, due to the historical inability to represent the last 2% of the binding energy, as well as other insufficiencies. The resolution of these insufficiencies was achieved by hadronic chemistry [7b] precisely via isounit (6.16), namely, via the time-invariant representation of the nonlinear, nonlocal and nonpotential interactions occurring in the deep overlapping of the wavepackets of electrons in valence bonds. The new structure model of hadrons presented below is essentially an application in particle physics of these advances achieved in chemistry.
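The behaviour of overlap-dependent isounits such as (6.16) and (7.1) can be illustrated with a schematic calculation. The sketch below is an assumption-laden stand-in (one-dimensional Gaussian wavepackets and an arbitrary constant N), intended only to show that the exponent vanishes once the wavepackets no longer overlap, so the isounit returns to 1 and quantum mechanics is recovered, as in condition (6.10).

```python
# Schematic evaluation of an overlap-type isounit: the exponent is an integral of the
# product of the two particles' wavepackets, so the isounit collapses to 1 once the
# packets are well separated. Gaussian packets and the constant N are assumptions.
import numpy as np

def isounit_overlap(separation_fm: float, width_fm: float = 0.5, N: float = 1.0) -> float:
    # 1D stand-in for the volume integral of psi1(x) * psi2(x), with normalized Gaussians
    # a distance `separation_fm` apart.
    x = np.linspace(-20.0, 20.0, 4001)
    dx = x[1] - x[0]
    norm = (1.0 / (2.0 * np.pi * width_fm ** 2)) ** 0.25
    psi1 = norm * np.exp(-x ** 2 / (4.0 * width_fm ** 2))
    psi2 = norm * np.exp(-(x - separation_fm) ** 2 / (4.0 * width_fm ** 2))
    overlap = float(np.sum(psi1 * psi2) * dx)
    return float(np.exp(N * overlap))

print(isounit_overlap(0.5))    # deep overlap ("inside the hadronic horizon"): isounit well above 1
print(isounit_overlap(10.0))   # well separated: overlap ~ 0, isounit ~ 1, quantum mechanics recovered
```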
In this section, we show that hadronic mechanics permits the exact and (time) invariant representation of "all" characteristics of the neutron as a new bound state of a proton and an electron suitable for experimental verifications; we extend the results to the new structure model of all hadrons with massive physical constituents that can be produced free in the spontaneous decays; we show the compatibility of these advances with the standard model when restricted to provide only the final Mendeleevtype classification of hadrons; and we show the capability of hadronic mechanics as being the sole theory capable of permitting quantitative representations of the possible interplay between matter and the ether, since the latter requires "positive" binding-like energies for which quantum mechanics admits no physically consistent solutions. When the Schrรถdinger-Santilli isoequation is worked out in detail for a bound state of two particles with spin in condition of total mutual penetration, there is the emergence of a strongly repulsive interaction for triplet couplings (parallel spins), and a strongly attractive interaction for singlet coupling (antiparallel spin). It should be indicated that, to prevent a prohibitive length, this section is primarily dedicated to the "synthesis" of (unstyable) hadrons, while their spontaneous decays is treated elsewhere. The case of interest here is the lifting of the Schrรถdinger equation for a conventional twobody bound state (such as the positronium or the hydrogen atom) via isounit (6.16) where both particles are assumed to be spheres of radius 1F for simplicity. Hence, we consider the simple lifting of quantum bound states characterized by the following Animalu-Santilli isounit [7a]h = 1 โ†’รŽ = U ร— U โ€  = e kร—(ฯˆ/ฯˆ)ร— dr 3 ร—ฯˆ โ€  โ†“ (r)ร—ฯˆ โ†‘ (r) , (7.1) where ฯˆ is the wavefunction of the quantum state andฯˆ is that of the corresponding hadronic state. In all cases of singlet coupling the lifting yields a strongly attractive Hulten potential that, as well known, behaves at short distances like the Coulomb potential, thus absorbing the latter, and resulting in the expressions achieved in the original proposal [5a] (for reviews see [7a,7b,9g]) (7.2) where the original Coulomb interaction has been absorbed by the Hulten constant V o and the liftings m k โ†’m k , k = 1, 2, are new mass isorenormalizations, that is, renormalizations caused by non-Hamiltonian (or non-Lagrangian) interactions. Needless to say, the isorenormalization of the mass implies that of the remaining intrinsic characteristics of particles. This assures the departure from quantum mechanics as necessary for the problem at hand. Detailed studies have shown that the constituents of a bound state described by hadronic mechanics are no longer irreducible representations of the conventional Poincarรฉ symmetry (a necessary departure due to the lack of a Keplerian structure indicated earlier), and are characterized instead by irreducible isorepresentations of the Poincarรฉ-Santilli isosymmetry [6]. For this reason they are called isoparticles and are denoted with conventional symbols plus a "hat", such asรช ยฑ ,ฯ€ ยฑ ,p ยฑ , etc. The mechanism permitting physically consistent equations for two-body bound states requiring a "positive" binding-like energy, as it is the case for the ฯ€ o and the neutron, is due to the mass isorenormalization since it achieves such an increased value under which the Hulten binding energy can be negative. 
In fact, for the case of the ฯ€ o according to synthesis (7.2) the isorenormalized masses of the individual isoelectrons become of the order of 70MeV , while for the case of the synthesis of the neutron according to Eq. (7.2), the isonormalized mass of the electron (assuming that the proton is at rest) acquires a value of the order of 1.39MeV , thus allowing a negative binding energy in both cases. Via the use of hadronic mechanics, the original proposal [5a] achieved already in 1978 a new structure model of ฯ€ o meson as a compressed positronium, thus identifying the physical constituents with ordinary electron and positrons although in an altered state caused by their condition of total mutual penetration. This permitted the numerical, exact and invariant representation of all characteristics of the ฯ€ o via the following single structural equation e โˆ’r 12 ร—b 1 โˆ’ e โˆ’r 12 ร—b ร—|ฯˆ = E ร—|ฯˆ (7.3a) m k = e = 0.511M eV, k = 1, 2, E = 134.97M eV, ฯ„ = 8.4x10 โˆ’17 s, R = b โˆ’1 = 10 โˆ’13 cm. (7.3b) where the latter expressions are subsidiary constraints on the former (see [7a,7b,9g] for reviews). The above results were extended in the original proposal [5a] to all mesons resulting in this way in a structure model of all mesons with massive physical constituents that can be produced free in the spontaneous decays, generally those with the lowest mode, that are nowadays represented with the symbols ฯ€ o = (รช + ,รช โˆ’ ) HM , ฯ€ ยฑ = (ฯ€ o ,รช ยฑ ) HM = (รช + ,รช ยฑ ,รช โˆ’ ) HM , K o = (ฯ€ + ,ฯ€ โˆ’ ) HM , etc., (7.4) where e, ฯ€, K, etc. represent conventional particles as detected in laboratory andรช,ฯ€,K, etc. represent isoparticles, namely, the alteration of their intrinsic characteristics (called mutation [5a]) caused by their deep mutual penetration. More technically, conventional particles are charactgerized by unitary irreducible representations of the Poincarรฉ symmetry, while isoparticles are characterized by the isounitary irreducible representations of the covering Poincarรฉ-Santilli isosymmetry [6]. A few comments are now in order. Firstly, we should note the dramatic departures of the above structure models from conventional trends in the Mendeleev-type classification of hadrons. To begin, when dealing with classification the emphasis is in searching for "mass spectra." On the contrary, structure model of type (7.4) are known to be spectra suppressing. In essence, the Hulten potential is known to admit only a finite spectrum of energy level. When all conditions (7.3b) are imposed, the energy levels reduces to only one, that specifically and solely for the meson considered. Needless to say, excited states do exist, but are of quantum type, that is, whenever the constituents are excited, they exit from the "hadronic horizon" because isounit (7.1) reduces to 1, and quantum mechanics is recovered identically. Consequently, the excited states of structure model (7.3) for the ฯ€ o are given by the infinite energy levels of the positronium. An additional dramatic departure from classification trends is given by the number of constituents. According to the standard model, in the transition from one hadron to another of a given family (such as in the transition from ฯ€ o to ฯ€ + ) the number of quark constituents remain the same. 
On the contrary, according to hadronic mechanics, the number of constituents must necessarily increase with the increase of the mass, exactly as it is the case for the atomic (nuclear) structure in which the number of constituents increases in the transition from the H to the He atom (from proton to the deuteron). The model also achieved a representation of the spontaneous decays with the lowest mode that is generally interpreted as a tunnel effect of the constituents through the hadronic horizon (rather than the particles being "created" at the time of the decay as requested by the standard model). The remaining decay are the results of rather complex events under non-Hamiltonian interactions still under investigation at this writing. The representation of Rutherford's synthesis of the neutron, Upon completion of these efforts, Santilli achieved in Ref. [9a] of 1990 the first known, numerically exact and time invariant nonrelativistic representation of all characteristics of the neutron as a hadronic bound state of a proton assumed to be un-mutated and a mutated electron (or isoelectron) n = (p + ,รช โˆ’ ) HM , (7.5) via the following single structural equation representing the compression of the hydrogen atoms below to the hadronic horizon exactly as originally conceived by Rutherford e โˆ’r 12 ร—b 1 โˆ’ e โˆ’r 12 ร—b ร— |ฯˆ = E ร— |ฯˆ (7.6a) ยต e = 0.511M eV, , ยต p = 938, 27M eV, E = 939.56M eV, ฯ„ = 886s, R = 10 โˆ’13 cm. (7.6b) The relativistic extension of the above model was reached in Ref. [9b] of 1993 (see also [6e]) via the isotopies of Dirac's equation, and cannot be reviewed here to avoid a prohibitive length. Remarkably, despite the disparities between Eqs. (7.3) and (7.6), the Hulten potential admitted again one single energy level, that of the neutron. Under excitation, the isoelectron exits the hadronic horizon (again, because the integral in Eq. (7.1) becomes null) and one recovers the quantum description. Consequently, according to structure model (7.6), the excited states of the neutron are the infinite energy levels of the hydrogen atom. The representation of the spin 1/2 of the neutron turned out to be much simpler than expected, as outlined in Figure 4. In particular, the hadronic representation of the synthesis of the neutron does not require any neutrino at all, exactly as originally conceived by Rutherford, of course, not at the quantum level, but at the covering hadronic level. Once compressed inside the proton, in order to have an attractive bond, the electron is constrained to have its spin antiparallel to that of the proton and, in order to achieve a stable state, the electron orbital momentum is constrained to coincide with the spin 1/2 of the proton ( Figure 4), resulting in the following representation of the spin of the neutron namely, it the total angular momentum of the isoelectron is null, s tot e = s spin e + s orb e = 0, (7.8) and the spin of the neutron coincides with that of the proton. It should be noted that a fractional value of the angular momentum is anathema for quantum mechanics, namely, when defined over a conventional Hilbert space H over the field of complex numbers C (because it causes a departure from its nonunitary structure with a host of problems), while the same fractional value is fully admissible for hadronic mechanics, namely when defined on an iso-Hilbert space โˆง H over an isofield โˆง C in view of its isounitary structure. 
As a simple example, under the isounit and isotopic elementsรŽ = 1 2 ,T = 2 and isonormalization <ฯˆ| ร—T ร— |ฯˆ >= 1 the half-off-integer angular momentumฤด 3 = 1 2 admits the isoexpectation value 1, The above occurrence should not be surprising for the reader familiar with hadronic mechanics. In fact, the sole admission of conventional values of the angular momentum would imply the admission of the perpetual motion for, say, an electron orbiting in the core of a star. In the transition from motion in a quantized orbit in vacuum in an atomic structure to motion within the core of a star, the angular momentum assumes an arbitrary, locally varying value. The only reason for the orbital value 1 2 for the neutron is the existence of the constraint restricting the angular momentum of the isoelectron to coincide with the spin of the proton (Figure 4). The magnetic moment of Rutherford's neutron is characterized by three contributions, the magnetic moment of the proton, that of the isoelectron, and that caused by the orbital motion of the isoelectron. Note that for quantum mechanics the third contribution is completely missing because all particles are considered as points, in which case the electron cannot rotate inside the proton. As well known, the inability by quantum mechanics to treat the orbital motion of the electron inside the proton (due to its point-like character) was the very origin of the conjecture of the neutrino. With reference to the orientation of Figure 4, and by keeping in mind that a change of the sign of the charge implies a reversal of the sign of the magnetic moment, the representation of Ref. (9a) is based on the identity (7.10) Since the spin of the proton and of the electron can be assumed to be conventional in first approximation, we can assume that their intrinsic magnetic moments are conventional, i.e., (7.11) consequently ยต p + ยต e = 1, 835ยต N , (7.12) It is then evident that the anomalous magnetic moment of the neutron originates from the magnetic moment of the orbital motion of the isoelectron inside the proton, namely, a contribution that has been ignored since Rutherford's time until treated in Ref. [9a]. It is easy to see that the desired exact and invariant representation of the anomalous magnetic moment of the neutron is characterized by the following numerical values ยตรช โˆ’orbital = +1.004ยต B , ยตรช โˆ’total = 3 ร— 10 โˆ’3 ยต B , ยต n = โˆ’1.9123ยต N , (7.13) and this completes our nonrelativistic review. Note that the small value of the total magnetic moment of the isoelectron is fully in line with the small value of its total angular momentum (that is null in first approximation due to the assumed lack of mutation of the proton). We regret to be unable to review the numerically exact and time invariant relativistic representation of the anomalous magnetic moment of the neutron [6e,9b] because it provides much deeper insights than the preceding one with particular reference to the density of hadrons, n 2 4 , in isounit (6.16), that is completely absent in conventional treatment via the standard model. In fact, the numerical value of n 4 obtained from the fit of the data on the fireball of the Bose-Einstein correlation permits the exact representation of the neutron anomalous magnetic moment without any additional quantity or unknown parameters. 
As one can see, the spontaneous decay of the neutron into physical, actually observed particles is given by n = (p + ,รช โˆ’ ) HM โ†’ p + + e โˆ’ , (7.14) as a mere tunnel effect of the massive constituents through the hadronic horizon, after which particles reacquire their conventional quantum characteristics. Assuming that the neutron and the proton are isolated and at rest in the spontaneous decay, the electron is emitted with 0.782M eV energy. When the decaying neutron is a member of a nuclear structure, the energy possessed by the electron is generally less than 0.782M eV and varies depending on the angle of emission, as indicated in Section 2. The behavior of the angular momentum for reaction (7.14) can be interpreted at the level of hadronic mechanics and related Poincarรฉ-Santilli isosymmetry via the transformation of the orbital into linear motions without any need for the neutrino, in the same way as no neutrino is needed for the neutron synthesis . The extension of the model to all baryons resulted to be elementary, with models of the type n = (p + ,รช โˆ’ ) HM โ‰ˆ (p + ,รช โˆ’ ) HM , ฮ› = (p + ,ฯ€ โˆ’ ) HM โ‰ˆ (n,ฯ€ o ) HM , (7.15a) (7.15) where one should note the equivalence of seemingly different structure models due to the indicated mutation of the constituents. Compatibility of the hadronic structure models with the SU (3)-color Mendeleev-type classification was first suggested in Ref. [5d] and resulted to be possible in a variety of ways, such as, via a multivalued hyperunit [7a] consisting of a set of isounits each characterizing the structure of one individual hadrons in a given unitary multiplet The lifting of SU(3)-color symmetries under the above hyperunit is isomorphic to the conventional symmetry due to the positive-definiteness of the hyperunit, thus ensuring the preservation of all numerical results of the Mendeleev-type classifications due to the preservation of the structure constants, Eqs. (6.12e). In closing, the reader may have noted the dichotomy between the etherino hypothesis of Section 5 for the synthesis of the neutron and its absence in this section for the same topic. This is due to the fact that the lifting calH โ†’ฤค implicitly represents the absorption of the needed spin and energy from the ether or from the environment (such as the interior of a star), thus clarifying that, unlike the neutrino, the etherino is not a physical particle in our spacetime, but merely represents the indicate transfer of features. We therefore have the following equivalence n = (p + , a o , e โˆ’ ) QM โ‰ˆ (p + ,รช โˆ’ ) HM , (7.18) with the understanding that the Schrรถdinger equation is physically inconsistent for the QM formulation, while its isotopic image is fully consistent. We can therefore conclude by saying that hadronic mechanics is the first and only theory known to the author for quantitative invariant studies of the interplay between matter and the background medium, whether the ether or the hyperdense medium inside hadrons. 8. New clean energies permitted by the absence of neutrinos and quarks. Molecular, atomic and nuclear structures have provided immense benefits to mankind because their constituents can be produced free. Quark theories on the structure of hadrons have no practical value whatever because, by comparison, quarks by conception cannot be produced free. 
On the contrary, the structure model of hadrons with physical constituents that can be produced free allows the prediction of new clean energies originating in the structure of individual hadrons, rather than in nuclei, todays known as hadronic energies [9c], that could provide the first industrial application of hadron physics. In fact, the neutron is the biggest reservoir of clean energy available to mankind because: 1) The neutron is naturally unstable; 2) When decaying it releases a large amount of energy carried by the emitted electron; and 3) Such energy can be easily trapped with a thin metal shield. Moreover, hadronic energy is two-fold because, when the decay of the neutron occurs in a conductor, the latter acquires a positive charge while the shield trapping the electron acquires a negative charge, resulting in a new clean production of continuous current originating in the structure of the neutron first proposed by Santilli in Ref. [9c] and today known as hadronic battery [8]. The second source of energy is thermal and it is given by the heat acquired by the shield trapping the emitted electrons. Recall that, unlike the proton, the neutron is naturally unstable. Consequently, it must admit a stimulated decay. That predicted by hadronic mechanics was first proposed by Santilli [9c] in 1994 and it is given by hitting a selected number of nuclear isotopes, called hadronic fuels, with hard photons ฮณ res having energy (frequency) given by a submultiple of the difference of energy between the neutron and the proton 1a) E res = 1.294 n MeV, n = 1, 2, 3, ..., (8.1b) Figure 5: The view illustrates a "hadronic fuel", the MO(100,42), that, when hit by a neutron resonating frequency, is predicted to experience a stimulated decay into an unstable isotope that, in turn, decays spontaneous into a final stable isotope with the total emission of two highly energetic electrons, thus realizing the conditions of Figure 5. under which the isoelectron is predicted to be excited and consequently cross the 1F hadronic horizon, resulting in the stimulated decay ฮณ res + n โ†’ p + + e โˆ’ . (8.2) The energy gain is beyond scientific doubt, because the use of 1/10-th of the exact resonating frequency (8.1b) could produce clean energy up to 100-times the original value, depending on the energy of the released betas. Note that the energy of photons not causing stimulated decay is not lost, because absorbed by the hadronic fuel, thus being part of the heat balance. One among numerous cases of hadronic energy proposed for test in Ref. [5c] is given by 3a) T c(100, 43) โ†’ Ru(100, 44) + ฮฒ โˆ’ , (8.3b) where the first beta decay is stimulated while the second is natural and occurs in 18 sec. Note that the conventional nuclear energy is based on the disintegration of large and heavy nuclei, thus causing well known dangerous radiations and leaving dangerous waste. By comparison, hadronic energy is based on the use of light nuclei as in thecase of Eqs. (8.3), thus releasing no harmful radiation and leaving no harmful waste because both the original nucleus Mo(100, 42) and the final one Ru(100,44) are natural, light and stable elements (for additional studies, see [9g]). As proposed in Ref. [9d], the above stimulated decay of the neutron could be of assistance to conventional nuclear energy since it would allow the stimulated decay of radioactive nuclear waste via the use of a coherent beam of resonating gammas, plus additional feature, such as high intensity electric fields. 
These conditions would cause a sudden increase ofd positive charges plus an ellipsoidal deformation of large nuclei under which their decay is unavoidable. The latter equipment is predicted to be sufficiently small to be used by nuclear power plants in their existing pools, thus rendering conventional nuclear energy more accepted by society. 9. The much needed experimental resolutions. The international physics community has spent to date in neutrino and quark conjectures well in excess of ten billion dollars of public funds from various countries while multiplying, rather than resolving the controversies as indicated in Sections 2 and 2. It is evident that the physics community simply cannot continue this trend without risking a historical condemnation by posterity. When possible new energies so much needed by mankind emerge from alternative theories without neutrinos and quarks as physical particles in our spacetime, the gravity of the case emerge in its full light. In the hope of contributing toward the much needed expriental resolutions, in this paper we propose the following basic tests. Proposed First Experiment: to resolve whether or not there is energy available for the neutrino in the neutron decay. This experiment can be done today in numerous ways. That recommended in this note, apparently for the first time, is to conduct systematic measurements of the energy of the electron emitted in the decay of a coherent beam of low energy (e.g., thermal) neutrons as depicted in Fig. 6. The detection of energies of the electrons systematically less than 0.78MeV (plus the neutron energy), would eliminate the third hypothesis on tyhe neutron decay, Eq. (5.5), and support the firsty and second hypotheses, Eqs. (5.3) and (5.4), but would be unable to distinguish between them. The detection of electron energy systematically given by 0.78MeV (plus the neutron energy) would disprove the emission of a neutrino or an etherino in neutron decays, and support the third hypothesis, Eq. (5.) in clear favbor ofg the continuous creation of matter in the unioverse. Note that the conduction of the proposed test with "high energy" neutrons would not be resolutory because the variation of the electron energy expected to be absorbed by the neutrino would be excessively smaller than the electron energy. The conduction of the test via nuclear beta decays is also not recommendable due to the indicated expected dependence of the electron energy from the direction of beta emission, which dependence is ignorable for the case of decay of individual neutrons. The author has been unable to identify in the literature any conduction of the proposed trest since all available experiments refer to nuclear beta decays rather than that of individual neutrons. Any indication by interfested colleagues of specific reference to tests similar to that herein proposed would be gratefully appreciatred. I Proposed Second Experiment: Achieve the laboratory synthesis of the neutron to identify the needed energy. The first attempt at synthesizing the neutron in laboratory known to this author was conducted with positive outcome in Brazil by the Italian priest-physicist Don Borghi and his associates [9e] (see [3b] for a review). The tests were apparently successful, although the experimental set up does not allow the measurement of theenergy needed for the synthesis. The latter information can be obtained nowadays in a variety of ways. 
That recommended in this note, consists in sending a coherent electron beam against a beryllium mass saturated with hydrogen and kept at low temperature (so that the protons of the hydrogen atoms can be approximately considered to be at rest). A necessary condition for credibility is that said protons and electrons be polarized to have antiparallel spins (singlet couplings), because large repulsions are predicted for triplet couplings at very short distances for particles with spin, as it is the case for the coupling of ordinary gears. Since the proton and the electron have opposite charges, said polarization can be achieved with the same magnetic field as illustrated in Fig. 7. Neutrons that can possibly be synthesized in this way will escape from the beryllium mass and can be detected with standard means. The detection of neutron produced with electron kinetic energies systematically in excess of 0.78MeV would confirm the neutrino hypothesis. The systematic detection of neutrons synthesized either at the threshold energy of 0.78MeV or less would support alternative hypotheses, such as that of the etherino, and render polausible the hypothesis of continuous creation of matter in the universe via the neutron synthesis as studied in Section 5. Proposed Third Experiment: Test the stimulated decay of the neutron as a source for new clean energies. The test of the stimulated decay of the neutron proposed in Ref. [9c], Eqs. (8.3) and Figure 5, was successfully conducted by N. Tsagas and his associates [9f] (see Ref. [9g for a review and upgrading). As illustrated in Fig. 6, the latter experiment was conducted via a disk of Europa 152 (emitting photons precisely with the needed resonating frequency) coupled to a disc of molybdenum, the pair being contained inside a scintilloscope for the detection of the emitted electrons, the experimental set up being suitably shielded, as customary. The test was successful because it detected electrons emitted by the indicated pair with energy bigger than 1 MeV since electrons from the Compton scattering of photons and atomic electrons can at most have 1 MeV . the same test can be repeated in a variety of way with different hadronic fuels (see [9c] for alternatives). Needless to say, despite their positive outcome, the available results on the proposed three tests are empirical, rudimentary and inconclusive. Rather than being reasons for dismissal, these insufficiencies establish instead the need for the finalization of the proposed basic experiments, also in view of their rather large scientific and societal relevance.
Albumin and interferon-ฮฒ fusion protein serves as an effective vaccine adjuvant to enhance antigen-specific CD8+ T cell-mediated antitumor immunity Background Type I interferons (IFN) promote dendritic cells maturation and subsequently enhance generation of antigen-specific CD8 +T cell for the control of tumor. Using type I interferons as an adjuvant to vaccination could prove to be a potent strategy. However, type I interferons have a short half-life. Albumin linked to a protein will prolong the half-life of the linked protein. Methods In this study, we explored the fusion of albumin to IFNฮฒ (Alb-IFNฮฒ) for its functional activity both in vitro and in vivo. We determined the half-life of Alb-IFNฮฒ following treatment in the serum, tumor, and tumor draining lymph nodes in both wild type and FcRn knockout mice. We characterized the ability of Alb-IFNฮฒ to enhance antigen-specific CD8+ T cells using ovalbumin (OVA) or human papillomavirus (HPV) E7 long peptides. Next, we evaluated the therapeutic antitumor effect of coadministration of AlbIFNฮฒ with antigenic peptides against HPVE7 expressing tumor and the treatmentโ€™s ability to generate HPVE7 antigen specific CD8+ T cells. The contribution of the antitumor effect by lymphocytes was also examined by an antibody depletion experiment. The ability of Alb-IFNฮฒ to serve as an adjuvant was tested using clinical grade therapeutic protein-based HPV vaccine, TACIN. Results Alb-IFNฮฒ retains biological function and does not alter the biological activity of IFNฮฒ. In addition, Alb-IFNฮฒ extends half-life of IFNฮฒ in serum, lymph nodes and tumor. The coadministration of Alb-IFNฮฒ with OVA or HPVE7 antigenic peptides enhances antigen-specific CD8 +T cell immunity, and in a TC-1 tumor model results in a significant therapeutic antitumor effect. We found that CD8 +T cells and dendritic cells, but not CD4 +T cells, are important for the observed antitumor therapeutic effect mediated by Alb-IFNฮฒ. Finally, Alb-IFNฮฒ served as a potent adjuvant for TA-CIN for the treatment of HPV antigen expressing tumors. Conclusions Overall, Alb-IFNฮฒ serves as a potent adjuvant for enhancement of strong antigen-specific CD8 +T cell antitumor immunity, reduction of tumor burden, and increase in overall survival. Alb-IFNฮฒ potentially can serve as an innovative adjuvant for the development of vaccines for the control of infectious disease and cancer. INTRODUCTION Type I interferons are a major class of immune cytokines that also can be used as potent antiviral agents for the treatment of viruses such as hepatitis C. 1 2 Beyond inducing antiviral immune responses, these cytokines, including both interferon-ฮฑ (IFNฮฑ) and interferon-ฮฒ (IFNฮฒ), elicit a plethora of signals. For example, type I interferons contribute to the efficacy of anticancer therapies and have shown to mediate antineoplastic effects against many different malignancies. 3 Type I interferons also intervene in different stages of cancer immunoediting, including Open access equilibrium between the immune system and malignant cells, as well as during the phase in which neoplastic cell variants escape due to compromised immune systems. 3 Most importantly, type I interferons have been shown to trigger dendritic cell (DC) maturation and migration toward lymph nodes (LNs), both of which are important in cross-priming cytotoxic immune responses. 
3 During antigen presentation by plasmacytoid DCs (pDCs), a high level of type I interferon secretion is observed in a concentrated area of T cells within lymphatic tissues associated with cancer. Recently, pDCs have been shown to traffic to tumor tissues and secrete chemokines such as C-X-C motif ligand 9 (CXCL9) and CXCL10, which recruit T cells to the tumor. 4 Type I interferons can also directly increase the cytotoxicity and survival of CD8 +T cells. 5 In recent years, immunotherapy has begun to emerge as a potentially promising strategy for cancer treatment. Additionally, many anticancer therapies that rely on type I interferon signaling have shown success in clinical use. 3 6 In many instances, administering type I interferons can lead to antiviral and antiproliferative bioactivities. 7 These cytokines even possess immunostimulatory functions. 7 As an adjuvant to standard cancer immunotherapies, type I interferons have already demonstrated improvements in disease-free survival as well as overall survival. For example, several randomized trials using IFNฮฑ as an adjuvant in both low-dose and high-dose regimens have suggested effectiveness in improving survival outcomes of melanoma patients. [8][9][10] Unfortunately, type I interferons have a short 2-3 hours long plasma half-life and require weekly injections when administered as an adjuvant, 10 11 which significantly reduces its applicability in clinical settings. Albumin is a ubiquitous plasma protein that is known for its long half-life in vivo and ability to thus increase the half-life of molecules that are associated with it. 12 13 This is achieved via transcytotic recycling of albumin's neonatal Fc receptor (FcRn). 14 Due to its circulation pattern as a plasma protein physiologically, albumin is able to drain into the lymphatic tissues. Albumin binding has been shown to be effective for directing immunostimulatory molecules, including vaccine constructs, to the LNs in order to elicit potent immune responses. 15 16 Albumin has low immunogenicity and is easy to construct, express, and purify, therefore it is an advantageous drug carrier. 17 Albumin thus serves as a prime candidate to deliver cytokines and other biological cargo preferentially toward the LNs. Due to the ability of IFNฮฒ to promote DC expansion 18 and albumin's ability to traffic toward LNs and extend half-life, we reason that a fusion between albumin and IFNฮฒ may have a profound impact on cross-priming cytotoxic immune responses and may generate large pDC populations for antigen presentation. Strategies that expand cross-presenting DC populations also have the potential to be efficacious in the treatment of cancer. We generated fusion protein albumin-IFNฮฒ (Alb-IFNฮฒ) by genetically fusing albumin to IFNฮฒ. In this study, we evaluated the therapeutic potential of Alb-IFNฮฒ to modulate immune cell phenotypes and improve antitumor responses. We show that Alb-IFNฮฒ does not alter or impede the biological activity of IFNฮฒ, as our novel molecule is able to generate DCs in vitro from bone marrow (BM) cells. The half-life of Alb-IFNฮฒ is indeed longer than that of IFNฮฒ alone, suggesting efficacy in the fusion strategy to albumin. In addition, in vivo distribution studies of Alb-IFNฮฒ show preferential accumulation of our fusion protein in the tumor-draining LNs (tdLNs). 
More importantly, cross-presenting DCs generated by Alb-IFNฮฒ in vivo are functional and able to generate potent antigen-specific T and B cell responses to both the ovalbumin (OVA) and human papillomavirus (HPV) E7 antigens. We also found that knocking out basic leucine zipper ATF-like transcription factor 3 (Batf3), which plays a crucial role in the development, expansion, and function of cross-presenting pDCs, 19 20 reduced the antitumor effects of Alb-IFNฮฒ. Furthermore, administrating Alb-IFNฮฒ as an adjuvant to a clinical grade therapeutic HPV protein-based vaccine, TA-CIN, for the treatment of HPVassociated TC-1 tumors resulted in a significant reduction in TC-1 tumor burden, improved overall survival, and upregulation of E7-specific CD8 +T cell and DC activities. We also show that the antitumor immunity elicited by our fusion protein is both CD8-and DC-dependent and CD4independent. In response to Alb-IFNฮฒ, we observed an upregulation in CXCL9 and CXCL10 expressions in the tdLNs, which are chemoattractants secreted by DCs to recruit T cells to the tumor. Our results strongly support that Alb-IFNฮฒ makes an excellent adjuvant to immunotherapies with strong therapeutic and clinical translation implications because it is able to enhance immunological responses mediated by DCs. MATERIALS AND METHODS Cells As previously described, TC-1 cells express the HPV16 E6 and E7 proteins. 21 Cells were grown in RPMI 1640 media, supplemented with 10% (v/v) fetal bovine serum, 50 units/mL of penicillin/streptomycin, 2 mM L-glutamine, 1 mM sodium pyruvate, 2 mM non-essential amino acids, and 0.1% (v/v) 2-mercaptoethanol at 37ยฐC with 5% CO 2 . For the BMDC isolation and culture, the tibias and femurs were removed from C57BL/6 mice under sterile condition. Both ends of the bone were cut-off and BM was flushed out by 26-gage syringes with complete RPMI medium. Following red blood cell lysis and washing, BM cells were suspended with complete RPMI medium and seeded in 6-well culture plates with 29 ng/mL of granulocyte-macrophage colonystimulating factor (GM-CSF) for 5 days. 22 For the DC activation experiments, BMDCs matured in GM-CSF were treated with 0.1 ยตM of Alb-IFNฮฒ, 0.1 ยตM of IFNฮฒ, or 1 ยตg/mL of lipopolysaccharide (LPS) (as positive control) for 16 hours. Open access Generation of Alb-IFNฮฒ protein constructs For the generation of pcDNA3-Alb-IFNฮฒ, mouse IFNฮฒ was first amplified via PCR with a cDNA template of the mouse IFNฮฒ (pUNO1-mIFNB1) plasmid from Invivogen (San Diego, CA 92121 USA) and the following primers: 5โ€ฒ AAAG AATT CATC AACT ATAA GCAGCTC-3' and 5-AAAC TTAA GTCA GTTT TGGA AGTTTCT-3'. The amplified product was then cloned into the EcoRI/Afl II sites of pcDNA3-Alb. 23 The plasmid constructs were confirmed by DNA sequencing. Alb-IFNฮฒ proteins were expressed using Expi293F expression system kit (Thermo Fisher Scientific, Waltham, Massachusetts, USA) according to manufacturer's instructions. Expi293F cells were transfected with Alb-IFNฮฒ. Proteins were purified by HiTrap Albumin column (GE Healthcare Life Sciences, Marlborough, Massachusetts, USA). Mice Female C57BL/6 mice aged 6-8-weeks were purchased from Charles Rivers Laboratories (Frederick, Maryland, USA). All mice were maintained under specific pathogenfree conditions at the Johns Hopkins University School of Medicine Animal Facility (Baltimore, Maryland, USA). All animal procedures were performed according to protocols approved by the Johns Hopkins Institutional Animal Care and Use Committee. 
Recommendations for the proper use and care of laboratory animals were closely followed. For tumor inoculation, 2ร—10 5 TC-1 cells in 50 ยตL of PBS were subcutaneously (s.c.) injected into 6-8 weeks old female C57BL/6 mice. Tumor volume was measured by digital calipers and greatest length and width were determined. Tumor volumes were calculated by the formula: tumor volume = (length ร— width 2 )/2. For CD4 + and CD8 + T cell depletion, 100 ยตg of anti-mouse CD8 + depleting antibody or 200 ยตg of anti-mouse CD4 + depleting antibody were administered to tumor-bearing mice for 3 days via intraperitoneal injection. The depletion was maintained through the experiment by giving depleting antibody once a week. CD4 + or CD8 + T cell depletion level were evaluated by flow cytometry on blood. 24 Batf3-/-and FcRN-/-mice were acquired from Jackson Laboratories. Flow cytometry analyses Peripheral blood samples from naรฏve and TC-1 tumorbearing mice were collected into 100 ยตL of PBS containing 0.5 mM EDTA. Following red blood cell lysis and washing, PBMCs were collected and stained with Zombie Aqua to determine the cell viability. Fc receptors were blocked by anti-mouse CD16/CD32 antibody. To analyze OVA-and E7-specific T cells, Fc receptor blocked PBMCs were stain with PE-conjugated SIINFEKL (OVA) peptide or HPV16 E7aa49-57 peptide loaded H-2D b E7 tetramer and FITCconjugated anti-mouse CD8ฮฑ antibody (Biolegend). Adaptive T cell transfer and tracking C57BL/6 mice were s.c. injected with 5ร—10 5 TC-1 cells for 10 days (after the tumor reached 8 to 10 mm in diameter). A 10 ยตg of IFNฮฒ or 50 ยตg of Alb-IFNฮฒ was intravenously injected into tumor-bearing mice through retro orbital sinus. One day after treatment with Alb-IFNฮฒ or IFNฮฒ, 5ร—10 6 luciferase-expressing E7-specific T cells were intravenously injected into the tumor-bearing mice via tail vein. Luciferase-expressing E7-specific T cells were generated as previously described. 30 For tracking the E7-specific T cell in tumor-bearing mice, 75 mg/kg of d-Luciferin was given to the mice via intraperitoneal injection and imaged by the IVIS Spectrum in vivo imaging system series 2000 (PerkinElmer) on day 1 and day 4 after T cell transfer. Total photon counts were quantified in the tumor site by using Living Image 2.50 software (PerkinElmer). ELISA For the half-life experiment, naรฏve or TC-1 tumor-bearing C57BL/6 mice were intravenously injected with 10 ยตg of IFNฮฒ or 50 ยตg of Alb-IFNฮฒ. Serum were collected in EDTA-free Eppendorf tube on 3, 24 and 48 hours posttreatment. Tumor and LNs were harvested 16 hours posttreatment and then minced into 1-2 mm pieces and lysis by RIPA buffer (Cell Signaling Technology, Massachusetts, USA). IFNฮฒ was measured by IFN beta Mouse ELISA Kit (Thermo Fisher Scientific) according to the manufacturer's instructions. For the OVA and HPV16 L2 antibody Open access detection, 1 ug/mL of mouse OVA or HPV16 L2 protein in PBS was coated on BRANDplates microplates overnight at 4ยฐC. After 16 hours, the plates were washed, blocked with eBioscienceTM ELISA/ELISPOT Diluent (Thermo Fisher Scientific), and added diluted serum for 2 hours at room temperature.Non-vaccinated mice serum was used as control. Goat anti-mouse IgG-HRP secondary antibody was added at 1:5000 dilution for 1 hour, followed by TMB substrate. The OD at 450 nm was determined by 800 TS Absorbance Reader (BioTek Instruments). Statistical analysis The statistical analysis was performed using GraphPad Prism V.6 software and data were interpreted as means with SD. 
Kaplan-Meier survival plots are used to estimate the survival percentage and tumor-free rate. Long rank tests were used to compare the survival time between treatment groups. Comparison between individual data points were used to analyze in the t-test and p value smaller than 0.05 is considered statistically significant, *pโ‰ค0.05, **pโ‰ค0.01,*** pโ‰ค0.001, **** pโ‰ค0.0001, ns=not significant. IFNฮฒ fused to albumin retains biological function and does not alter the biological activity of IFNฮฒ To bypass short half-life limitations posed by IFNฮฒ, a genetic fusion protein consisting of albumin and IFNฮฒ was produced and purified (figure 1A). In many instances, cytokine function can be altered when it is fused to a carrier protein. Thus, we determined whether the fusion of albumin to IFNฮฒ (Alb-IFNฮฒ) would affect the biological function of IFNฮฒ. Through in vitro titration experiments, the expression of H-2Kb and PD-L1 on TC-1 cells increased as the cells were treated with increasing concentrations of IFNฮฒ or Alb-IFNฮฒ (figure 1B-C). BMDCs were also treated with either IFNฮฒ alone or Alb-IFNฮฒ, with BMDCs treated with LPS serving as a positive control and untreated BMDCs serving as a negative control. H-2Kb ( figure 1E) and PD-L1 (figure 1G) expressions were comparable between Alb-IFNฮฒ-treated BMDCs and BMDCs treated with IFNฮฒ alone. Similarly, BMDCs treated with Alb-IFNฮฒ also expressed similar levels of CD40 ( figure 1D) and CD86 (figure 1F) compared with IFNฮฒ alone. The addition of Alb-IFNฮฒ failed to increase PD-1 or LAG3 expression using both TC-1 and BMDCs (online supplemental figure 1). Taken together, our data suggest that Alb-IFNฮฒ retains similar biological function compared with IFNฮฒ alone. The linkage of albumin to IFNฮฒ extends half-life and increases IFNฮฒ in serum, LNs and tumor in vivo Next, we sought to better understand the underlying trafficking mechanism of Alb-IFNฮฒ in vivo. Albumin is known to increase the half-life of molecules that are associated with it through transcytotic recycling of the FcRn. [12][13][14] Specifically, albumin fusion to IFNฮฒ has been shown to increase the half-life of IFNฮฒ by more thanfivefold. 31 Thus, we suspected FcRn to play a role in our albumin fusion strategy. To determine the whether Alb-IFNฮฒ has an extended half-life, we intravenously injected Alb-IFNฮฒ or IFNฮฒ into C57BL/6 mice. We found that levels of IFNฮฒ were significantly higher at every time point when mice were treated with Alb-IFNฮฒ as compared with mice treated with IFNฮฒ alone (figure 2A). We also found that levels of IFNฮฒ significantly decreased in FcRn knockout mice (figure 2A), suggesting the importance of FcRn in extending the half-life of Alb-IFNฮฒ. Previous studies have also suggested that due to the circulation pattern of albumin in the plasma, albumin fusion proteins are able to preferentially traffic toward the draining LNs. 15 16 From our experiments, we found that Alb-IFNฮฒ was present at higher levels in LNs (figure 2B), and targets to both the tumors and the tumor draining LNs (tdLNs) more efficiently than IFNฮฒ alone (figure 2C). Thus, we were able to confirm the notion that an albumin fusion strategy increases the half-life of conjugated IFNฮฒ and is somewhat reliant on the presence of FcRn. We also showed that an albumin fusion strategy allows IFNฮฒ to be targeted to both tumors and tdLNs at higher levels than being treated with IFNฮฒ alone. 
Coadministration Alb-IFNฮฒ and antigenic peptides enhances antigen-specific CD8+ T cell immunity in vivo To evaluate the ability of Alb-IFNฮฒ to promote CD8 +T cell responses to an exogenously-derived antigen, we vaccinated C57BL/6 mice with either OVA protein or E7 peptide (amino acids [43][44][45][46][47][48][49][50][51][52][53][54][55][56][57][58][59][60][61][62]. It has been well documented that long E7 peptide is better than short peptides at inducing robust T cell responses, and it is capable of Open access delivering specific cargo to antigen presenting cells. 32 Additionally, OVA protein was used rather than OVA long peptide in order to demonstrate that our treatment strategy can be used with various vaccination platforms, such as protein vaccines. We simultaneously administered these antigens with or without either IFNฮฒ or Alb-IFNฮฒ to C57BL/6 mice twice at 1-week intervals (figure 3A). One week after the second vaccination, PBMCs were analyzed for the presence of OVA-specific and E7-specific CD8 +T cells by tetramer staining. Mice that received coadministration of Alb-IFNฮฒ and OVA induced the highest number of OVA-specific CD8 +T cells compared with the other treatment groups (figure 3B-C). In addition, mice treated with coadministration of E7 long peptide and Alb-IFNฮฒ similarly developed the most robust E7-specific CD8 +T cells (figure 3D-E). Finally, significantly higher titers of OVA-specific IgG2a/IgG1a antibodies were detected in the sera of mice vaccinated with both Alb-IFNฮฒ and the OVA antigen compared with mice vaccinated with IFNฮฒ and OVA or OVA alone (online supplemental figure 2). Our results suggest that coadministration of Alb-IFNฮฒ with antigen enhances antigen-specific CD8 +T cell mediated and humoral immune responses in vivo compared with coadministration of IFNฮฒ alone. Coadministration of Alb-IFNฮฒ and E7 peptide generates a potent therapeutic antitumor effect against E7 expressing TC-1 tumor We next looked at the antitumor properties of Alb-IFNฮฒ coadministered with E7 antigen. We treated C57BL/6 mice bearing HPV16 E7-positive TC-1 tumors with either E7 alone, Alb-IFNฮฒ alone, E7 with IFNฮฒ, or E7 with Alb-IFNฮฒ twice at 1-week intervals ( figure 4A). Tumor-bearing mice administered with the E7 antigen with Alb-IFNฮฒ showed the smallest tumor volume compared with the other groups ( figure 4B). Consistently, tumor-bearing mice administered with E7 with Alb-IFNฮฒ survived twice as long compared with mice treated with the other treatment groups ( figure 4C). Clinically, interferon has led to the development of many side effects and potentially toxic at high doses, especially when registered with multiple different treatments. 33 34 While the doses of IFNฮฒ and Alb-IFNฮฒ used in this study were low and all tumor-bearing mice were only treated twice, there may still be toxicity concerns. One important side effect as a result of IFNฮฒ-induced toxicity is weight loss. [35][36][37][38] In our study, we did not observe any significant weight loss in tumor-bearing mice administered with any combination of the treatments ( figure 4D). It seems that coadministration of Alb-IFNฮฒ Open access and E7 peptide does not induce any noticeable toxic side effects and was able to suppress tumor growth and prolong survival rates of tumor-beating mice better than tumor bearing mice treated with IFNฮฒ and E7 peptide. 
Coadministration of Alb-IFNฮฒ with HPV E7 peptides results in enhanced E7-specific CD8+ T cell immune responses and DC activity Because albumin conjugation has been shown to extend the half-life of the IFNฮฒ (figure 2), we explored whether a single s.c. administration of Alb-IFNฮฒ with E7 peptide in tumor-bearing mice can elicit E7-specific CD8 +T cellmediated immune responses and enhanced DC activity. DCs are known for their potent ability to cross present exogenous antigens to cytotoxic CD8 +T cells. While we have demonstrated that fusion of albumin to IFNฮฒ does not impede the ability of IFNฮฒ to expand DCs, we further examined whether coadministration of Alb-IFNฮฒ with E7 antigen was superior in expanding DCs and promoting cytotoxic T cell responses to E7 antigens in vivo compared with coadministration of IFNฮฒ with E7 antigen. TC-1 tumor bearing C57BL/6 mice were vaccinated with either E7 alone, Alb-IFNฮฒ alone, IFNฮฒ with E7, or Alb-IFNฮฒ with E7. PBMCs were then collected from the mice for analysis. There was a significantly higher amount of E7-specific CD8 +T cells in mice treated with Alb-IFNฮฒ and E7 compared with all other treatment conditions in the TC-1 tumor-bearing mice, indicating potent expansion of E7-specific CD8 +T cells following coadministration with Alb-IFNฮฒ with E7 antigen ( figure 5A-B). Moreover, mice treatment with Alb-IFNฮฒ and E7 also had the highest levels of DC activation marker CD86 (figure 5C). In order to characterize the immune cell proliferation, we next examined the proliferative marker Ki67 following treatment. 39 The advantage of using Ki67 for lymphocyte proliferative assays is to indicate the function of E7 specific T cells. Alb-IFNฮฒ and E7 vaccinated mice also exhibited the highest proliferation activity of E7-specific CD8 +T cells in tumor-bearing mice compared with all other vaccination regimens (figure 5D). DCs in Alb-IFNฮฒ and E7 vaccinated mice also had significantly higher Ki67 proliferative expression than other treatment conditions ( figure 5E). Thus, our data suggest that Alb-IFNฮฒ is able Open access to generate and expand more potent antigen-specific cytotoxic T cell and DCs compared with IFNฮฒ when coadministered tumor antigen in tumor-bearing mice. CD8 +T cells and cross presenting DCs, but not CD4 +T cells, are important for the observed antitumor therapeutic effect mediated by Alb-IFNฮฒ coadministered with E7 peptides After determining Alb-IFNฮฒ elicits antitumor immunity through the expansion of both DCs and E7-specific CD8 +T cells when coadministered with E7 antigen, we wanted to test for the subset of lymphocytes that are essential for Alb-IFNฮฒ to produce its effects. We first depleted either CD4 +or CD8+T cells in TC-1 tumor-bearing mice (n=5) by injecting them with either 200 ยตg of anti-mouse CD4 antibodies or 100 ยตg of anti-mouse CD8 antibodies daily for 3 days, followed by Alb-IFNฮฒ and E7 vaccination (figure 6A). Our results suggest that CD8 +T cell depletion completely abolished the anti-tumor effects generated by Alb-IFNฮฒ and E7 vaccination (figure 6B) and decreases the survival rate of Alb-IFNฮฒ and E7 vaccinated tumor-bearing mice ( figure 6C). However, the tumor volume and survival rates of Alb-IFNฮฒ and E7 -treated tumor-bearing mice did not significantly change by depletion of CD4 +T cells (figure 6D-E). Furthermore, Batf3 is known to play an important role in the development, expansion, and function of cross-presenting DCs. 
15 Thus, we perform the anti-tumor experiment in Baft3 KO mice to determine whether it affected antitumor effect of Alb-IFNฮฒ and E7 vaccination. Batf3 KO mice treated with Alb-IFNฮฒ and E7 vaccination apparently reduced their ability to control the tumor progression ( figure 6F) and generated fewer E7-specific CD8 +T cells ( figure 6G). Taken together, our results suggest both E7-specific CD8 +T cells and cross-presenting DCs are important for Alb-IFNฮฒ to properly elicit potent antitumor responses when coadministered with E7 antigen. Treatment with Alb-IFNฮฒ increased antigen-specific CD8+ T lymphocytes in the tumor microenvironment To understand how Alb-IFNฮฒ affects antigen-specific CD8 +T cells trafficking to the tumor microenvironment (TME), tumor-bearing mice were treated with either Open access Alb-IFNฮฒ, IFNฮฒ, or PBS control followed by adaptive transfer of luciferase-expressing E7-specific CD8 +T cells (see online supplemental figure 3). By day 4, E7-specific CD8 +T cells were highly accumulated in the tumor area of mice administered with Alb-IFNฮฒ compared with IFNฮฒ (online supplemental figure 3). In comparison, tumor bearing mice administered with IFNฮฒ did not demonstrated impact to the number of E7-specific CD8 +T cell in the tumor compared with untreated group. Taken together, our data indicated that administration of Alb-IFNฮฒ facilitates tumor infiltration of E7-specific CD8 +T lymphocytes in the TME. Treatment with Alb-IFNฮฒ leads to increased levels of chemokines in tumors and increased CD8+ T cell activity and DC activation in the tdLNs Cross-presenting DCs have been shown to secrete chemokines such as CXCL9 and CXCL10. These chemokines are then able to recruit T cells to the TME, thus mounting an antitumor immune response. 4 To test whether Alb-IFNฮฒ can promote the expression of these chemokines in the tumors, we analyzed DC activation in the TME and changes in chemokine expression following Alb-IFNฮฒ treatment. The levels of CXCL10 and CXCL9 were significantly higher in tumors treated with Alb-IFNฮฒ compared mice treated with to IFNฮฒ (online supplemental figure 4A-B). Within the tdLNs, mice treated with Alb-IFNฮฒ exhibited significantly higher CD8 +T cell proliferative activity in the tdLNs of tumor-bearing mice compared with untreated control mice. However, there is no significant difference between mice treated with Alb-IFNฮฒ or IFNฮฒ (online supplemental figure 4C). Additionally, Alb-IFNฮฒ was able to induce higher DC activation in the tdLNs compared with IFNฮฒ (online supplemental figure 4D).Thus, our experiments showed that Alb-IFNฮฒ treatment successfully increases chemokine expression, which also increases CD8 +T cell activity and DC maturation in the tdLNs of tumor bearing mice. Alb-IFNฮฒ serves as a potent adjuvant for HPV protein based therapeutic vaccine, TA-CIN, for the treatment of HPV antigen expressing tumors Tissue Antigen-Cervical Intraepithelial Neoplasia (TA-CIN) is a candidate therapeutic HPV protein vaccine comprised of a fusion of full length HPV16 L2, E6, and E7 proteins. 40 It is administered as a filterable protein aggregate to promote uptake by antigen presenting cells. This protein vaccine has been shown to induce both E7-specific CD8 +T cell-mediated antitumor and HPV L2-specific neutralizing antibody responses in preclinical models. 16 40 41 However, the clinical efficacy of TA-CIN alone may not be as effective as intended, probably due to the lack of immunogenic Open access adjuvants in the formulation of the protein vaccine. 
42 43 Thus, we sought to overcome TA-CIN immunogenic deficiencies by combining it with Alb-IFNฮฒ treatment. Tumor-bearing mice were administered either TA-CIN or TA-CIN in combination with Alb-IFNฮฒ (figure 7A). TC-1 tumor-bearing mice receiving TA-CIN treatment in combination with Alb-IFNฮฒ had significantly lower tumor growth compared with mice that were vaccinated with TA-CIN alone (figure 7B). Additionally, combination treatment of TA-CIN and Alb-IFNฮฒ was also more effective in prolonging the survival of tumorbearing mice than TA-CIN alone or untreated tumor-bearing mice ( figure 7C). Combination treatment of Alb-IFNฮฒ and TA-CIN also induced higher levels of E7-specific CD8 +T cells in tumor-bearing mice (figure 7D). Significantly higher level of anti-L2 IgG antibodies were similarly detected in the combination group compare to TA-CIN alone ( figure 7E). Taken together, we show that Alb-IFNฮฒ is able to enhance TA-CIN-elicited antitumor effects to suppress tumor growth and we believe that Alb-IFNฮฒ serves as a potentially potent immunologic adjuvant. DISCUSSION In this study, we evaluated the therapeutic potential of Alb-IFNฮฒ in combination with antigens to modulate immune cells and improve antigen-specific antitumor responses. Our data shown Alb-IFNฮฒ not only retains similar biological activity compared with IFNฮฒ in vitro but is able to generate potent antigen-specific T and B cell responses to OVA and HPV16 proteins. Additionally, vaccination of Alb-IFNฮฒ and HPV16 antigens in tumor-bearing mice resulted in a significant reduction in tumor burden and better overall survival. The antitumor immune responses generated by Alb-IFNฮฒ and HPV16 antigens vaccination were found to be CD8-dependent and DC-dependent and CD4-independent. One possible explanation for why CD4 +T cells are not as important for the antitumor effect is that it has been documented that mice vaccinated with IFNa and OVA antigens generate OVA specific CD8 independent of CD4 or CD40. 44 This is likely because Albinterferons cause maturation of DCs and also provide a third signal to enhance CD8 proliferation. 45 We also observed a significant increase of the antigen-specific CD8 +T cells in the tumor location was observed in tumor bearing mice treated with Alb-IFNฮฒ. Alb-IFNฮฒ also Open access accumulates in the tdLNs and facilitates the expansion of antigen-specific CD8 +CTLs in the TME. We suggested Alb-IFNฮฒ can increase CD8 +T cell activities and promote DC maturation in the tdLNs possibly through inducing an upregulation of CXCL9 and CXCL10. An assessment of Alb-IFNฮฒ used in combination with a clinical drug TA-CIN showed superior antitumor effects compared with TA-CIN alone, therefore suggesting Alb-IFNฮฒ as an effective immunologic adjuvant. The therapeutic potential of Alb-IFNฮฒ lead us to believe that it should be further investigated for clinical translation. Alb-IFNฮฒ holds immense therapeutic potential as a novel immunotherapy. With Alb-IFNฮฒ we could possibly improve treatment schedules while limiting any side effects to generate potent antigen-specific antitumor responses. The linkage of Albumin to IFNฮฒ not only extends half-life and but also leads to the targeting of IFNฮฒ to the LNs and tumor in vivo and thereby serves as a potent adjuvant for vaccination. Thus, Alb-IFNฮฒ can bypass shortcomings posed by the weekly administrations and increased dosages of IFNฮฒ thereby limiting potential side effects in the clinic. 
Although PEGylated IFNα and IFNβ have also demonstrated increased half-lives in vivo, Alb-IFNβ could likely better target the interferons to LNs based on its natural circulation. Therefore, Alb-IFNβ has a higher chance of contacting immune cells in LNs and subsequently enhancing DC cross-presentation. However, future studies comparing the half-life and effectiveness of PEGylated IFNβ and Alb-IFNβ should be considered. Anticancer immunotherapies harness the immune system to develop a response toward tumors. Immunotherapy can include checkpoint inhibitors, adoptive cell therapies, and cancer vaccines, among other approaches. 46 47 There are many cancer vaccine delivery methods, including through intratumoral and localized mucosal routes. [48][49][50][51] However, these delivery methods are invasive, thereby limiting the number of participants willing to take part in clinical settings. [52][53][54] An alternative approach is to target tdLNs, which are known to accumulate tumor antigens that can be used to prime antitumor T cell responses. 55 In our study, we show that Alb-IFNβ is able to target the tdLNs (figure 2). With our albumin-fusion targeting strategy, we can use a less invasive, locally administered procedure to target therapeutic vaccines toward the tdLNs in order to elicit potent, local antigen-specific antitumor responses within the TME (online supplemental figure S2). Alb-IFNβ therefore provides immense clinical opportunities to deliver antigens to DCs at the TME and expand robust cytotoxic immune responses. Additionally, in the current study we observed increased luciferase-expressing antigen-specific CD8+ T cells in the TME following treatment with Alb-IFNβ. There are at least two reasons that may account for the observed phenomenon. First, it may be attributed to the trafficking of the antigen-specific CD8+ T cells to the location of the tumor (as implied by the study with CXCL9 and CXCL10). Second, it may be due to the proliferation of antigen-specific CD8+ T cells at the tumor location (as suggested by the characterization of Ki67). In the current study, we have found that Batf3 is an important factor for the ability of Alb-IFNβ to control tumor progression. IFNβ enhances cross-presenting DC maturation, whereas Batf3 is crucial to the development, expansion, and functioning of cross-presenting DCs. 15 56-58 Thus, we used Batf3 KO mice to study the role of this gene in the ability of Alb-IFNβ to expand cross-presenting DCs. We show that Batf3 KO mice administered Alb-IFNβ were less capable of controlling tumor progression and generated fewer E7-specific CD8+ T cells (figure 6). Of note, Batf3 KO mice treated with Alb-IFNβ were still able to control tumor growth compared with untreated mice, suggesting that although Batf3 is important for the effect of Alb-IFNβ, it is not the only contributing factor. Other factors, in addition to Batf3, may also contribute to the ability of Alb-IFNβ to control tumors. In our study, we found that FcRn is an important mediator for the ability of albumin to extend the half-life of IFNβ. When administering Alb-IFNβ to FcRn KO mice, we noticed a shorter half-life of Alb-IFNβ compared with Alb-IFNβ in C57BL/6 mice (figure 2). However, despite a significant decrease in the half-life of Alb-IFNβ in FcRn KO mice, the half-life was still longer than the half-life of IFNβ in C57BL/6 mice. 
Thus, although FcRn can extend the half-life of IFNβ linked to albumin, other factors may also contribute to the prolonged half-life mediated by albumin. It is of interest to further explore the other possible mechanisms that account for the prolongation of the half-life of proteins fused to albumin. We have observed that tumor-bearing mice treated with Alb-IFNβ had more tumor antigen-specific CD8+ T cells at the tumor location (online supplemental figure S2). At least two reasons may account for the observed phenomenon. One is that the antigen-specific T cells may be preferentially attracted to the tumor location in tumor-bearing mice treated with Alb-IFNβ. Alternatively, treatment with Alb-IFNβ may result in enhanced proliferation of antigen-specific CD8+ T cells at the tumor location. Indeed, our characterization of CXCL9 and CXCL10 showed higher levels at the tumor location (online supplemental figure S3). In fact, type I interferons have been shown to induce CXCL10 and CXCL9 production in DCs and subsequently enhance their ability to stimulate CD8+ effector T cells. 59 60 However, other IFN-modulated chemokines/cytokines could be involved, potentially in a different manner in the tumor or LN. [61][62][63][64] Thus, further exploration of how other chemokines/cytokines may be impacted by Alb-IFNβ should be considered. Alb-IFNβ may serve as a protein-based adjuvant that can be used to enhance protein-based vaccines. It would be important to further test whether Alb-IFNβ can be used as an adjuvant to improve the efficacy of other types of protein-based vaccines or other forms of vaccine, such as DNA/RNA-based, cell-based, or vector-based vaccines. This information would create the opportunity for wide application of Alb-IFNβ to enhance vaccine potency. For clinical translation, it will be important to further characterize the toxicity generated by Alb-IFNβ. Understanding the ability of Alb-IFNβ to enhance vaccine potency, as well as the toxicity associated with the coadministration of Alb-IFNβ, will be critical for assessing whether Alb-IFNβ will serve as a better adjuvant compared with other adjuvants. Such information will be critical for the development of vaccines against infections and cancers. Contributors S-HT, BL, LL and YJK contributed to the conduction of the experiments. S-HT, MAC, EF and YJK contributed to the original draft of the manuscript. LF contributed to the editing of the manuscript. T-CW and C-FH supervised and conceptualized the study, and interpreted the data. C-FH is the guarantor of the work. Funding This work was supported by the National Institutes of Health and the National Cancer Institute under award numbers R01CA237067, R01CA233486, R21CA234516, R21DE029910, R21CA256020 and P50CA098252. Competing interests T-CW is a cofounder of and has an equity ownership interest in Papivax. Also, T-CW owns Papivax Biotech stock and is a member of Papivax Biotech's Scientific Advisory Board. Patient consent for publication Not applicable. Ethics approval Not applicable. Provenance and peer review Not commissioned; externally peer reviewed. Data availability statement Data are available on reasonable request. All data and materials are available from the corresponding author on written request. Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. 
Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise. Open access This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See https://creativecommons.org/ licenses/by/4.0/.
Effect of the interleukin 10 polymorphisms on interleukin 10 production and visceral hypersensitivity in Chinese patients with diarrhea-predominant irritable bowel syndrome Abstract Background: Irritable bowel syndrome (IBS), a functional gastrointestinal disorder, is characterized by cytokine imbalance. Previously, decreased plasma interleukin 10 (IL-10) levels were reported in patients with IBS, which may be due to genetic polymorphisms. However, there are no reports correlating the IL-10 polymorphisms with IL-10 production in patients with IBS. This study aimed to analyze the effect of IL-10 polymorphisms on IL-10 production and its correlation with the clinical symptoms in Chinese patients with diarrhea-predominant IBS (IBS-D). Methods: Two IL-10 single nucleotide polymorphisms (rs1800871 and rs1800896) were detected in 120 patients with IBS-D and 144 healthy controls (HC) using SNaPshot. The IBS symptom severity score, the Bristol stool scale, and the Hospital Anxiety and Depression Scale (HADS) were used to evaluate the clinical symptoms as well as the psychological status and visceral sensitivity of the subjects. IL-10 levels in the plasma and peripheral blood mononuclear cell (PBMC) culture supernatant were measured using enzyme-linked immunosorbent assay, while those in ileal and colonic mucosal biopsies were measured using immunohistochemistry. Results: The frequency of the rs1800896 C allele was significantly lower in the patients with IBS-D than that in the HC (odds ratio: 0.49, 95% confidence interval: 0.27-0.92, P = 0.0240). The IL-10 levels in the plasma (P = 0.0030) and PBMC culture supernatant (P = 0.0500) of the CT genotype subjects were significantly higher than those in the TT genotype subjects. The CT genotype subjects exhibited a higher pain threshold in the rectal distention test than the TT genotype subjects. Moreover, IL-10 rs1800871 GG genotype subjects showed an increase in the HADS score compared with other genotype subjects. Conclusions: The IL-10 rs1800896 C allele is correlated with higher IL-10 levels in the plasma and the PBMC culture supernatant, which is associated with a higher pain threshold in Chinese patients with IBS-D. This study provides an explicit relationship of IL-10 polymorphisms with IL-10 production, which might help in understanding the pathogenesis of IBS-D. Introduction Irritable bowel syndrome (IBS) is the most common functional gastrointestinal disorder, characterized by the presence of abdominal pain, bloating, and altered bowel habits. According to the Rome IV criteria, there are four subtypes of IBS: constipation-predominant IBS, diarrhea-predominant IBS (IBS-D), mixed diarrhea and constipation IBS, and unsubtyped IBS. The pathophysiological mechanisms underlying IBS are unclear. Abnormalities of motility, visceral hypersensitivity (VH), gut microbial alteration, and psychological stress contribute to the clinical symptoms of IBS. Additionally, systemic and mucosal immune activation plays an important role in IBS. Previous studies have confirmed that patients with IBS exhibit an imbalanced cytokine profile. [1,2] Infection after acute gastroenteritis is a major trigger for IBS development, resulting in post-infectious IBS. [3,4] The plasma concentrations of interleukin 6 (IL-6) and IL-8 tend to increase and that of interferon-γ tends to decrease in patients with IBS. IL-10, an anti-inflammatory cytokine, is very important in the immune activation of patients with IBS. 
However, there are conflicting reports on the role of IL-10 in IBS. [1,5,6] Some studies have reported that plasma IL-10 was decreased in patients with IBS, [7,8] while others reported that there was no difference when compared with healthy participants. [1,9] Several lines of evidence have demonstrated that there is a decrease in the level of IL-10 mRNA in the intestinal mucosa of patients with IBS. [10,11] IL-10 is synthesized in immune cells such as T and B lymphocytes, monocytes, macrophages, and mast cells. Cytokine gene polymorphisms have been suggested to influence cytokine production. The single nucleotide polymorphisms (SNPs) of the IL-10 gene, such as IL-10-1082 G/A (rs1800896) and IL-10-819 C/T (rs1800871), are both associated with IBS. [12][13][14][15] The IL-10 rs1800896 polymorphism is associated with enhanced production of IL-10 in vitro and is more prevalent in healthy subjects. [16] Earlier studies have demonstrated that the rs1800896 and rs1800871 polymorphisms of IL-10 were both correlated with the risk of developing IBS-D, which indicated the genetic susceptibility of patients with IBS-D. [17] However, most of the studies were conducted on Western populations and with limited sample sizes. Additionally, the correlation between IL-10 gene polymorphisms and IL-10 production is not yet defined. Therefore, this study analyzed the effect of IL-10 polymorphisms on IL-10 production and its correlation with the clinical symptoms in Chinese patients with IBS-D. Ethical approval The study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of Peking University Health Science Center (No. 2013-12). All the participants provided written informed consent. Study design and participants recruitment This study was conducted from 2013 to 2018. We sequentially recruited patients with IBS-D, aged 18 to 65 years, according to the Rome III criteria from the gastroenterology outpatient clinic of Peking University Third Hospital. Healthy volunteers (aged 18 to 65 years) were recruited from the community. The participants were excluded if they met any of the following exclusion criteria: (1) a history of antibiotic, probiotic/prebiotic, or psychotropic medication intake during the previous 4 weeks; (2) systemic or gastrointestinal diseases, such as diabetes mellitus and inflammatory bowel disease; (3) current infectious diseases of the respiratory, digestive, or urinary system; (4) a history of abdominal surgery. Gastrointestinal symptom severity, daily bowel movement frequency, and stool consistency were evaluated by the IBS symptom severity score and the Bristol stool form scale. Visceral sensitivity was assessed by a rectal distension test using a barostat (Distender Series II; G&J Electronics, Ontario, Canada). Sensory thresholds for initial sensation, initial defecation, and defecation urgency were defined for each individual as the lowest distension pressures evoking each response. Psychological status was evaluated by the Hospital Anxiety and Depression Scale (HADS). All participants underwent colonoscopy or had colonoscopy/barium enema performed in the past 6 months to rule out organic colonic diseases. Participants underwent colonoscopy at Peking University Third Hospital with biopsies of the distal ileal and sigmoid colonic mucosa. Each participant routinely underwent a hemogram, plasma chemistry profile, blood tests for hepatitis B and C viruses and HIV, stool microscopy and occult blood testing, and liver function tests. 
Peripheral blood was collected for further analysis. Cytokine gene polymorphisms evaluated through SNaPshot DNA was isolated from cells of approximately 4 mL of peripheral blood following a phenol/chloroform protocol. Each DNA sample was quantified twice using a NanoDrop spectrophotometer (Thermo Scientific, Waltham, MA, USA). Samples were only accepted if the average DNA concentration was at least 0.25 ng/mL and the coefficient of variation between the two rounds of quantification was smaller than 0.1. SNaPshot was used to genotype the IL-10 polymorphisms, including rs1800896 and rs1800871. The amplification primers for the candidate SNPs were as follows. IL-10 rs1800896: Forward, 5′-ACACTACTAAGGCTTCTTTGGGA-3′; Reverse, 5′-TACAAGGGTACACCAGTGC(C/T)A-3′. IL-10 rs1800871: Forward, 5′-AAGGTTTCATTCTATGTGCTGG-3′; Reverse, 5′-GTAAGAGTAGTCTGCACTTGCTG-3′. Genomic DNA was diluted to a concentration of 10 ng/mL before identification of genetic mutations. A multiplex SNaPshot assay (ABI PRISM, Foster City, CA, USA) was employed to determine the genotypes. First, 10 ng of genomic DNA was added to a 10 µL polymerase chain reaction (PCR) mixture containing 20 mmol dNTPs (Promega, Madison, WI, USA), 0.5 U of FastStart Taq DNA polymerase (Kapa Biosystems, Woburn, MA, USA), 1 µL of 10× PCR buffer with MgCl2 (15 mmol/L), and amplification primers at a final concentration of 0.1 mmol/L. The thermal cycler conditions of multiplex PCR amplification were as follows: initial denaturation at 94°C for 5 min and amplification for 10 cycles at 94°C for 30 s, 65°C for 30 s, and 72°C for 30 s, followed by 30 cycles at 94°C for 30 s, 53°C for 30 s, and 72°C for 30 s, and a final elongation step at 72°C for 10 min. Subsequently, the PCR products were examined by electrophoresis in a 2.5% agarose gel. Then, we purified the PCR products using a mix of 5.4 U of Exonuclease I (NEB, Beverly, MA, USA) and 1.33 U of shrimp alkaline phosphatase (Fermentas, Lithuania) incubated at 37°C for 60 min followed by 85°C for 20 min. Subsequently, the multiplex SNaPshot sequencing reactions were performed in a final volume of 5 µL containing 2 µL of purified multiplex PCR products, 1 µL of SNaPshot Multiplex Mix, 0.4 µL of 10× sequencing buffer (ABI, Los Angeles, CA, USA), and 3 µL of SNaPshot sequencing primers. The thermal cycler conditions were an initial denaturation followed by 30 cycles at 96°C for 10 s, 50°C for 5 s, and 60°C for 30 s. Then, purification of the product was performed with 1 U of CIP at 37°C for 60 min and 75°C for 15 min. Finally, the SNaPshot products (1 µL) were genotyped on the ABI 3730 Genetic Analyzer platform after being mixed with 8.5 µL of HiDi formamide and 0.5 µL of GeneScan-120 LIZ size standard (ABI). Data were analyzed by GeneMapper 4.0 (ABI). In order to guarantee the quality of the data, approximately 3% of the samples were randomly selected and regenotyped by direct sequencing. Isolation and culture of peripheral blood mononuclear cells A 4 mL peripheral blood sample was collected in ethylenediaminetetraacetic acid vials and plasma was collected. Ficoll Histopaque was used for peripheral blood mononuclear cell (PBMC) isolation according to the protocol. [13] PBMCs were harvested and counted with a hemocytometer. Cell viability was assessed by trypan blue staining, following which they were resuspended at 1 × 10^6 cells/mL in complete media. 
Cells were transferred to plates and incubated, non-stimulated, for 72 h at 37°C in a 5% CO2 humidified atmosphere. Cell-free supernatants were stored frozen at −80°C and analyzed for cytokine levels in batches. Enzyme-linked immunosorbent assay Systemic inflammatory tone was assessed by measuring IL-10 in both plasma and the supernatant of PBMC cultures using an enzyme-linked immunosorbent assay (ELISA; eBioscience, Barcelona, Spain; Human IL-10 Platinum ELISA Kit). All samples and standards were assayed in duplicate. To start the measurement, each well of the 96-well plate was pre-wet with 200 µL of assay buffer, then covered with a foil plate sealer and incubated for 10 min at room temperature on a shaker. A volume of 25 µL of standard, wash buffer (serving as the blank), or sample and 25 µL of microparticles was added to each well and incubated at 4°C overnight. The liquid in each well was removed and wells were washed twice with 200 µL of wash buffer. After the wash buffer was removed thoroughly, 25 µL of biotinylated antibody was added to each well and incubated at room temperature for 2 h. The liquid in the wells was removed and the wells were washed twice again with 200 µL of wash buffer. Afterwards, 25 µL of streptavidin-phycoerythrin was added to each well and incubated at room temperature for 30 min, followed by two washes with wash buffer. A volume of 150 µL of wash buffer was added to each well to resuspend the microparticles and incubated for 5 min on the shaker. Then, the plate was placed into a Luminex 200 instrument to measure the median fluorescence intensity of standards and samples. Immunohistochemistry staining Immunohistochemistry was carried out to assess IL-10 protein expression in the mucosal biopsy tissues from the distal ileum and sigmoid colon. Sections were deparaffinized in xylene, rehydrated in decreasing concentrations of ethanol (100%, 95%, 80%), and subjected to an immunohistochemical technique using the ZSGB-BIO ALK system (ZK-9600; Origene; and ZSGB-BIO; MO BIO, Beijing, China). After antigen retrieval and blocking of endogenous peroxidases, a primary antibody against IL-10 (ab134742; Abcam, Cambridge, MA, USA) was incubated for 12 h at a 1:800 dilution. As a secondary antibody and for visualization, a peroxidase/3,3′-diaminobenzidine detection system was used according to the manufacturer's protocol (ZSGB-BIO ALK Detection System, Peroxidase/3,3′-diaminobenzidine, mouse; PV-6000; Origene; and ZSGB-BIO; MO BIO). The IL-10 levels were evaluated based on the integral optical density of positive staining using Image-Pro Plus 6.0. Statistical analysis Data were expressed as mean ± standard deviation or median (q25, q75) depending on data distribution. Comparisons of normally distributed data between two groups were performed with the Student t test. Comparisons of non-normally distributed data between two groups were performed with the Mann-Whitney U test. Categorical data were compared by the Chi-square test when the expected counts were greater than 5; otherwise, the Fisher exact test was used. A P < 0.05 was considered statistically significant. Both allele and genotype models (allele model, dominant model, recessive model, homozygote model, and heterozygote model) were used. Genetic association analyses and odds ratio (OR) calculations were performed for minor alleles based on genotypes using IBM SPSS version 30.0 (IBM Corp., Armonk, NY, USA) and PLINK 1.0.7 (http://pngu.mgh.harvard.edu/purcell/plink). Hardy-Weinberg equilibrium (HWE) testing and Pearson correlation analysis were also performed. 
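For readers who want to sanity-check this kind of analysis, the short sketch below illustrates a Hardy-Weinberg goodness-of-fit test and an allele-model odds ratio with a Woolf-type 95% confidence interval in Python with scipy. It is not the authors' code (the study used SPSS and PLINK), and every genotype and allele count in it is hypothetical, chosen only to show the calculations rather than to reproduce the reported results.

```python
# Illustrative only: HWE goodness-of-fit and allele-model odds ratio,
# using HYPOTHETICAL counts (the study itself used SPSS and PLINK).
import math
from scipy.stats import chi2

def hwe_test(n_hom_major, n_het, n_hom_minor):
    """Chi-square goodness-of-fit test for Hardy-Weinberg equilibrium (1 df)."""
    n = n_hom_major + n_het + n_hom_minor
    p = (2 * n_hom_major + n_het) / (2 * n)   # major-allele frequency
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_hom_major, n_het, n_hom_minor]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df=1)

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI for a 2x2 table:
    a = minor alleles in cases, b = major alleles in cases,
    c = minor alleles in controls, d = major alleles in controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lower, upper)

# Hypothetical control-group genotype counts (TT, CT, CC) for an rs1800896-like SNP.
print("HWE chi-square, P:", hwe_test(120, 22, 2))

# Hypothetical C vs T allele counts in 120 cases and 144 controls.
print("Allele-model OR (95% CI):", odds_ratio_ci(13, 227, 31, 257))
```

With these made-up counts the odds ratio falls below 1 with a confidence interval excluding 1, mirroring the direction of the association reported below, but the actual estimates in this study come from the real genotype data analyzed in SPSS and PLINK.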
Clinical characteristics of the subjects A total of 264 participants aged between 18 and 65 years were enrolled in this study. Based on the Rome III criteria, we confirmed that there were 120 patients with IBS-D (IBS group). A total of 144 healthy volunteers (healthy controls [HC] group) without previous or current gastrointestinal symptoms and infection were recruited during the same period. The clinical characteristics such as gender, age, and body mass index were similar between the IBS and HC groups [Table 1]. The IBS group exhibited a higher HADS score than the HC group (HC vs. IBS: 6.50 [2.00, 12.00] vs. 7.25 [11.50, 15.00]). The scores for abdominal pain, abdominal bloating, dissatisfaction with bowel habits, and life disturbance were significantly higher in the IBS group than those in the HC group (all P < 0.05) [Table 1]. Moreover, the IBS group had a significantly looser stool consistency, based on the Bristol scale, than the HC group (P = 0.001). A total of 78 participants underwent the rectal distention test (52 in the IBS group and 26 in the HC group). The IBS group exhibited significantly lower visceral pain thresholds for initial sensation, initial defecation, and defecation urgency than the HC group (all P < 0.05) [Table 1], indicating that patients with IBS-D were more sensitive to visceral nociception. Genotyping of IL-10 gene polymorphisms The genotype distribution of all studied polymorphisms in the HC group was consistent with HWE (IL-10 rs1800871: P = 0.5400; IL-10 rs1800896: P = 0.500). The detection rate for rs1800871 and rs1800896 was 99.52% and 99.76%, respectively [Table 2]. The genotypes of these two SNPs are shown in Figure 1. There was no correlation between rs1800871 and the risk for developing IBS-D in the allele model or any genotype model [Table 3]. The frequency of the rs1800896 C allele was significantly lower in the IBS group than that in the HC group (OR: 0.49, 95% confidence interval [CI]: 0.27-0.92, P = 0.024). In the genotype analysis, the frequency of the rs1800896 CC + CT genotype in the IBS group was significantly lower (OR: 0.51, 95% CI: 0.27-0.99, P = 0.0450) than that in the HC group in the dominant model. Measurements of the IL-10 level The levels of IL-10 in the plasma and the PBMC culture supernatant of 82 subjects in the IBS group and 38 subjects in the HC group were detected using ELISA. The expression of IL-10 in intestinal biopsies was detected in 52 subjects in the IBS group and 26 subjects in the HC group. There was no difference in the IL-10 concentration in the plasma, PBMC culture supernatant, and ileum or colon mucosa between the IBS and HC groups [Figure 2]. Then, the IL-10 level was compared among the different genotypes of IL-10 rs1800871 and IL-10 rs1800896. As shown in Table 4, there was no difference in the IL-10 level among the AA, GA, and GG genotypes of IL-10 rs1800871. As for IL-10 rs1800896, the subjects with the CT genotype exhibited a significantly higher IL-10 concentration in the plasma and the PBMC culture supernatant than the subjects with the TT genotype [Table 4 and Figure 3]. The expression of IL-10 in the ileum and that in the colon was not statistically different. Association of IL-10 polymorphisms and clinical symptoms The correlation between IL-10 polymorphisms (rs1800871 and rs1800896) and clinical symptoms was evaluated. IL-10 rs1800871 was significantly correlated with the HADS score (R = 0.234, P = 0.023). The subjects with the GG genotype had a higher HADS score than the subjects with the AA or AG genotype [Figure 4A]. 
There was a significant positive correlation between the IL-10 rs1800896 polymorphism and the pain threshold for initial defecation (R = 0.310, P = 0.007) and defecation urgency (R = 0.298, P = 0.010). The subjects with the CT genotype presented a significantly higher pain threshold for initial defecation (P = 0.007) and defecation urgency (P = 0.010) than the subjects with the TT genotype [Figure 4B and 4C]. There was no correlation between the IL-10 polymorphisms and other clinical symptoms. Discussion Patients with IBS-D present with pronounced VH and a looser stool consistency, which may result from chronic systemic and mucosal inflammation. IL-10 plays an important role in the anti-inflammatory response and is considered to be a potent suppressor of T lymphocytes or macrophages and their derived effector molecules, such as proinflammatory cytokines (IL-1β, tumor necrosis factor [TNF]-α) and chemokines (monocyte chemotactic protein 1, macrophage inflammatory protein 1α). [18] It is speculated that IL-10 gene polymorphisms might be involved in IL-10 production. [19] Our study suggested that carriers of the IL-10 rs1800896 C allele have a lower risk for developing IBS-D, which may be associated with higher production of IL-10. The gene encoding IL-10 is located on human chromosome 1q31-1q32. The two polymorphisms analyzed in this study were -1082 A/G and -819 C/T, which are located in the gene promoter region. According to the single nucleotide polymorphism database (http://www.ncbi.nlm.nih.gov/SNP/), they are annotated as rs1800896 (T > C) and rs1800871 (A > G), which represent the polymorphisms on the complementary strand. Qin et al [13] reported that the IL-10 rs1800871 polymorphism was associated with a decreased risk for developing IBS in the eastern population. Moreover, other studies have confirmed that the C allele of IL-10 rs1800896 was strongly associated with a decreased risk for developing IBS in the western population. [7] However, there are studies which have reported no correlation between IL-10 polymorphisms and the risk for developing IBS. [12,20] In this study, the frequency of the IL-10 rs1800896 C allele was significantly lower in the patients with IBS-D (OR: 0.49, 95% CI: 0.27-0.92, P = 0.0240) and was associated with a decreased risk for developing IBS-D, whereas the rs1800871 polymorphism showed no association with IBS-D when compared between patients and healthy subjects. IL-10 was initially described as a T helper 2-type cytokine and was reported to be expressed in various cells of the adaptive immune system as well as cells of the innate immune system. Patients with IBS exhibit a cytokine imbalance characterized by decreased levels of IL-10 and increased levels of TNF-α. [1,7] Lower levels of IL-10 can be a predictor of IBS development. [21] The T allele of IL-10 rs1800896 has been reported to be associated with lower production of IL-10, while the C allele is associated with higher IL-10 production. [16,22] The C allele is also associated with IL-10 production in low-stimulation cell cultures. [23] In addition, there are very few studies that report a direct correlation between IL-10 polymorphisms and IL-10 production. Our data confirmed that the subjects carrying the C allele of IL-10 rs1800896 exhibited a higher IL-10 concentration in the plasma and the PBMC culture supernatant. 
Additionally, we also demonstrated that the subjects with the CT genotype were less sensitive in the rectal distention test than the subjects with the TT genotype. Earlier studies demonstrated that IL-10 and other inflammatory cytokine concentrations in PBMCs correlated with the severity of symptoms in IBS-D, including the intensity and frequency of painful events and motility-associated symptoms. [24] Some researchers hypothesize that IBS is on a spectrum with inflammatory bowel disease (IBD), because they have largely overlapping pathophysiological mechanisms and clinical symptoms. [25,26] For example, abdominal pain and diarrhea are predominant symptoms in patients with IBS or IBD, and post-inflammatory abnormalities are important in both IBD and IBS. Subjects without the C allele are more likely to develop IBS after intestinal infection. [3,27] Additionally, the polymorphisms of the IL-10 gene can exert a protective effect in patients with IBD, ulcerative colitis, or Crohn disease. [28,29] Our finding was intriguing as the IL-10 rs1800871 polymorphism exhibited a marginally positive correlation with the HADS score. The development of depressive disorder is considered to be associated with the activation of systemic inflammation as well. [30,31] It has been reported that the IL-10 rs1800871 and IL-10 rs1800896 genotypes decrease the risk for developing depression and are correlated with the Hamilton depression rating scale. [32] Globally, patients with IBS have a high comorbidity rate for depression or anxiety. [33] Our study supports the correlation between IBS and comorbid mental disorders from the perspective of genetic polymorphisms. To the best of our knowledge, this study is among the first to examine the correlation between IL-10 gene polymorphisms and the expression level of IL-10. However, this study has some limitations. Only two IL-10 polymorphisms were analyzed in this study. There may be other SNPs in IL-10, linked to rs1800896 and/or rs1800871, that contribute to IL-10 expression. On the other hand, further studies are required for evaluating the transcriptional and epigenetic effects of IL-10 polymorphisms. In summary, the IL-10 rs1800896 polymorphism is correlated with a higher concentration of IL-10 in both the plasma and the PBMC culture supernatant of Chinese patients with IBS-D. Additionally, this SNP is also associated with a higher visceral pain threshold. This study demonstrated a correlation between the IL-10 polymorphisms and IL-10 production, which might help in understanding the pathogenesis of IBS-D.
Heat Emergencies: Perceptions and Practices of Community Members and Emergency Department Healthcare Providers in Karachi, Pakistan: A Qualitative Study Heat waves are the second leading cause of weather-related morbidity and mortality, affecting millions of individuals globally every year. The aim of this study was to understand the perceptions and practices of community residents and healthcare professionals with respect to identification and treatment of heat emergencies. A qualitative study was conducted using focus group discussions and in-depth interviews with the residents of an urban squatter settlement, community health workers, and physicians and nurses working in the emergency departments of three local hospitals in Karachi. Data were analyzed using content analysis. The themes that emerged were (1) perceptions of the community on heat emergencies; (2) recognition and early treatment at home; (3) access and quality of care in the hospital; (4) recognition and treatment at the health facility; (5) facility level plan; (6) training. Community members were able to recognize dehydration as a heat emergency. Males, the elderly, and school-going children were considered at high risk for heat emergencies. The timely treatment of heat emergencies was widely linked with the availability of financial resources. Limited availability of water, electricity, and open public spaces was identified as a risk factor for heat emergencies. Home-based remedies were reported as the preferred practice for treatment by community members. Both community members and healthcare professionals were cognizant of heat-related emergencies and their recognition. Introduction Heat emergencies are a public health problem of significance. Global climate change has been related to rising temperatures, which have resulted in heatwaves, more severe droughts, heavy rains, and intense hurricanes in various parts of the world [1,2]. These heat-related emergencies have affected communities that were unprepared to handle them. Extreme air temperature contributes directly to cardiovascular and respiratory disease deaths, especially among the elderly. More than 166,000 people died as a result of heatwaves between 1998 and 2017, including more than 70,000 in Europe during the 2003 heatwave. Between 2000 and 2016, the number of people exposed to heatwaves went up by nearly 125 million. The UNHCR's report on weather-related incidents from 1995 to 2015 highlighted the importance of risk identification and reduction in vulnerable countries [3]. With regard to heat events, Chicago, United States, reported 514 deaths in July 1995 [4], followed by California, which recorded an increased number of emergency visits in 2006 [5]. A similar burden of heat emergencies has been observed in Western Europe, including France, Germany, and Spain [6,7]. In addition, New South Wales reported a 13% rise in all-cause mortality and a 14% increase in ambulance calls related to heat emergencies in 2011 [8]. Furthermore, India reported 1000 excess deaths in Ahmedabad as a result of extreme heat in 2010 [9]. Pakistan ranks 7th on the list of the top 10 countries, with a climate risk index score of 30.5 and a total of 141 weather-related events in the past 10 years [10]. Karachi, located in a hot climatic area, is at risk for heat-related emergencies due to its high concentration of infrastructure and limited greenery [11]. 
Temperatures soar to an extreme of 45 °C during the months of May to July, along with high humidity of 30-95% as a daily average from 2006 to 2015 [12]. The 30-year average maximum temperature for June, the hottest summer month in Karachi, is 34.8 °C [13]. There was a "heat spell" in Karachi in June 2015 that caused a state of emergency in the city, claiming more than 1200 lives in a span of four days [14]. Heat emergencies include dehydration, heat cramps, heat exhaustion, heat stroke, and death [15]. Quantitative literature about the epidemiology of heat emergencies has identified that the elderly, children, outdoor workers, and people with comorbidities are at greater risk for heat emergencies [1]. The approach used by most epidemiologists to measure heat illnesses has been quantitative [16,17]. Some researchers have identified the need for analyzing the vulnerability of at-risk populations through a qualitative approach. One such study from Sweden explored communities' perceptions regarding city heat and identified social isolation and female gender as risk factors for heat emergencies in addition to other known risks [16]. A similar study from Australia identified that negative perceptions about heat disasters among the elderly increased their risk for heat illness and hospitalization [18]. Some qualitative studies have explored the perceptions of outdoor workers towards occupational heat injuries, but little is known about the perceptions of healthcare providers and vulnerable communities towards heat emergencies [15,[19][20][21][22][23][24]. It is hypothesized that a lack of water supply, long and unpredictable power outages, and a lack of awareness among communities and residents contributed to the high number of heat-related deaths in Karachi in 2015, but evidence is lacking [12]. To better understand the threats that heat-related emergencies pose to Karachi's populations, the perceptions of local community members and healthcare workers must be explored, the findings of which can be used to develop heat-related emergency prevention strategies [25]. Therefore, the aim of this study was to understand the perceptions and practices of healthcare providers and community members regarding the diagnosis and treatment of heat emergencies in Karachi. Study Design The study was conducted in November 2017, in Karachi, Pakistan, as part of the Heat Emergency Awareness and Treatment (HEAT) trial (NCT03513315). The HEAT trial was carried out in the summer months. However, this qualitative study, embedded in the HEAT trial, was conducted in November, considering that heatwaves are a tropical phenomenon in Pakistan. In Karachi, the temperature in the month of November remains moderate, ranging from 20 to 32 °C. In addition, we think that perceptions about heat are independent of current weather conditions. One of the objectives of the HEAT trial was to develop a community heat prevention guideline. To facilitate the contextual development of the guideline catering to the needs of the community and emergency healthcare workers, a qualitative study was conducted using focus group discussions (FGDs) and in-depth individual interviews (IDIs). The participants were community members, community healthcare workers, and Emergency Department healthcare workers. The study settings were the Ibrahim Hyderi settlement of Karachi, where the cluster randomized trial was conducted, and three hospitals considered to serve the catchment population of the area. 
Study Population and Setting Focus group discussions were conducted with the residents of Ibrahim Hyderi (a local vicinity in Karachi), including community healthcare workers. The sample pool of FGDs comprised a wide range of participants to ensure variability in the responses. They were approached through a health organization working in the area that is trusted by the community. This health organization was part of the HEAT trial. The FGDs were conducted at the office of the health organization. The discussions lasted around 45 min. In-depth interviews were conducted with emergency care workers of three major hospitals in Karachi. All the interviews were conducted at the participant's hospital, at a convenient place and time. The average duration of the interviews was 30 min. Data Collection and Analysis The FGDs and IDIs were conducted until saturation was achieved [26]. The research team, trained in qualitative research, moderated the FGDs and IDIs. The research team and participants were unknown to each other to establish a common epistemological ground for the interviews [27]. The FGDs and IDIs were conducted by two members of the research team (NA and RN). A semi-structured IDI and FGD guide was prepared by the research team and reviewed by a qualitative research expert and a heat emergencies expert (Appendices A and B). All IDIs and FGDs were conducted in the local language, Urdu, and were audio-recorded after obtaining written consent from all the participants. Each participant was assigned a unique code. Confidentiality was maintained using these unique codes during transcription of the audio interviews. This allowed the researchers to anonymize the interviews while identifying valuable information like demographics, themes, and subcategories. The interviews were transcribed by two authors (NA and RN). These authors then used a content analysis approach [28] to identify the emerging themes and subthemes from the transcripts (Figure 1). Consensus was then reached between the two authors and a thematic dictionary was created, with definitions for each theme and quotes to support it. Rigor and trustworthiness in the study were established by following Lincoln and Guba's criteria [29]. The study's credibility was enhanced by emphasizing the purpose of learning from respondents through an open and nonjudgmental attitude of the interviewer during FGDs and IDIs. 
Approval to conduct the study was obtained from the Ethics Review Committee of the Aga Khan University, Karachi, Pakistan. Results A total of 30 people participated in the FGDs (Table 1). The mean age of the sample was 34 years. The women were housewives and community healthcare workers, whereas half of the men were fishermen and the rest were a tailor, students, and office workers. Among the participants, 60% had intermediate-level education. Themes Emerging from FGDs with Community Residents Respondents discussed the challenges that the community faces during periods of extreme heat in Karachi. In-depth narratives revealed the preventive and treatment mechanisms used by the community to deal with heat emergencies. Two themes were drawn from the narratives, depicting recognition and treatment of heat emergencies in the community and access to and quality of care in the hospitals in terms of heat crises. Recognition and Early Treatment of Heat Emergencies at Home Participants from all three FGDs acknowledged that early recognition of symptoms is important in managing patients with heat emergencies. One of the challenges in treating heat-related cases at home is that they are not recognized in the early stages, which prevents the implementation of appropriate care; participants expressed that this lack of early recognition is the biggest barrier to treatment. Initially, cases are treated for raised body temperature with acetaminophen, home remedies, and sprinkling water over the head and eyes if the patient has lost consciousness. The majority mentioned lack of financial resources as the cause of over-reliance on home remedies, while others stated that people lacked the ability to recognize heat-induced conditions. 
A CHW from community informed, "People in the community keep treating patients at home at the initial stages, the decision to go to the hospital is taken in extreme conditions. Sometimes they call us, and we help them but most of the time treatment takes place at home with home remedies." (FGD # 3) Almost all the participants in the FGDs stated that drinking plenty of water during hot weather could help prevent heat emergencies. Moreover, the community residents informed that drinking lemonade and other homemade drinks, such as "lassi" (a mixture made of yogurt, milk, and water) helps hydrating the body. However, they emphasized that only simple water is enough to rehydrate if it is available. A male member of the community narrated, "Drinking lemonade has good effects on the body on hot summer days, I drink a lot whenever I am feeling low" (FGD # 1) Similarly, a mother from the community verbalized, Community residents, both males and females perceived that drinking water or taking a shower soon after entering home in hot weather can cause paralysis of body or other infectious diseases and thought that this practice needed to be avoided. A male member from community verbalized, "I always advise my kids to avoid the practice of drinking cold water immediately after entering home in hot weather; take rest for a few minutes to allow the body to cool down. Drinking water immediately could cause other diseases, therefore, drink water once your body temperature comes to normal" (FGD # 1) Modification in lifestyle can play a pivotal role in preventing heat emergencies such as drinking plenty of water and reducing strenuous activities. Participants narrated that eating less spicy foods and wearing light colored clothes help the body in maintaining its homeostasis. The male members thought that wearing a cap when going out in the sun and drinking water frequently during work can help a person stay safe from the harmful effects of heat. On the other hand, the community health workers (CHWs) believed that the community residents are not always aware of these preventive measures, which results in incidence of dehydration and heat emergencies. Both, the male community residents and CHWs felt that the structure of the houses in the community also contributed to heat emergencies, as the element of ventilation is rarely considered during construction. One male member from community expressed, "The design of the house is rarely considered during construction of the house, even the rooms within the house are not well ventilated" (FGD # 1) Heat emergencies in the community can be considerably prevented by ensuring the adequate supply of water and electricity. In summers, the minimal availability of both largely affects health of community dwellers. Frequent power outrages for hours and sometimes for a stretch of days affect supply of water and, because of which, the community residents must drink stored and unhygienic water. A female from community narrated that, "We face a lot of issues with the supply of water in summers, unavailability of electricity hampers water supply. We bring water from our neighborhoods and store" (FGD #2 group) Access and Quality of Care in the Hospital The number of healthcare facilities available in the community is inadequate with limited health services only targeting specific diseases. One male member of community stated that lack of transport facilities to transfer patients to healthcare facilities posed an added financial burden on the families. 
Therefore, often the decision regarding seeking healthcare depends on the financial resources, and medical care is accessed only in extreme situations. "We have to hire a private car to transfer our patients to a larger healthcare facility. We take this decision in cases when the patient is critically ill or about to die and when all the home remedies have been tried on the patient [ . . . ] because we can't afford private transport" (FGD # 1) Moreover, the participants expressed concerns regarding lack of trust in emergency medical services. They stated that poor quality of services, harsh behavior of the healthcare professionals and the complex process of getting care at public sector hospitals made it difficult to avail healthcare services. They, therefore preferred home remedies or seeking healthcare from nearby private healthcare facilities, despite financial hardship. One participant said, "I have a very bad experience of going to a doctor because of number of reasons, one of them is their harsh behavior." (FGD #3) Cultural constraints are another contributing factor for not seeking healthcare, particularly among women because they are not allowed to go alone without a male chaperone. As one female verbalized, "We have to wait till evening for our males to arrive home and accompany us to a doctor and often the nearby clinics are closed by then" (FGD #2) Themes Emerging from IDIs of Healthcare Professionals The participants in the IDIs were doctors and nurses ranging from 25 to 50 years of age and all of them were involved in the management of patients during the heat wave emergency of 2015 (Table 2). Three themes emerged from the interviews with the healthcare professionals. However, both healthcare professionals and community members emphasized the importance of the early recognition and treatment of heat emergencies. Recognition and Treatment of Heat Emergencies Generally, the healthcare providers felt confident of their ability to recognize heat illnesses. They thought that they were more aware of signs and symptoms of heat illnesses since the heat wave of 2015. When patients visit an emergency department with dehydration, dizziness and rapid pulse; healthcare providers recognize that this is heat illness and provide treatment to them accordingly. A male doctor from a public health facility stated, "So sometimes in extreme heat when the temperature rises up to 40 degrees, patients present with dehydration. They often visit with complaints about altered level of consciousness and shivering or chills as well. So, in this scenario we hydrate them for their survival." (IDI 01-DP20N-M) The respondents further explained other signs and symptoms that patients affected by heatwaves, such as lethargy, low blood pressure, rapid heart-beat, and high body core temperature. A male nurse from a public health facility stated, "Patients are lethargic, they have low blood pressure, and if they have had more sun exposure, they come with high temperature as well" (IDI 02-NP7Y-M) Regarding treatment of patients with heat emergencies, healthcare providers bring down the patient's temperature by sponging, icing, and keeping them in air-conditioned room. Further, they follow the workup for heat affected patients as soon as the patient is identified as having a heat stroke. Two participants expressed this as follows, "To manage heat stroke patients, we mainly do sponging. 
We have a tub and pipe for them to take showers, we do icing, we keep them in an airconditioned area and lastly we give 5% Dextrose, if not controlled." (IDI 06-DDY-M) "Patients often do complain of severe headache as soon as they reach to hospital. We immediately take them inside and sponge their heads, we check their temperature, secure IV line, and hydrate them." (IDI 04-ND28Y-F) Facility Level Plan Based on their experience, the participants stated that dealing with heat emergencies required facility level planning and physical resources. They mentioned that essential supplies are needed to handle heat emergencies as per the plan. They also emphasized the need for a proper management plan to address the burden of heat emergencies; plans were being followed to some extent in each facility. A respondent from a public health facility stated, "We need ample fluids because dehydration is common in heat strokes. We should be prepared to have all the items that should be in the crash cart such as IV cannula, oxygen masks, medicines and intubation for more sick patients." (IDI 02-NP7Y-M) The participants further highlighted the need for a multidisciplinary team to manage heat-stroke. A doctor and a nurse from the public health facilities expressed their views as follows, "This should be teamwork, not only work of the emergency department. We have to involve other specialties such as medicine department, nephrology department, cardiology etc. [ . . . ] Sometimes patients become very sick, and this leads to sepsis. So, in this situation an infectious disease specialist should be contacted to deal with such patients. So, teamwork is essential, without teamwork it is not possible." (IDI 01-DP20N-M) "The first priority is that there should be air-conditioned areas so that patients get cooled directly as they enter the hospital, and their temperature stays on low. Simultaneously, fluids should be provided to them to balance their electrolytes" (IDI 03-ND28-H-F) A doctor from the public hospital mentioned the physical resources required for heat stroke patients, such as proper ICU, ward, and ambulance for timely transportation. She further explained that a standard set of requirements is the same for routine patient and patients affected with heat-stroke. A female doctor from a public health facility mentioned, "Proper ICU is required, proper ward care is required, an ambulance to shift patients is required, so we need everything that a normal routine patient need" (IDI 05-DD29-Y-F) Training Most of the participants emphasized that there should be regular trainings on the management of heat emergencies. The training should cover all the components, from basic to complex case scenarios, on disease recognition, diagnosis, and management. In addition, they emphasized on the availability of guidelines for uniformity in practices. They stated that guidelines will improve the clinical decision making of the healthcare providers. As two of the participants said, "There should be regular trainings and reinforcement on the implementation of guidelines to bring uniformity in practice." (IDI 03-ND28-H-F) "There should be theoretical and practical training so that one is aware that summer is coming, and what should be the criteria to deal with heat stroke patients." (IDI 01-DP20N-M) Discussion A qualitative approach was used in this study to investigate the perceptions and practices of a local Karachi community and its healthcare providers regarding recognizing and managing heat emergencies. 
Both the community and healthcare providers were aware of heat emergencies, especially after the 2015 Karachi heat wave. The discussion with community members revealed that the socioeconomic status of households can have a significant impact on the treatment of heat illnesses. The private sector of the healthcare system in Pakistan delivers 70% of the total healthcare services which is based on fee for services [30]. There is also a huge disparity in accessing healthcare services with 30% of the population living with absolute poverty. In addition, the healthcare expenditure as percentage of the gross domestic product is only 3.2% in Pakistan [31]. Similar findings have previously been observed; individuals from low socioeconomic status had poorer health outcomes and higher mortality because they lived in small, overcrowded housing with limited access to water, cooling appliances, and health facilities, as seen in Karachi's 2015 heat wave [32][33][34]. Emergency care workers identified an increased burden on the emergency department as a result of heat illnesses in a city that already has a high prevalence of endemic illnesses [35]. Outdoor laborers were found to be more vulnerable to heat emergencies. These results are consistent with a study from Ahmedabad, India, which listed outdoor workers as a vulnerable group for heat illnesses [36]. Previous studies have identified workers in humid indoor/outdoor conditions [14] and women working in kitchens as vulnerable groups, which were not identified in our study. Heat emergencies have previously been identified as largely preventable through the provision of necessities such as water and electricity, as well as educating about the health risks of heat [15,21,22]. Furthermore, drinking traditional yoghurt, which is commonly used in South Asia provides the body with the liquid and nutrients in an easily digestible form, which are lost while sweating as an effect of exposure to heatwave. [37]. Members of the community identified financial and structural barriers to accessing emergency care for heat illnesses. In Pakistan, financial constraints are a common and consistent barrier to accessing healthcare during any illness, including heat emergencies [38]. Furthermore, cultural barriers, such as the belief that a woman visiting a healthcare facility without being accompanied by a male member is unacceptable, hinder timely treatment for heat-related illnesses, despite the fact that women are more vulnerable to heat emergencies while performing domestic chores. Cultural barriers are one of the many reasons for low utilization of healthcare services in many low-and middle-income countries (LMICs) [39]. Poor quality of emergency care services was perceived to be an impeding factor by the communities in obtaining healthcare for heat-related illnesses. Poor quality of care is one of many factors contributing to the underutilization of public healthcare facilities in LMICs [40]. Emergency department personnel had little knowledge of the signs and symptoms of heat illnesses, such as describing shivering as one of the symptoms, which is not indicative of heat illnesses, and mentioning the use of Dextrose 5% fluid as a treatment, which is not a standard of care [41]. In addition, differentiating between body core, skin, and air temperatures is critical for accurate diagnosis of heat emergencies. In Karachi, there is no uniform method for measuring body core temperatures in hospitals. 
However, measuring body core temperature and obtaining a history of heat exposure where possible are important for diagnosing heat-related illnesses, especially heat stroke, in the emergency department [42]. In the context of widespread local diseases and outbreaks, attributing the symptoms to heat was deemed challenging. Our results are consistent with a study conducted in Germany, in which general practitioners (GPs) were unable to identify environmental conditions as possible risk factors for heat emergencies [43]. The healthcare professionals expressed the need for a facility-level response plan to deal with heat emergencies effectively. The 2015 heatwave in Karachi led to the development of the first heatwave management plan, in consultation with national and international experts [12]. This plan set out strategies for relevant agencies to ensure timely information on weather conditions, interagency coordination, and a public-triggered activation system. In addition, strengthening primary healthcare services, to make them more responsive to the management of heat-related illnesses, can reduce the burden of heat emergencies in communities. Although ineffective primary care was not identified directly by our community, previous literature shows that the primary healthcare system serves as an early treatment hub for patients [44]. However, this is lacking in Pakistan [45]. Primary healthcare services may improve the health outcomes of patients by detecting diseases in their early stages, and they can be one of the core elements in the implementation of an extreme heat disaster management plan [46]. Limitations and Strengths of the Study This study had several limitations. Despite the small sample size, potential themes emerged from grouping the participants' responses, and saturation in the responses was achieved. The practices reported by the emergency healthcare workers might have been influenced by the temptation to offer desirable responses, believing their activities were being scrutinized. Moreover, the findings of the study are not intended to be generalizable beyond the scope of the study. Despite these limitations, the study provided contextual knowledge to facilitate the construction of heat prevention and treatment guidelines for communities and healthcare professionals. The study explored the perceptions of the community and healthcare workers simultaneously, which is a strength of this study. The findings from this study can be applicable to other resource-constrained settings similar to Karachi. The heterogeneity of the sample pool was ensured by recruiting a wide range of male and female respondents, including community dwellers, nurses, physicians and administrative workers who were knowledgeable about and had experience of the phenomenon of interest [47]. Perspectives of policy makers and representatives of the implementation agencies could be another interesting aspect to include in future research. Conclusions This qualitative study has provided new insights into the context of communities and healthcare professionals facing the consequences of heat exposure in Karachi. Study findings suggest that there is awareness of heat emergencies among community members in Karachi. The community perceived dehydration as a heat emergency and managed it with home remedies, cooling, and hydration. Furthermore, heat emergencies were identified to be connected with a shortage of electricity and water supply.
The healthcare workers had limited awareness of the signs and symptoms of heat emergencies and perceived the early recognition and treatment of heat emergencies as challenging, in the face of endemic infections with similar presentations. Furthermore, poor quality of public healthcare services, inadequate training, and ineffective implementation of heat wave preparedness plans were identified as impeding factors in the treatment of heat emergencies. Considering these aspects, there is a need to carry out preventive actions that take into account the socioeconomic challenges of the communities. This may inform heat prevention policies in communities facing longer and more intense hot spells.
Evidence for past interaction with an asymmetric circumstellar shell in the young SNR Cassiopeia A Observations of the SNR Cassiopeia A (Cas A) show asymmetries in the reverse shock that cannot be explained by models describing a remnant expanding through a spherically symmetric wind of the progenitor star. We investigate whether a past interaction of Cas A with an asymmetric circumstellar shell can account for the observed asymmetries. We performed 3D MHD simulations that describe the remnant evolution from the SN to its interaction with a circumstellar shell. The initial conditions are provided by a 3D neutrino-driven SN model whose morphology resembles Cas A. We explored the parameter space of the shell, searching for a set of parameters able to produce reverse shock asymmetries at the age of 350 years analogous to those observed in Cas A. The interaction of the remnant with the shell can produce asymmetries resembling those observed in the reverse shock if the shell was asymmetric, with the densest portion on the nearside to the northwest (NW). The reverse shock shows the following asymmetries at the age of Cas A: i) it moves inward in the observer frame in the NW region, while it moves outward in other regions; ii) the geometric center of the reverse shock is offset to the NW from the geometric center of the forward shock; iii) the reverse shock in the NW region has enhanced nonthermal emission because, there, the ejecta enter the reverse shock with a higher velocity (between 4000 and 7000 km/s) than in other regions (below 2000 km/s). The asymmetries observed in the reverse shock of Cas A can be interpreted as signatures of the interaction of the remnant with an asymmetric circumstellar shell that occurred between 180 and 240 years after the SN event. We suggest that the shell was, most likely, the result of a massive eruption from the progenitor star that occurred between $10^4$ and $10^5$ years prior to core-collapse. We estimate a total mass of the shell of the order of 2 Msun. Introduction Cassiopeia A (in the following Cas A) is one of the best-studied supernova remnants (SNRs) of our Galaxy. Its relative youth (with an age of ≈ 350 years; Fesen et al. 2006) and proximity (at a distance of ≈ 3.4 kpc; Reed et al. 1995) make this remnant an ideal target to study the structure and chemical composition of the stellar material ejected by a supernova (SN). The analysis of multi-wavelength observations allowed some authors to reconstruct in great detail its three-dimensional (3D) structure (e.g., DeLaney et al. 2010; Milisavljevic & Fesen 2013; Grefenstette et al. 2014, 2017). Several lines of evidence suggest that the morphology and expansion rate of Cas A are consistent with a remnant mainly expanding through a spherically symmetric wind of the progenitor star (e.g., Lee et al. 2014). Thus, the vast majority of anisotropies observed in the remnant morphology most likely reflect asymmetries left from the earliest phases of the SN explosion. This makes Cas A a very attractive laboratory to link the physical, chemical and morphological properties of a SNR to the processes at work during the complex phases of the SN. First attempts to link Cas A to its parent SN were very successful and have shown that the bulk of asymmetries observed in the remnant are intrinsic to the explosion (Orlando et al.
2016), and the extended shock-heated Fe-rich regions evident in the main shell originate from large-scale asymmetries that developed from stochastic processes (e.g., convective overturn and the standing accretion shock instability; SASI) during the first seconds of the SN blast wave (Wongwathanarat et al. 2017). More recently, Orlando et al. (2021; in the following Paper I) have extended the evolution of the neutrino-driven core-collapse SN presented in Wongwathanarat et al. (2017) up to the age of 2000 years, with the aim of exploring how, and to what extent, the remnant keeps memory of post-explosion anisotropies imprinted on the ejecta by the asymmetric explosion mechanism. Comparing the model results for the SNR at ≈ 350 years with observations shows that the main asymmetries and features observed in the ejecta distribution of Cas A result from the interaction of the post-explosion large-scale anisotropies in the ejecta with the reverse shock. The above models, however, do not explain one of the most intriguing aspects of the Cas A structure, as evidenced by the analysis of the position and velocity of the forward and reverse shocks. Observations in different wavelength bands indicate a forward shock expanding with a velocity of ≈ 5500 km s −1 along the whole remnant outline (e.g., DeLaney & Rudnick 2003; Patnaude & Fesen 2009; Fesen et al. 2019; Vink et al. 2022) and a reverse shock moving outward with velocity ranging between 2000 km s −1 and 4000 km s −1 in the eastern and northern hemispheres of the remnant (e.g., Sato et al. 2018; Fesen et al. 2019; Vink et al. 2022). These velocities are broadly consistent with the remnant expanding through a spherically symmetric wind of the progenitor star and, in fact, are well reproduced by the models (e.g., Orlando et al. 2016 and Paper I). The observations, however, suggest that the reverse shock in the southern and western quadrants of Cas A is stationary or is even moving inward in the observer frame toward the center of the explosion (e.g., Anderson & Rudnick 1995; Keohane et al. 1996; DeLaney et al. 2004; Morse et al. 2004; Helder & Vink 2008; Sato et al. 2018), at odds with the model predictions (e.g., Vink 2020; Vink et al. 2022; see also Paper I). In addition, observations of Cas A show an offset of ≈ 0.2 pc (at the distance of 3.4 kpc) between the geometric center of the reverse shock and that of the forward shock (Gotthelf et al. 2001) that cannot be reproduced by the models (see Paper I). The above results become even more puzzling when looking at the X-ray synchrotron emission associated with the forward and reverse shocks. Helder & Vink (2008) have shown that the reverse shock radiation is limited to a thin spherical shell partially visible mainly in the western hemisphere and shifted toward the west with respect to the remnant outline (thus again suggesting an offset between the centers of the reverse and forward shocks). A similar conclusion was reached by analyzing radio observations of Cas A (see upper panel of Fig. 1), which show that the forward and reverse shocks are much closer to each other and the radio emission is higher in the western than in the eastern hemisphere (Arias et al. 2018). Helder & Vink (2008) have proposed that the high synchrotron emission in the western hemisphere is due to a locally higher reverse shock velocity in the ejecta rest frame (≈ 6000 km s −1 ), so that, there, the reverse shock is able to accelerate electrons to the energies needed to emit X-ray synchrotron radiation (see lower panel of Fig.
1). Some hints about the possible cause of the unexpected reverse shock dynamics come from the evidence that isolated knots of ejecta show a significant blueshift/redshift velocity asymmetry: ejecta traveling toward the observer have, on average, lower velocities than ejecta traveling away (e.g., Reed et al. 1995; DeLaney et al. 2010; Milisavljevic & Fesen 2013). It is debated whether this is due to the explosion dynamics (Isensee et al. 2010) or expansion into an inhomogeneous circumstellar medium (CSM; Reed et al. 1995; Milisavljevic & Fesen 2013). Observations of slow-moving shocked circumstellar clumps in the remnant, the so-called "quasi-stationary flocculi" (QSFs), seem to favor the latter scenario. In fact, these structures are, in large majority, at blueshifted velocities (Reed et al. 1995), implying that more CSM material lies in front of Cas A than behind it. This may suggest that the asymmetries associated with the velocities of ejecta knots and, most likely, the evidently asymmetric structure of the reverse shock in Cas A may reflect the interaction of the remnant with an inhomogeneous structure of the CSM. Further support to the scenario of inhomogeneous CSM comes from radio and X-ray observations, which suggest that the remnant is interacting with a density jump in the ambient medium (probably a local molecular cloud) in the western hemisphere (Keohane et al. 1996; Sanders 2006; Hwang & Laming 2012; Kilpatrick et al. 2014). More recently, Sato et al. (2018) have analyzed Chandra and NuSTAR observations of Cas A (see lower panel of Fig. 1), identifying inward-moving shocks in the observer frame located from a region close to the compact central object (inside the mean reverse shock radius derived by Gotthelf et al. 2001) to the maximum of the dense shell brightness to the west (coincident with the reverse-shock circle). The authors connected these shocks with the brightest features in X-ray synchrotron radiation seen with NuSTAR. Since, in spherical symmetry, an inward-moving reverse shock is not consistent with the dynamical age of a SNR as young as Cas A, they proposed that the inward-moving shocks are reflected shocks caused by the interaction of the blast wave with a molecular cloud with a density jump > 5. The possibility that the remnant is interacting with molecular clouds is reasonable and may explain some of the features of the reverse shock (e.g., Kilpatrick et al. 2014; Sato et al. 2018). This interaction would imply a deceleration of the forward shock that propagates through a denser medium and, possibly, an indentation in the remnant outline (e.g., Slane et al. 2015). However, neither of these signatures of interaction is clearly visible in Cas A: the forward shock shows similar velocities along the remnant outline, and both the forward and reverse shocks have shapes that are roughly spherical, without any sign of interaction with a molecular cloud. Furthermore, most of the molecular gas detected lies in the foreground of Cas A (e.g., Krause et al. 2004; Wilson & Batrla 2005; Dunne et al. 2009; Koo et al. 2018) and would not have any effect on the propagation of the forward and reverse shocks. If, on the one hand, an ongoing interaction of the remnant with a molecular cloud does not seem plausible as an explanation for the asymmetries observed in the reverse shock of Cas A, it is possible, on the other hand, that the remnant has encountered a dense shell of the CSM in the past (e.g., Borkowski et al.
1996) and the signatures of that interaction are now visible in the structure of the reverse shock. In fact, massive stars are known to experience episodic and intense mass loss events before exploding as SNe. These events may be related, for instance, to the activity of luminous blue variable stars (LBVs; Conti 1984) and Wolf-Rayet stars (WR stars; Foley et al. 2007; Pastorello et al. 2007, 2008; Smith et al. 2020). In these cases, after the explosion, the shock wave from the SN travels through the wind of the progenitor and, at some point, collides with the material of pre-SN mass loss events (see Smith 2014 for a recent review). For instance, strong indications of interaction with a circumstellar shell, likely associated with residual wind material, have recently been found in the Vela SNR (Sapienza et al. 2021). Observations of light echoes showed that Cas A is the remnant of a Type IIb SN (Krause et al. 2008; Rest et al. 2011). This implies that its progenitor star had shed almost all of its H envelope (see also Kamper & van den Bergh 1976; Chevalier & Kirshner 1978) before the core-collapse. Various hypotheses have been proposed to explain how the progenitor star of Cas A lost its envelope: via its own stellar wind (e.g., Heger et al. 2003), or via binary interaction that involves mass transfer and, possibly, a common-envelope phase (e.g., Podsiadlowski et al. 1992), or via interaction of the progenitor with the first SN of a binary that removed its envelope (Hirai et al. 2020). In any case, the expanding remnant, at some point, should have encountered and interacted with the gas of these pre-SN mass loss events. In an early study, Chevalier & Liang (1989) suggested that Cas A interacted with a circumstellar shell in the past and identified its bright ring with the shocked shell. A few years later, this idea was further investigated by Borkowski et al. (1996) through a 1D numerical model. These authors proposed that the blast wave of Cas A traveled through an inhomogeneous CSM characterized by a circumstellar shell resulting from the interaction of the slow stellar wind in the red supergiant stage of the progenitor star with the faster wind in the subsequent blue supergiant stage. Koo et al. (2018) have interpreted the spatial distribution of QSFs observed in Cas A as evidence of a massive eruption from the progenitor system to the west, which most likely occurred $10^4-10^5$ years before the SN. Observations of the circumstellar environment around Cas A have also shown evidence of nebulosities that have been interpreted to be the relics of the red supergiant mass-loss material from Cas A's progenitor (Weil et al. 2020). Here, we investigate whether some of the large-scale asymmetries in the reverse shock of Cas A may reflect the past interaction of the remnant with a dense shell of CSM, most likely the consequence of an episodic mass loss from the progenitor massive star that occurred in the latest phases of its evolution before collapse. To this end, we reconsidered our model for describing the remnant of a neutrino-driven core-collapse SN that reproduces the main features of Cas A (presented in Paper I), but added the description of an asymmetric shell of CSM with which the remnant interacts within the first 300 years of evolution.
We performed an extensive simulation campaign to explore the parameter space of the shell and derived, from the models, the profiles of the forward and reverse shock velocity versus the position angle in the plane of the sky at the age of Cas A. By comparing the profiles derived from the models with those inferred from the observations (Vink et al. 2022), we identified the models that reproduce the observations better than others. Since no complete survey of the parameter space can possibly be done, we do not expect an accurate match between models and observations, and we cannot exclude that shells with structures different from those explored here do a better job of matching the observations. Indeed, we do not aim at deriving an accurate, unique description of the CSM. Our idealized shell model aims at showing that the main large-scale asymmetries observed in the reverse shock of Cas A (namely, the inward-moving reverse shock observed in the western hemisphere, the offset between the geometric centers of the reverse and forward shocks, and the evidence that the nonthermal emission from the reverse shock is brighter in the western than in the eastern region) can be naturally explained as the result of a past interaction of the remnant with a circumstellar shell. The study is also relevant for disentangling the effects of interior inhomogeneities and asymmetries (produced soon after the core-collapse) from those produced by the interaction of the remnant with an inhomogeneous CSM. Future studies are expected to consider a structure of the CSM derived self-consistently from the mass-loss history of the progenitor system; in this way, the comparison between models derived from different progenitors and observations of Cas A would be able to provide some hints on the nature and mass-loss history of the stripped progenitor of Cas A and, possibly, to shed light on the question of whether it was a single star or a member of a binary. The paper is organized as follows. In Sect. 2 we describe the model setup; in Sect. 3 we discuss the results for the interaction of the SNR with the asymmetric dense shell of CSM; and in Sect. 4 we summarize the main results and draw our conclusions. In Appendix A, we discuss an alternative model for the asymmetric circumstellar shell. Problem description and numerical setup We adopted the numerical setup presented in Paper I, which describes the full development of the remnant of a neutrino-driven SN, following its evolution for ≈ 2000 years. The setup is the result of the coupling between a model describing a SN explosion with remarkable resemblance to basic properties of Cas A (model W15-2-cw-IIb; Wongwathanarat et al. 2017) and hydrodynamic (HD) and magneto-hydrodynamic (MHD) simulations that describe the formation of the full-fledged SNR (e.g., Orlando et al. 2016). The 3D SN simulation follows the evolution from about 15 milliseconds after core bounce to the breakout of the shock wave at the stellar surface at about 1 day after the core-collapse. Then, the output of this simulation was used as the initial condition for 3D simulations which follow the transition from the early SN phase to the emerging SNR and the subsequent expansion of the remnant through the wind of the progenitor star. A thorough description of the setup can be found in Paper I, while a summary of its main features is provided in Sect. 2.1.
In this paper, the setup was used to investigate whether some of the features observed in the reverse shock of Cas A can be interpreted as signatures of the interaction of the remnant with an asymmetric shell of dense CSM material, most likely erupted by the progenitor star before its collapse (see Sect. 2.2). Modeling the evolution from the SN to the SNR The SN model is described in Wongwathanarat et al. (2017) and considers the collapse of an original 15 M ⊙ progenitor star (denoted as s15s7b2 in Weaver 1995 and W15 in Wongwathanarat et al. 2015) from which the H envelope has been artificially removed (before the collapse) down to a residual ≈ 0.3 M ⊙ (the modified stellar model is termed W15-IIb in Wongwathanarat et al. 2017). This is motivated by the evidence that observations of light echoes suggest that Cas A is the remnant of a Type IIb SN (Krause et al. 2008; Rest et al. 2011), so that its progenitor star had shed almost all of its H envelope before exploding as a SN (see also Kamper & van den Bergh 1976; Chevalier & Kirshner 1978). The model also considers a neutrino-energy deposition able to power an explosion with an energy of 1.5 × 10 51 erg (= 1.5 bethe = 1.5 B; see Wongwathanarat et al. 2013, 2015). After the explosion, an amount of 3.3 M ⊙ of stellar debris was ejected into the CSM. The main model parameters are summarized in Table 1. The SN model takes into account: the effects of gravity (both self-gravity of the SN ejecta and the gravitational field of a central point mass representing the neutron star that has formed after core bounce at the center of the explosion); the fallback of material onto the neutron star; the Helmholtz equation of state (Timmes & Swesty 2000), which includes contributions from blackbody radiation, ideal Boltzmann gases of a defined set of fully ionized nuclei, and arbitrarily degenerate or relativistic electrons and positrons. In addition, the model considers a small α-network to trace the products of explosive nucleosynthesis that took place during the first seconds of the explosion (see Wongwathanarat et al. 2013, 2015). This nuclear reaction network includes 11 species: protons ( 1 H), 4 He, 12 C, 16 O, 20 Ne, 24 Mg, 28 Si, 40 Ca, 44 Ti, 56 Ni, and an additional "tracer nucleus" 56 X, which represents Fe-group species synthesized in neutron-rich environments such as those found in neutrino-heated ejecta (see Wongwathanarat et al. 2017 for details). After the shock breakout, the SN model shows large-scale asymmetries in the ejecta distribution. The most striking features are three pronounced Ni-rich fingers that may correspond to the extended shock-heated Fe-rich regions observed in Cas A. These features naturally developed from stochastic processes (e.g., convective overturn and SASI) during the first second after core bounce (Wongwathanarat et al. 2017). These characteristics make the adopted SN model most promising for describing a remnant with properties similar to those observed in Cas A (see also Paper I). The output of model W15-2-cw-IIb at ≈ 20 hours after the core-collapse was used as initial conditions for the 3D HD and MHD simulations which describe the long-term evolution (≈ 2000 years) of the blast wave and ejecta, from the shock breakout to the expansion of the remnant through the CSM. In Paper I, we analyzed three long-term simulations to evaluate the effects of energy deposition from radioactive decay and the effects of an ambient magnetic field by switching these effects either on or off.
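As a rough order-of-magnitude check (not a quantity computed in the paper), the quoted explosion energy and ejecta mass can be combined into a characteristic ejecta velocity via the simple scaling v ≈ sqrt(2E/M_ej); a minimal sketch in Python:

```python
import math

# Characteristic ejecta velocity implied by the quoted explosion parameters
# (E = 1.5e51 erg, M_ej = 3.3 Msun); a rough estimate, not a quantity
# computed in the paper.
E_EXP = 1.5e51           # explosion energy [erg]
M_SUN = 1.989e33         # solar mass [g]
M_EJ = 3.3 * M_SUN       # ejected mass [g]

v_char = math.sqrt(2.0 * E_EXP / M_EJ)   # [cm/s]
print(f"characteristic ejecta velocity ~ {v_char / 1e5:.0f} km/s")
# roughly 6800 km/s, of the same order as the 4000-7000 km/s with which the
# ejecta are later found to enter the reverse shock on the NW side of Cas A.
```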
Here, we reconsidered the models presented in Paper I and modified the geometry and density distribution of the CSM to describe the interaction of the remnant with a dense shell in the CSM. Our simulations include: i) the effects of energy deposition from the dominant radioactive decay chain 56 Ni → 56 Co → 56 Fe, by adding a source term for the internal energy which takes into account the deposited energy that can be converted into heat (excluding neutrinos, which are assumed to escape freely) and assuming local energy deposition without radiative transfer (Jeffery 1999; Ferrand et al. 2019); ii) the deviations from equilibrium of ionization, calculated through the maximum ionization age in each cell of the spatial domain (see Orlando et al. 2015); iii) the deviations from electron-proton temperature equilibration, calculated by assuming an almost instantaneous heating of electrons at shock fronts up to kT = 0.3 keV (Ghavamian et al. 2007) and by implementing the effects of Coulomb collisions for the calculation of ion and electron temperatures in the post-shock plasma (Orlando et al. 2015); and iv) the effects of back-reaction of accelerated cosmic rays at shock fronts, following an approach similar to that described in Orlando et al. (2012) by including an effective adiabatic index γ eff which depends on the injection rate of particles η (i.e., the fraction of CSM particles with momentum above a threshold value, p inj , that are involved in the acceleration process; Blasi et al. 2005) but neglecting nonlinear magnetic-field amplification. The SNR simulations were performed using the pluto code (Mignone et al. 2007, 2012), configured to compute intercell fluxes with a two-shock Riemann solver (the linearized Roe Riemann solver in the case of HD simulations and the HLLD approximate Riemann solver in the case of MHD simulations; see Paper I for more details). [Notes to Table 1: (a) We ran about fifty 3D high-resolution simulations of the SNR, exploring the space of parameters reported in Table 2; we summarize here only the models that best match the observations. (b) Presented in Wongwathanarat et al. (2017). (c) Presented in Paper I. (d) The shell is characterized by the best-fit parameters reported in Table 2 but with n sh = 10 cm −3 and φ = 0. (e) The shell is the same as in SH1 but rotated by 90° clockwise about the y-axis. (f) The shell is the same as in SH1 but with density n sh = 20 cm −3 and φ = 50° (see Table 2).] The HD/MHD equations were solved in a 3D Cartesian coordinate system (x, y, z), assuming the Earth vantage point to lie on the negative y-axis. The remnant is oriented in such a way that the Fe-rich fingers that developed soon after core bounce point in the same direction as the extended Fe-rich regions observed in Cas A (see Paper I). The large physical scales spanned from the shock breakout to the full-fledged remnant at the age of 2000 years were followed by gradually extending the computational domain (a Cartesian box covered by a uniform grid of 1024 3 zones) as the forward shock propagates outward. The spatial resolution varies from ≈ 2.3 × 10 11 cm (on a domain extending between −1.2×10 14 cm and 1.2×10 14 cm in all directions) at the beginning of the calculation to ≈ 0.018 pc (on a domain extending between −9.4 pc and 9.4 pc) at the end.
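The quoted cell sizes follow directly from the uniform 1024^3 grid and the two domain extents given above; a minimal bookkeeping sketch (the intermediate regriddings between the initial and final domains are not specified in the text and are omitted here):

```python
# Cell size of the uniform 1024^3 grid for the initial and final domains
# quoted in the text; the intermediate remappings are not specified here.
PC_IN_CM = 3.086e18
N_CELLS = 1024

def cell_size(half_width_cm: float) -> float:
    """Uniform cell size for a cubic domain [-L, +L]^3 with N_CELLS zones per axis."""
    return 2.0 * half_width_cm / N_CELLS

dx_initial = cell_size(1.2e14)               # initial domain: +/- 1.2e14 cm
dx_final = cell_size(9.4 * PC_IN_CM)         # final domain:   +/- 9.4 pc

print(f"initial resolution ~ {dx_initial:.2e} cm")            # ~ 2.3e11 cm
print(f"final resolution   ~ {dx_final / PC_IN_CM:.3f} pc")   # ~ 0.018 pc
```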
First, we evaluated the effects of back-reaction of accelerated cosmic rays on the results by considering simulations either with (models W15-IIb-sh-HD-1eta and W15-IIb-sh-HD-10eta) or without (W15-IIb-sh-HD) the modifications of the shock dynamics due to cosmic-ray acceleration at both the forward and reverse shocks. We considered two cases: η = 10 −4 and η = 10 −3 , leading to γ eff ≈ 3/2 (model W15-IIb-sh-HD-1eta) and γ eff ≈ 4/3 (model W15-IIb-sh-HD-10eta), respectively (see Fig. 2 in Orlando et al. 2016). The former case is the most likely for Cas A, according to Orlando et al. (2016); the latter is an extreme case of very efficient particle acceleration. For the sake of simplicity, in the present calculations we did not assume a time dependence of γ eff , i.e., we assumed that the lowest value is reached immediately at the beginning of the simulation. In Paper I, we found that the energy deposition from radioactive decay provides an additional pressure to the plasma which inflates ejecta structures rich in decaying elements. Thus, we performed an additional simulation (W15-IIb-sh-HD+dec) analogous to model W15-IIb-sh-HD but including the effects of energy deposition from radioactive decay. We investigated these effects on the remnant-shell interaction by comparing models W15-IIb-sh-HD and W15-IIb-sh-HD+dec. We have also investigated the possible effects of an ambient magnetic field on the remnant-shell interaction. In fact, although the magnetic field does not affect the overall evolution of the remnant, it may limit the growth of HD instabilities that develop at the contact discontinuity or during the interaction of the forward shock with inhomogeneities of the CSM (see, for instance, Orlando et al. 2019a), such as, in the present case, the circumstellar shell. Hence, following Paper I, we performed a simulation analogous to model W15-IIb-sh-HD+dec but including an ambient magnetic field (model W15-IIb-sh-MHD+dec) and compared the two models. As in Paper I, we adopted the ambient magnetic field configuration described by the "Parker spiral" resulting from the rotation of the progenitor star and from the corresponding expanding stellar wind (Parker 1958); the adopted pre-SN magnetic field has an average strength at the stellar surface of B 0 ≈ 500 G (Donati & Landstreet 2009). A few words of caution are needed about the adopted magnetic field. In fact, the pre-SN field strength and configuration are unknown in Cas A and our choice is, therefore, arbitrary. Furthermore, typical magnetic field strengths in post-shock plasma inferred from observations of Cas A are of the order of 0.5 mG (e.g., Sato et al. 2018), whereas the pre-SN magnetic field is of the order of 0.2 µG at a distance of 2.5 pc from the center of explosion and the highest values of magnetic field strength in post-shock plasma at the age of Cas A are of the order of 10 µG in our MHD simulation (model W15-IIb-sh-MHD+dec). These observations suggest that some mechanism of magnetic field amplification is at work, such as turbulent motion in the post-shock plasma (e.g., Giacalone & Jokipii 2007; Inoue et al. 2009 and references therein) and/or non-linear coupling between cosmic rays and the background magnetic field (Bell 2004). The former requires quite high spatial resolution, and the latter requires solving the evolution of the non-linear coupling on short time and length scales. In fact, neither of these mechanisms is included in our MHD simulations that describe the evolution of the whole remnant.
In light of this, we expect that stronger fields may have effects on the remnant-shell interaction not included in our models, especially for the acceleration of particles (most likely correlated with the magnetic field amplification) and the growth of HD instabilities. Consequently, the synthesis of radio emission presented in Sect. 3.3 is expected to predict radio images which cannot be directly compared with radio observations of Cas A. Nevertheless, these synthetic maps were derived with the aim of identifying the position of the reverse shock during the remnant-shell interaction. Since the position and resulting overall shape of the reverse shock do not depend on the particular configuration of the magnetic field adopted, the maps can be safely used for our purposes. The inhomogeneous CSM In Paper I, the remnant was described as expanding through the spherically symmetric wind of the progenitor star. The wind density was assumed to be proportional to r −2 (where r is the radial distance from the center of explosion) and was equal to n w = 0.8 cm −3 (consistent with the values of post-shock wind density inferred from observations of Cas A; Lee et al. 2014) at the radius r fs = 2.5 pc, namely the nominal current outer radius of the remnant (at a distance of ≈ 3.4 kpc). Assuming a wind speed of 10 − 20 km s −1 (typical values for the wind during the red supergiant phase), the estimated mass-loss rate is Ṁ ≈ 2 − 4 × 10 −5 M ⊙ yr −1 . Furthermore, a progressive flattening of the wind profile to a uniform density n c = 0.1 cm −3 was considered at distances > 3 pc (where we ignore the structure of the still unshocked CSM) to prevent unrealistically low values of the density. Here, we aim at exploring the effects of a dense shell of CSM on the evolution of the remnant and at testing the hypothesis that some of the features observed in the reverse shock of Cas A may be interpreted as signatures of a past interaction of the remnant with an asymmetric circumstellar shell. Deriving an accurate reconstruction of the pre-SN CSM around Cas A is well beyond the scope of the paper. Thus, for our purposes, we adopted an idealized and parametrized description of the CSM which consists of a spherically symmetric wind (as in Paper I) and a shell of material denser than the wind. We also allowed the shell to be asymmetric, with one hemisphere being denser than the other. In fact, several lines of evidence suggest that the CSM around massive stars can be characterized by the presence of asymmetric and dense circumstellar shells, resulting from episodic massive eruptions during the late stages of star evolution (e.g., Smith et al. 2014; Levesque et al. 2014; Graham et al. 2014). The asymmetry was modeled with an exponential density stratification along a direction with unit vector D, which defines the symmetry axis of the shell. This dipole asymmetry with the enhancement covering 2π steradians is made simply to differentiate the two hemispheres of the shell; one might expect an enhancement occupying a smaller solid angle (as seen from the explosion center) to produce similar effects on the reverse shock dynamics over a smaller range of azimuth. The transition from the shell to the wind was modulated by a Gaussian function. The exponential and Gaussian functions were selected to allow for a smooth transition between the wind and the shell. Considering the orientation of the remnant in the 3D Cartesian coordinate system (see Sect.
2.1), the density distribution of the CSM is given by the superposition of the r −2 wind profile and the shell component (Eq. 1), where n w has a value between 0.6 cm −3 and 0.8 cm −3 , depending on the injection efficiency (see Orlando et al. 2016), r fs = 2.5 pc, n sh is a reference density of the shell, r sh is the shell radius, σ represents the shell thickness, H is the scale length of the shell density, r · D = x cos θ cos φ − y sin φ + z sin θ cos φ (so that the unit vector D has components D x = cos θ cos φ, D y = − sin φ and D z = sin θ cos φ), and θ and φ are the angles measured: the former in the [x, z] plane (i.e., around the y-axis) counterclockwise from the x-axis (i.e., from the west) and the latter about the z-axis counterclockwise from the [x, z] plane (i.e., from the plane of the sky). We note that the parameter H determines the contrast between the densest and least dense portions of the shell and, therefore, the level of asymmetry introduced between the two remnant hemispheres. As a first step, we explored the space of parameters of the shell, assuming that D (i.e., the symmetry axis of the shell) lies in the plane of the sky ([x, z] plane) and, therefore, is perpendicular to the line-of-sight (LoS). In this case, φ = 0 and r · D = x cos θ + z sin θ. An example of the pre-SN CSM resulting in this last case, for θ = 30°, is shown in Fig. 2. Considering that the Earth vantage point lies on the negative y-axis, the shell has the maximum density in the north-west (NW) quadrant and the minimum density in the south-east (SE) quadrant. As a second step, we explored the possibility that D forms an angle φ > 0 with the plane of the sky in Eq. 1. In other words, we explored models in which the shell was also denser in its blueshifted nearside than in the redshifted farside, as suggested by the evidence that the large majority of QSFs are at blueshifted velocities (Reed et al. 1995). In Sect. 3.4, we discuss the results of this exploration and present model W15-IIb-sh-HD-1eta-az, our favorite model (with φ = 50°) for describing the dynamics of the reverse and forward shocks observed in Cas A. Results We ran 3D high-resolution simulations of the SNR, searching for the parameters of the shell (density, radius, and thickness) and of its degree of asymmetry (the angles θ and φ and the density scale length) which can simultaneously reproduce the slowdown of the reverse shock velocity in the western region of Cas A, the offset between the geometric centers of the reverse and forward shocks as inferred from the observations, and the evidence that the nonthermal emission from the reverse shock in the western region is brighter than in the eastern region. Table 2 reports the shell parameters of our simulations most closely reproducing the observations and the range of values explored. The Table also reports the total mass calculated for the shell. We note that this mass depends on the geometry of the shell adopted. We expect that a smaller or partial shell, or a shell with a shape deviating from spherical symmetry (for instance, more elongated on one side), may produce similar observables with a different mass. In Sect. 3.1, we discuss in detail the interaction of the remnant with the dense shell for one of our simulations with φ = 0 in Eq. 1 that more closely resembles the observations (model W15-IIb-sh-HD-1eta). Then, in Sect. 3.2 we describe how the remnant evolution changes using different shell parameters. Finally, in Sects.
3.3 and 3.4 we analyze the remnant asymmetries caused by the interaction of the remnant with the shell, also including the case with φ > 0 (in Sect. 3.4), and compare the model results with observations. Interaction of the remnant with the dense shell The evolution of the remnant in the phase before the interaction of the blast wave with the shell is analogous to that presented in Paper I. Initially, the metal-rich ejecta expand almost homologously, though in models including the radioactive decay significant deviations are present in the innermost ejecta (rich in 56 Ni and 56 Co) due to heating caused by the decay chain 56 Ni → 56 Co → 56 Fe. These effects decrease rapidly with time and are significant only during the first year of evolution. Thus, after this initial inflation of Fe-rich ejecta, we expect that the qualitative evolution of the remnant is similar in models either with or without the radioactive decay effects included (see Paper I). In all the models, the Fe-group elements start to interact with the reverse shock ≈ 30 years after the SN, when almost all 56 Ni and 56 Co have already decayed to stable 56 Fe (see Paper I for details). In our simulations best matching the observations (all reported in Table 1), the dense shell of CSM has a radius r sh = 1.5 pc. In model W15-IIb-sh-HD-1eta, the blast wave starts to interact with the shell ≈ 180 years after the SN event (see upper left panel in Fig. 3). The NW side of the shell is hit first due to the large-scale asymmetries of the blast wave inherited from the earliest moments of the explosion (see Paper I). At the beginning of the interaction, the forward shock slows down because of the propagation through a medium denser than the r −2 wind density distribution. Consequently, the distance between the forward and reverse shock gradually decreases. This effect is the largest on the NW side, which is hit earliest and where the CSM shell in our reference model has the highest density. This is also evident in Fig. 4, showing the forward and reverse shock radii in the NW and SE hemispheres of the remnant: the forward shock on the NW side shows a slowdown in its expansion at the time of interaction with the shell that is not present on the SE side. This further enhances the degree of asymmetry in the remnant, leading to a thickness of the mixing region between the forward and reverse shocks that is the smallest on the NW side. The shell is fully shocked at t ≈ 240 years (upper center panel in Fig. 3). At later times, the forward shock travels again through the r −2 wind density distribution and its velocity gradually increases to the values expected without the interaction with the shell. [Notes to Table 2: (a) The reference density of the shell, n sh , the shell radius and thickness, r sh and σ, the angle θ (measured in the plane of the sky, about the y-axis, counterclockwise from the west), the angle φ (measured about the z-axis counterclockwise from the plane of the sky) and the scale length of the shell density, H, are used in Eq. 1 to describe the CSM. The total mass of the shell, M sh , the masses of the SE and NW hemispheres of the shell, M SE and M NW , and the peak densities of the shell in the SE and NW directions, n SE and n NW , are derived from the simulations.] The remnant-shell interaction drives a reflected shock that travels inward through the mixing region and that is the most energetic where the shell is the densest.
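To make the wind-plus-shell parametrization of Sect. 2.2 (Eq. 1) concrete, the sketch below evaluates a density of the kind described there: an r^-2 wind normalized at r_fs, plus a Gaussian shell at r_sh with an exponential stratification along D. The exact functional form is an assumption reconstructed from the description, and the values of sigma and H are placeholders (they are listed in Table 2, which is not reproduced here); only n_w, r_fs, n_sh, r_sh and the example orientation theta = 30 degrees are taken from the text.

```python
import numpy as np

PC = 3.086e18  # cm

# Parameters taken from the text (favorite case with phi = 0):
n_w, r_fs = 0.8, 2.5 * PC       # wind density [cm^-3] at the forward-shock radius
n_sh, r_sh = 10.0, 1.5 * PC     # reference shell density [cm^-3] and shell radius
theta = np.radians(30.0)        # example orientation shown in Fig. 2

# Placeholder values (sigma and H are given in Table 2, not reproduced here):
sigma = 0.2 * PC                # assumed shell thickness
H = 1.0 * PC                    # assumed density scale length of the shell

def csm_density(x, y, z):
    """Illustrative wind + asymmetric-shell density (phi = 0), in cm^-3."""
    r = np.sqrt(x**2 + y**2 + z**2)
    wind = n_w * (r_fs / r) ** 2
    r_dot_D = x * np.cos(theta) + z * np.sin(theta)   # r . D for phi = 0
    shell = n_sh * np.exp(r_dot_D / H) * np.exp(-((r - r_sh) / sigma) ** 2)
    return wind + shell

# Density at the shell radius on the dense (NW) and tenuous (SE) sides of the shell:
print(csm_density(+r_sh * np.cos(theta), 0.0, +r_sh * np.sin(theta)))  # NW side
print(csm_density(-r_sh * np.cos(theta), 0.0, -r_sh * np.sin(theta)))  # SE side
```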
The inward-propagating shock wave reaches the reverse shock at t ≈ 290 years (upper right panel in Fig. 3). As a result, the reverse shock velocity in the observer frame decreases and, again, the effect is the largest (with velocities equal to zero or even negative) on the NW side, where the reflected shock is the most energetic (Fig. 4). At the age of Cas A, the reverse shock in the NW hemisphere has already started to move inward in the observer frame (see Fig. 4) and the remnant shows the effects of the interaction with the shell (lower left panel in Fig. 3). As a result, the mixing region is less extended and the density of the shocked plasma is higher in the western than in the eastern region.

Fig. 5: Isosurface of the distribution of Fe (corresponding to a value of Fe density which is at 5% of the peak density) at the age of Cas A for different viewing angles for model W15-IIb-sh-HD-1eta; the colors give the radial velocity in units of 1000 km s −1 on the isosurface (color coding defined at the bottom of each panel). The semi-transparent clipped quasi-spherical surfaces indicate the forward (green) and reverse (yellow) shocks. The shocked shell is visualized through a volume rendering that uses the blue color palette (color coding on the right of each panel); the opacity is proportional to the plasma density. A navigable 3D graphic of this model is available at https://skfb.ly/o8FnO.

Fig. 5 shows the spatial distribution of Fe at the age of Cas A in model W15-IIb-sh-HD-1eta. The effects of back-reaction of accelerated cosmic rays do not significantly change this distribution (models W15-IIb-sh-HD and W15-IIb-sh-HD-10eta show similar results), whilst the decay of radioactive species leads to the inflation of the Fe-rich plumes in models W15-IIb-sh-HD+dec and W15-IIb-sh-MHD+dec (see Paper I). The figure shows different viewing angles, namely with the perspective on the negative y-axis (i.e., the vantage point is at Earth; upper panel), on the positive x-axis (middle panel), and on the positive y-axis (i.e., the vantage point is from behind Cas A; lower panel). At this time about 35% of the Fe has already passed through the reverse shock (see Fig. 5 in Paper I), leading to the formation of large regions of shocked Fe-rich ejecta in coincidence with the original large-scale fingers of Fe-group elements (see Paper I for more details). The shocked dense shell of CSM has already started to interact with the fingers of ejecta developed by HD instabilities (Rayleigh-Taylor, Richtmyer-Meshkov, and Kelvin-Helmholtz shear instability; Gull 1973; Fryxell et al. 1991; Chevalier et al. 1992). In particular, the figure shows Fe-rich shocked filamentary structures, which extend from the contact discontinuity toward the forward shock. These fingers protrude into the shocked shell material, producing holes in the shell (see middle panel in Fig. 5) and driving the mixing between stellar and shell material. The fingers on the NW side are closer to the forward shock than on the SE side, due to the reduced distance between the contact discontinuity and the forward shock where the shell is the densest. The radial velocity of the fingers on the NW side is smaller than in the SE (see the color code of the isosurface in Fig. 5) due to the passage of the inward shock, which is the most energetic in the NW. This is consistent with observations (e.g., Willingale et al. 2002).
At later times, the degree of asymmetry of the reverse shock structure largely increases due to the fastest inward propagation of the reverse shock in the NW region (see Fig. 4). At the age of โ‰ˆ 1000 yr, the reverse shock is highly asymmetric and reaches the center of the explosion from NW (see lower center panel in Fig. 3 and Fig. 4). Then, it starts to propagate through the ejecta traveling outward in the SE portion of the remnant (lower right panel in Fig. 3). In the meantime, the forward shock continues to travel through the r โˆ’2 wind density distribution with roughly the same velocity in all directions. The forward shock appears to be spherically symmetric at the end of the simulation, when the remnant has a radius R โ‰ˆ 9 pc and an age of โ‰ˆ 2000 yr. It is interesting to note that, at this age, the signatures of the remnantshell interaction are clearly visible in the reverse shock, whilst the forward shock apparently does not keep memory of the past interaction (see Fig. 4). Effects of shell parameters on the remnant evolution We explored the space of parameters of the shell by performing about fifty simulations. The parameters explored are (see Table 2): the reference density of the shell, n sh , the shell radius and thickness, r sh and ฯƒ, the angle ฮธ (measured counterclockwise from the west; see Eq. 1), and the scale length of the shell density, H. For this exploration, the angle ฯ† was fixed equal to zero; its effect is investigated in Sect. 3.4. The parameter space was explored adopting an iterative process of trial and error to converge on model parameters that qualitatively reproduce the profiles of the forward and reverse shock velocities versus the position angle in the plane of the sky at the age of Cas A (e.g., Vink et al. 2022; see also Fig. 6 and Sect. 3.3). We note that our models do not pretend to be able to reconstruct the structure of the pre-SN circumstellar shell but they aim to test the possibility that the inward-moving reverse shock observed in the western hemisphere of Cas A can be interpreted as the signature of a past interaction with a circumstellar shell. From our exploration, we found that the models producing velocity profiles, which more closely reproduce the observations (listed in Table 1), are characterized by the common set of parameters listed in Table 2 with n sh = 10 cm โˆ’3 if ฯ† = 0 (see the favorite values). The exploration of the parameter space was limited to asymmetric shells with the densest side in the western hemisphere of the remnant, namely where the profile of the reverse shock velocity has a minimum (see Fig. 6). The shell asymmetry is regulated by the angle ฮธ and the scale length of the shell density, H. The former parameter determines where the shell is the densest in the plane of the sky, so that the effects of interaction with the shell are the largest and the reverse shock velocity has a minimum. Larger (lower) values of ฮธ determine a shift of the minimum shock velocity toward the north (south) in Fig. 6. The parameter H regulates the density contrast between the two hemispheres of the shell (namely its densest and least dense portions) and, therefore, the level of asymmetry introduced by the shell: the higher the value of H the smaller the contrast between the reverse shock velocities in the two remnant hemispheres. In simulations assuming a shell radius smaller than r sh = 1.5 pc, the remnant starts to interact with the shell at an earlier time. 
Hence, the reverse shock starts earlier to move inward where the shell is dense, leading to a reverse shock structure that significantly deviates from the spherical shape at the age of Cas A (this happens at later times in our favorite simulations listed in Table 1): a result which is at odds with observations. On the other hand, in simulations with a higher shell radius, the forward shock in the western hemisphere still has expansion velocities much lower than those in the eastern hemisphere (producing an evident minimum in the profiles in Fig. 6) because it did not have the time to re-accelerate to the velocity values expected when it propagates through the wind of the progenitor star. The shell density regulates the slow down of the forward shock traveling through the shell and the strength of the inward shock. In models with a shell denser than in our favorite cases (listed in Table 1), the slow down of the forward shock is higher, leading to a minimum in its velocity profile at the age of Cas A (at odds with observations; see Fig. 6), and the inward shock is stronger, leading to a deeper minimum in the profile of the reverse shock (inward velocities lower than the minimum values observed, namely โ‰ˆ โˆ’2000 km s โˆ’1 ). On the other hand, models with a shell less dense than in our favorite models produce a minimum in the velocity profile of the reverse shock with velocities higher than observed (> โˆ’2000 km s โˆ’1 ). A similar role is played by the thickness of the shell, ฯƒ: a thicker (thinner) shell produces larger (smaller) effects on the forward and reverse shock dynamics. In principle, one could trade off density and thickness to produce similar results. However, we found that, in simulations with higher values of ฯƒ, the reverse shock deviates from the spherical shape at the age of Cas A if the interaction of the remnant with the thicker shell starts at earlier times than in our favorite models; conversely, if the interaction starts at the time of our favorite simulations, the forward shock shows a minimum in its velocity profile because it left the thicker shell too late and it did not have time to re-accelerate. Including the effects of radioactive decay does not qualitatively change the evolution of the remnant and its interaction with the shell. As shown in Paper I, the energy deposition by radioactive decay provides additional pressure to the plasma, which inflates structures with a high mass fraction of decaying elements against the surroundings. Thus, the expansion of the ejecta is powered by this additional pressure and the remnant expands slightly faster than in the case without these effects taken into account. As a consequence, the remnant starts to interact with the shell slightly earlier in models W15-IIb-sh-HD+dec and W15-IIb-sh-MHD+dec than in the others. The reflected shock from the shell reaches the reverse shock at earlier times. Thus, at the age of Cas A, the reverse shock in the NW region moves inward in the observer frame with slightly higher velocities than in models without radioactive decay (e.g., compare models W15-IIb-sh-HD and W15-IIb-sh-HD+dec in Fig. 6). The differences between the models, however, are moderate (< 30%). As for the effect of an ambient magnetic field, it does not, as expected, influence the overall dynamics of the forward and reverse shocks. Model W15-IIb-sh-MHD+dec (the only one including the magnetic field) shows an evolution similar to that of model W15-IIb-sh-HD+dec. 
The main effect of the magnetic field is to limit the growth of HD instabilities at the contact discontinuity due to the tension of the magnetic field lines, which maintain a more laminar flow around the fingers of dense ejecta gas that protrude into the shocked wind material (e.g., Orlando et al. 2012). Indeed, the post-shock magnetic field is heavily modified by the fingers and the field lines wrap around these ejecta structures, leading to a local increase of the field strength. The interaction of the remnant with the shell leads to a further compression of the magnetic field, which is more effective in the NW region where the shell is the densest. There, the post-shock magnetic field reaches values of the order of 10 µG, whereas the pre-SN field strength at 2.5 pc was 0.2 µG. As a result, the field strength is significantly higher in the NW than in the SE region of the remnant, and this contributes to determining an asymmetry in the brightness distribution of the nonthermal emission in the two hemispheres. Note that model W15-IIb-sh-MHD+dec neglects the magnetic field amplification due to back-reaction of accelerated cosmic rays, so that the enhancement of the magnetic field in the NW is purely due to the high compression of field lines during the interaction of the remnant with the shell. Field strengths higher by an order of magnitude (and consistent with observations) may be reached due to magnetic field amplification. Reverse shock asymmetries at the age of Cas A The interaction of the remnant with the asymmetric shell affects the propagation of the reverse shock differently on the eastern and western sides. This is evident from an inspection of Fig. 6, which shows the profiles of the forward and reverse shock velocities versus the position angle in the plane of the sky at the age of Cas A. For comparison, the figure also shows the profiles derived from a model not including the interaction of the remnant with the shell (upper left panel; model W15-2-cw-IIb-HD presented in Paper I) and the profiles derived from the analysis of Chandra observations of Cas A (black and magenta diamonds; Vink et al. 2022). At the age of Cas A, the models describing the interaction with the shell show a forward shock that propagates with velocity between 5000 km s −1 and 6000 km s −1 at all position angles, thus producing results analogous to those derived from the model without the shell and in agreement with observations. This is a sign that the effect of the interaction of the forward shock with the shell has run out and, in fact, in all the models the forward shock propagates through the r −2 wind density distribution (as mentioned in Sect. 3.2, simulations with r sh > 1.5 pc, not reported here, produce profiles of the forward shock velocity versus the position angle in the plane of the sky with a significant decrease on the NW side, at odds with observations). Conversely, the velocity of the reverse shock shows strong changes with the position angle if the remnant interacts with the shell, at odds with model W15-2-cw-IIb-HD, which shows a reverse shock velocity of ≈ 3000 km s −1 at all position angles. On the eastern side, where the shell is tenuous, the evolution of the reverse shock is only marginally affected by the shell and the shock propagates with a velocity of ≈ 3000 km s −1 (as in model W15-2-cw-IIb-HD); on the western side, where the shell is dense, the reverse shock is slowed down by the reflected shock driven by the interaction with the shell and, as a result, the reverse shock appears to move inward in the observer frame, as observed in Cas A. Fig. 6 shows that the agreement between models and observations is remarkable, producing a reverse-shock minimum roughly where it is observed.
However, we also note that the predicted minimum is broader than observed, extending further to the north. This suggests that the high-density portion of the modeled shell is too large (the match with the data would improve if the high-density portion of the shell subtended a smaller solid angle as seen from the explosion center) or that the actual shell is incomplete or irregular (rather than having the regular spherical shape assumed here). We note that Sato et al. (2018) report bright nonthermal X-ray emitting features in the interior of Cas A due to inward-moving shocks in the western and southern hemispheres (see also Anderson & Rudnick 1995; Keohane et al. 1996; DeLaney et al. 2004; Helder & Vink 2008). These features are not reproduced by our models, which predict an inward-moving reverse shock only in the NW hemisphere. Features similar to those observed by Sato et al. (2018) might be obtained by rotating the asymmetric shell approximately 90° clockwise about the y-axis. In this case, however, we found that the models do not reproduce other asymmetries that characterize the reverse shock of Cas A, in particular the velocity profiles derived by Vink et al. (2022) and the orientation of the offset between the geometric center of the reverse shock and that of the forward shock (Gotthelf et al. 2001). In Appendix A, we present an example of these models. Here, we preferred to discuss the models that most closely reproduce many (but not all) of the reverse shock asymmetries observed in Cas A. Nevertheless, our study clearly shows that the interaction of the remnant with local asymmetric density enhancements (such as the densest portion of the shell in our simulations) can produce inward-moving reverse shocks there. In the case of a shell with a more complex structure (e.g., more fragmented) than modeled here, it may be possible to reproduce the locations where inward-moving shocks are observed if density enhancements are placed in the same locations. The negative velocity of the reverse shock on the NW side has important consequences for the acceleration of particles. In fact, the ejecta enter the reverse shock with a higher relative velocity on the western than on the eastern side. Fig. 7 shows the velocities of the ejecta when they enter the reverse shock at the different position angles. The velocity is below 2000 km s −1 for most of the position angles, except in the western part, where the ejecta enter the reverse shock with a velocity between 4000 and 7000 km s −1 . This implies that, only in the western part, the reverse shock is potentially able to accelerate electrons to high enough energies to emit X-ray synchrotron radiation: its velocity relative to the ejecta must be well above the limit for producing this emission (≈ 3000 km s −1 ; Vink 2020). This may explain why most of the X-ray synchrotron emission originates from the western part of the reverse shock (Helder & Vink 2008), where the radio and X-ray observations agree on the position of the reverse shock (Vink 2020; see also Fig. 1).
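The link between an inward-moving reverse shock and the high velocity of the ejecta entering it can be illustrated with the usual free-expansion argument: unshocked ejecta move at roughly r/t, so the speed at which they cross the reverse shock is r_rs/t minus the shock velocity in the observer frame. The sketch below is only an illustration; the reverse-shock radius of ≈ 1.6 pc (roughly the value implied by Gotthelf et al. 2001 at 3.4 kpc) and the representative shock velocities are assumptions, not values extracted from the simulations.

```python
PC = 3.086e18    # cm
YR = 3.156e7     # s

r_rs = 1.6 * PC   # assumed mean reverse-shock radius (roughly Gotthelf et al. 2001)
age = 350.0 * YR  # age of Cas A

v_free = r_rs / age / 1e5   # free-expansion velocity of ejecta at r_rs [km/s]

# Velocity of the ejecta in the frame of the reverse shock, for an outward-moving
# shock (~ +3000 km/s, eastern side) and an inward-moving one (~ -2000 km/s, NW):
for label, v_shock in [("outward-moving RS (east)", 3000.0),
                       ("inward-moving RS (NW)", -2000.0)]:
    print(f"{label}: ejecta enter the shock at ~ {v_free - v_shock:.0f} km/s")
# ~ 1500 km/s in the east versus ~ 6500 km/s in the NW, consistent with the
# 'below 2000 km/s' and '4000-7000 km/s' ranges quoted in the text.
```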
Interestingly, from the analysis of the 1 Ms Chandra observation of Cas A, Helder & Vink (2008) concluded that the dominant X-ray synchrotron emission from the western side of Cas A can be explained by a local reverse shock velocity in the ejecta frame of ≈ 6000 km s⁻¹, as opposed to a velocity of ≈ 2000 km s⁻¹ elsewhere. Our models predict similar velocities. The remnant-shell interaction can also alter the geometric centers of the forward and reverse shocks. Since these shocks can be easily traced by the nonthermal emission due to particle acceleration at the shock fronts (e.g., Arias et al. 2018), we synthesized the radio emission from the models at the age of Cas A. The synthesis has been performed using REMLIGHT, a code for the synthesis of synchrotron radio, X-ray, and inverse Compton γ-ray emission from MHD simulations (Orlando et al. 2007, 2011). Note that most of our simulations do not include an ambient magnetic field. In these cases we synthesized the nonthermal emission assuming a uniform randomized magnetic field with strength 1 µG in the whole spatial domain. In model W15-IIb-sh-MHD+dec, in which the ambient magnetic field configuration is described by the "Parker spiral" (Parker 1958), we synthesized the emission by adding a background uniform randomized magnetic field with strength 0.5 µG to the spiral-shaped magnetic field; this was necessary to prevent a very low field strength at distances of a few pc from the center of the explosion and to make the field strength comparable to that assumed for the other models. We note that non-linear amplification of the field in proximity of the shock due to the cosmic-ray streaming instability is expected at the forward shock (Bell 2004) and, most likely, the same process also operates at the reverse shock. Our models, however, do not include this effect. Hence our synthetic maps cannot be directly compared with radio observations of Cas A. In fact, as mentioned in Sect. 2.1, the radio emission was synthesized as a proxy for the position of the reverse shock, which is not expected to depend on the configuration and strength of the magnetic field. On the other hand, we expect that higher reverse-shock strengths may produce higher magnetic field strengths, and this may enhance the synchrotron emissivity on the NW side. Fig. 8 shows the radio maps synthesized for the first six SNR models listed in Table 1. (Fig. 8 caption, partially recovered: maps for the models listed in Table 1, including model W15-2-cw-IIb-HD presented in Paper I. The maps are normalized to the maximum radio flux, F_max, in model W15-IIb-sh-HD-10eta. The blue dotted contours show cuts of the forward and reverse shocks in the plane of the sky passing through the center of the explosion, marked with a blue cross in each panel. The red and green circles mark the same cuts but for spheres roughly delineating the forward and reverse shocks, respectively, in models describing the remnant-shell interaction; the circles are the same in all panels, and the centers of these spheres are marked with crosses of the same color, red or green, in each panel. These crosses represent the geometric centers of the forward and reverse shocks, respectively, offset to the SE from the center of the explosion. The inset in the lower right corner of each panel is a zoom of the center of the domain.)
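To make the line-of-sight synthesis step more concrete, the sketch below integrates a generic synchrotron-emissivity proxy along the LoS of a 3D grid. It is emphatically not REMLIGHT: we simply assume that the relativistic-electron normalization scales with the local mass density and use the standard scaling j ∝ rho * B_perp^((s+1)/2) for an electron spectral index s; the density and magnetic-field arrays are random placeholders standing in for MHD output.

```python
# Illustrative sketch only: a generic synchrotron-emissivity proxy integrated
# along the line of sight (NOT the REMLIGHT synthesis used by the authors).
import numpy as np

def radio_proxy_map(rho, bx, by, bz, s=2.2, los_axis=1):
    """rho and b* are 3D arrays on the same grid; LoS along `los_axis` (y)."""
    b_perp = np.sqrt(bx**2 + bz**2) if los_axis == 1 else np.sqrt(by**2 + bz**2)
    emissivity = rho * b_perp ** ((s + 1.0) / 2.0)   # assumed proxy scaling
    return emissivity.sum(axis=los_axis)             # simple LoS integration

# Tiny synthetic example (random fields standing in for MHD output):
rng = np.random.default_rng(0)
shape = (64, 64, 64)
rho = rng.lognormal(mean=0.0, sigma=0.3, size=shape)
bx, by, bz = (1e-6 * rng.normal(size=shape) for _ in range(3))  # ~1 uG field
img = radio_proxy_map(rho, bx, by, bz)
img /= img.max()                                     # normalize to F_max as in Fig. 8
print(img.shape, img.max())
```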
To better identify the effects of the remnant-shell interaction on the structure of the forward and reverse shocks, we compared the synthetic maps from these models with the radio map synthesized from model W15-2-cw-IIb-HD (upper left panel in the figure). For each model, we derived the position of the forward and reverse shocks in the plane of the sky (blue contours in the figure) and fitted these positions (the contours) with circles, thus deriving the geometric centers and the radii of the forward and reverse shocks. We found that the centers and radii of the circles fitting the forward and reverse shocks are very similar in the models of Table 1 that describe the remnant-shell interaction. For this reason, the circles reported in Fig. 8 (red for the forward shock and green for the reverse shock) correspond to those derived from the fitting of the blue contours in model W15-IIb-sh-HD-1eta and are the same in all panels, to help in the comparison between model W15-2-cw-IIb-HD and all the other models. As expected, the effects of the remnant-shell interaction are most evident in the NW region, where the shell has the highest density. In this region, both the forward and reverse shocks in models including the shell are at smaller radii from the center of the explosion than in model W15-2-cw-IIb-HD. This is evident from the upper left panel in Fig. 8, where both the forward and reverse shocks in model W15-2-cw-IIb-HD (blue contours) expand to the NW more than in the other models (red and green circles). The forward shock slows down significantly as it propagates through the densest portion of the shell, which brings the shock front to a smaller distance from the center of the explosion than in model W15-2-cw-IIb-HD. After the interaction with the shell, the forward shock starts propagating again with the same velocity as in model W15-2-cw-IIb-HD (see Fig. 6) but with a smaller radius. As for the reverse shock, it interacts with the reflected shock from the shell, which causes it to move inward in the observer frame in the NW region. As a result, in models including the shell, the reverse shock is also at smaller radii from the center of the explosion than in model W15-2-cw-IIb-HD. The asymmetry introduced by the interaction of the remnant with the shell causes the geometric centers of the forward and reverse shocks (red and green crosses, respectively, in each panel of Fig. 8) to be shifted to the SE from the center of the explosion (blue cross in the figure). This is opposite to the result found in Paper I, where we found an offset of the geometric centers of the two shocks toward the NW by ≈ 0.13 pc from the center of the explosion. In fact, in models not including the interaction with the shell, the offset is caused by the initial asymmetric explosion, in which most of the ⁵⁶Ni and ⁴⁴Ti were ejected in the northern hemisphere, away from the observer. Therefore, in our models, the effects of the remnant-shell interaction are opposite to those of the asymmetric explosion and dominate the structure of the forward and reverse shocks at the age of Cas A. Fig. 8 also shows that, at the age of Cas A, the forward shock appears more affected than the reverse shock by the remnant-shell interaction. We note that, while the forward shock passed through the shell at t ≈ 180 years after the SN, the reverse shock started to be affected by the reflected shock from the shell at later times, namely at t ≈ 250 − 290 years (see Sect. 3.1).
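The circle-fitting step described above can be reproduced with a simple algebraic least-squares (Kasa) fit. The sketch below is a minimal, self-contained example with synthetic contours standing in for the shock positions extracted from the maps; it recovers the centers and radii of the two shocks and the offset between their geometric centers, which is ≈ 0.1 pc in this synthetic test by construction.

```python
# Minimal sketch of the geometric analysis described above: fit circles to the
# forward- and reverse-shock contours in the plane of the sky and measure the
# offset between their centers. The contour points below are synthetic.
import numpy as np

def fit_circle(x, z):
    """Return (xc, zc, R) of the least-squares (Kasa) circle through (x, z)."""
    A = np.column_stack([2.0 * x, 2.0 * z, np.ones_like(x)])
    b = x**2 + z**2
    (xc, zc, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    R = np.sqrt(c + xc**2 + zc**2)
    return xc, zc, R

# Synthetic test: reverse-shock contour centered ~0.1 pc NW of the forward shock.
rng = np.random.default_rng(2)
phi = rng.uniform(0, 2 * np.pi, 400)
fs = fit_circle(2.6 * np.cos(phi) + 0.02 * rng.normal(size=400),
                2.6 * np.sin(phi) + 0.02 * rng.normal(size=400))
rs = fit_circle(1.7 * np.cos(phi) - 0.07 + 0.02 * rng.normal(size=400),
                1.7 * np.sin(phi) + 0.07 + 0.02 * rng.normal(size=400))
offset = np.hypot(rs[0] - fs[0], rs[1] - fs[1])   # ~0.1 pc by construction
print(f"R_fs={fs[2]:.2f} pc, R_rs={rs[2]:.2f} pc, center offset={offset:.2f} pc")
```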
This delay caused the distance between the reverse and forward shocks to gradually decrease in the time interval between 180 and 250 years; then the distance started to increase for t > 250 years, when the reflected shock started to push the reverse shock inward. At the age of Cas A, the distance between the forward and reverse shocks in the NW region is still smaller than expected without the interaction with the shell. An important consequence of the asymmetry introduced by the remnant-shell interaction is that, at the age of Cas A, the geometric center of the reverse shock is offset to the NW by ≈ 0.1 pc from the geometric center of the forward shock (the values range between 0.09 pc and 0.1 pc for the different models of remnant-shell interaction). This result differs from that of Paper I, in which the geometric centers of the two shocks coincide. We note that the offset between the two shocks caused by the remnant-shell interaction is similar to that inferred from Cas A observations: the latter suggest that the reverse shock is offset to the NW by ≈ 0.2 pc (assuming a distance of 3.4 kpc) from the geometric center of the forward shock (e.g., Gotthelf et al. 2001). We note, however, that the shocks observed in Cas A deviate substantially from a spherical shape. In fact, the geometric centers of the two shocks were derived from azimuthal averages of the shock positions as inferred from observations, and the uncertainties can be of the order of 10 arcsec (i.e., 0.16 pc). Thus, we considered the direction of the offset as the main feature for discerning between models. While the difference in the values of the offset derived from observations and from the models may not be quantitatively significant, given the uncertainties of fitting the shock locations with perfect circles, it is encouraging that the offset predicted by our models is consistent in extent and direction with that inferred from observations within the uncertainties. It is worth mentioning that models producing, at the age of Cas A, an inward-moving reverse shock in the southern and western hemispheres (as suggested, e.g., by Sato et al. 2018) predict an offset to the south-west (SW) instead of the NW, at odds with the observations of Cas A (see Appendix A). Given the idealized and simplified description of the asymmetric shell considered in our models, the offset develops along the direction between the center of the explosion and the densest side of the shell. Consequently, it always points to the region characterized by the inward motion of the reverse shock. Interestingly, this is not the case for Cas A, where the offset points to the NW while inward-moving shocks are observed in the southern and western hemispheres (see Sato et al. 2018). The effects qualitatively shown here for the NW quadrant might operate as well in response to smaller-scale density enhancements in other directions; for instance, an interaction of the remnant with multiple shells or with a more complex structure of the asymmetric shell may explain the observed asymmetries. Finally, we note in Fig. 8 that the surface brightness of the radio emission is highest in the NW region. This is due to two factors: i) the post-shock plasma has the highest density in this region, as a result of the densest portion of the shell being shocked, and ii) the velocity of the reverse shock in the ejecta rest frame is highest there (see Fig. 7).
In model W15-IIb-sh-MHD+dec, the compression of the magnetic field during the interaction of the remnant with the shell further enhances the radio emission in the NW region (see lower right panel in Fig. 8). We stress again that a few words of caution are needed when comparing the radio maps derived here with actual radio observations of Cas A (as, for instance, the upper panel of Fig. 1): the synthesis of radio emission from the models does not take into account some relevant aspects, such as the unknown configuration and strength of the ambient magnetic field and the non-linear amplification of the field in proximity of the shocks due to the cosmic-ray streaming instability (Bell 2004). Indeed, our synthetic radio maps show substantial differences with respect to actual radio images of Cas A (compare, for instance, Fig. 8 with the upper panel of Fig. 1). Nevertheless, these maps are useful for identifying large-scale reverse shock asymmetries caused by the remnant-shell interaction. For instance, they predict a higher radio emission in the western than in the eastern hemisphere of the remnant as a consequence of the remnant-shell interaction, consistently with radio observations.

Effects of the shell on the Doppler velocity reconstruction

The excellent quality of the data collected for Cas A has allowed some authors to perform a very accurate 3D Doppler velocity reconstruction (Reed et al. 1995; DeLaney et al. 2010; Milisavljevic & Fesen 2013) to identify possible large-scale asymmetries of the remnant. The analysis of observed isolated knots of ejecta showed a significant blueshift-redshift velocity asymmetry: the ejecta traveling toward the observer (blueshifted) have, on average, lower velocities than the ejecta traveling away (redshifted). As a result, the center of expansion of the ejecta knots appears to be redshifted, with velocity v_c = 760 ± 100 km s⁻¹ (Milisavljevic & Fesen 2013). The question is whether this asymmetry is due to the explosion dynamics (as claimed by DeLaney et al. 2010; Isensee et al. 2010) or to the expansion of the remnant in an inhomogeneous CSM (as suggested by Reed et al. 1995; Milisavljevic & Fesen 2013), possibly the circumstellar shell investigated in the present paper. Our simulations include both the effects of the initial large-scale asymmetries inherited from the early phases of the SN explosion and the effects of the interaction of the remnant with an asymmetric circumstellar shell. They therefore allow us to investigate the possible causes of the redshift measured for the center of expansion. From the simulations we can easily decompose the velocity of the ejecta in each cell of the spatial domain into the component projected onto the plane of the sky and the component along the LoS. (Fig. 9 caption: projected, in the plane of the sky, and LoS velocities at the age of Cas A derived from models W15-2-cw-IIb-HD, left panel, W15-IIb-sh-HD-1eta, center panel, and W15-IIb-sh-HD-1eta-az, right panel. The solid blue line in each panel is the best-fit semicircle to the data from the models; the dashed blue line shows the same semicircle but with the velocities artificially scaled to match the value of v_R inferred from Cas A observations; the dashed red line is the best-fit semicircle to the actual data of Cas A derived by Milisavljevic & Fesen 2013.)
The first is the analog of the projected velocity of isolated ejecta knots derived from their projected radii from the center of expansion in Cas A images, and the second is the analog of the Doppler velocities of the same knots measured from the spectra of Cas A (e.g., Milisavljevic & Fesen 2013). For the analysis, we selected only cells composed of at least 90% shocked ejecta. Then, for each pair of (x, z) coordinates, we considered the cell with the highest kinetic energy along y (i.e., along the LoS). In other words, we selected cells characterized by a high mass of shocked ejecta and a significant velocity. We first checked the apparent Doppler shift of the center of expansion introduced by the initial asymmetric SN explosion. To this end, we considered model W15-2-cw-IIb-HD, i.e., the case of a remnant that expands through the spherically symmetric wind of the progenitor star without any interaction with a circumstellar shell. The left panel of Fig. 9 shows the LoS velocities versus the projected velocities derived for this case. We fitted the data points of the scatter plot with a semicircle and found that the center of expansion is redshifted with velocity v_c = 138 km s⁻¹. This redshift reflects the initial asymmetry of the SN explosion, which results in a high concentration of ⁴⁴Ti and ⁵⁶Ni in the northern hemisphere, opposite to the direction of the kick velocity of the compact object (a neutron star), which points south toward the observer (see Wongwathanarat et al. 2017). The radius of our best-fit semicircle corresponds to a velocity v_R = 4249 km s⁻¹, lower than that found by Milisavljevic & Fesen (2013) from the analysis of observations (v_R = 4820 km s⁻¹). In fact, the explosion energy of our SN model (E_exp ≈ 1.5 B; see Table 1) is a factor of ≈ 1.33 smaller than the value inferred from the observations of Cas A, E_exp ≈ 2 B (e.g., Sato et al. 2020). Considering that the explosion energy went almost entirely into the kinetic energy of the ejecta, it is not surprising that the ejecta velocities in our models are smaller than the velocities observed in Cas A. However, even after artificially scaling the velocities of the model to match the value of v_R found in Cas A, the value of v_c remains much lower than that observed. In other words, an initial SN explosion with a large-scale asymmetry capable of producing a distribution of ⁴⁴Ti and ⁵⁶Ni compatible with observations leads to a redshift of the center of expansion significantly lower than that observed in Cas A (compare the dashed blue and red semicircles in the left panel of Fig. 9). We then investigated whether an asymmetric shell such as that discussed in this paper can account for the observed redshift of the center of expansion. The center panel of Fig. 9 shows the result for our reference model W15-IIb-sh-HD-1eta, in which the remnant interacts with an asymmetric circumstellar shell with the symmetry axis perpendicular to the LoS (hence lying in the plane of the sky; φ = 0 in Eq. 1). We found that, in this case, the values of v_R and v_c are similar to those found with model W15-2-cw-IIb-HD. Thus, the interaction of the remnant with a shell that is symmetric with respect to the plane of the sky cannot contribute to determining the blueshift-redshift velocity asymmetry observed in Cas A. This was expected because, in this case, the effects of the shell on the propagation of the ejecta are roughly the same in the blueshifted and redshifted hemispheres of the remnant.
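The semicircle fit used to extract v_R and v_c can be sketched as follows: fit a circle centered on the LoS-velocity axis, v_proj^2 + (v_LoS − v_c)^2 = v_R^2, to the outer envelope of the scatter plot. The snippet below is a minimal example under our own assumptions (in particular, how the envelope points are selected from the scatter is our choice, not necessarily the authors'); the synthetic data are built to loosely mimic the model values quoted in the text.

```python
# Sketch (assumptions flagged in the lead-in): fit a circle centered on the LoS
# axis to the outer envelope of the (projected, LoS) velocity scatter, in the
# spirit of the semicircle fits of Fig. 9.
import numpy as np
from scipy.optimize import least_squares

def fit_semicircle(v_proj, v_los, n_ang=36):
    # Pick the fastest point in each angular bin around a first-guess center.
    v_c0 = np.median(v_los)
    ang = np.arctan2(v_los - v_c0, v_proj)
    speed = np.hypot(v_proj, v_los - v_c0)
    edges = np.linspace(-np.pi, np.pi, n_ang + 1)
    idx = [np.argmax(np.where((ang >= lo) & (ang < hi), speed, -np.inf))
           for lo, hi in zip(edges[:-1], edges[1:])
           if np.any((ang >= lo) & (ang < hi))]
    vp, vl = v_proj[idx], v_los[idx]

    def residuals(p):
        v_c, v_r = p
        return np.hypot(vp, vl - v_c) - v_r

    sol = least_squares(residuals, x0=[v_c0, speed.max()])
    return sol.x  # (v_c, v_R)

# Synthetic example: knots on a shell expanding at ~4250 km/s with a ~140 km/s
# redshifted center, loosely mimicking the model values quoted in the text.
rng = np.random.default_rng(3)
theta = rng.uniform(0, np.pi, 3000)
v_proj = 4250 * np.sin(theta) * rng.uniform(0.7, 1.0, 3000)
v_los = 140 + 4250 * np.cos(theta) * rng.uniform(0.7, 1.0, 3000)
print(fit_semicircle(v_proj, v_los))
```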
On the other hand, a denser shell on the blueshifted nearside would inhibit the forward expansion of the ejecta toward the observer, resulting in an apparently redshifted center of expansion (Reed et al. 1995). To test this possibility, we ran further simulations similar to model W15-IIb-sh-HD-1eta but with the asymmetric shell oriented in such a way that its symmetry axis forms an angle φ > 0 with the plane of the sky. The right panel of Fig. 9 shows the results for our favorite model, W15-IIb-sh-HD-1eta-az (see Table 1), in which the shell is similar to that adopted in model W15-IIb-sh-HD-1eta but rotated by 50° about the z axis, counterclockwise from the [x, z] plane (so that the densest portion of the shell is located on the nearside to the NW), and with a reference density n_sh = 20 cm⁻³ (leading to a total mass of the shell of the order of M_sh ≈ 2 M⊙; see parameters in Table 2). Increasing the reference density n_sh with respect to the models discussed in the previous sections was necessary to keep the shell densities on the western and southern sides of the [x, z] plane similar to those in model W15-IIb-sh-HD-1eta and, therefore, to produce profiles of forward and reverse shock velocities like those shown in Fig. 6 (see the upper panel of Fig. 10) and an offset between the reverse and forward shocks of ≈ 0.1 pc to the NW, similar to those found in Fig. 8 (see the lower panel of Fig. 10). With this shell configuration, the center of expansion appears to be redshifted with velocity v_c = 472 km s⁻¹. In this case, an artificial scaling of the modeled velocities to match the observed value of v_R leads to a value of v_c much closer to that inferred from observations than in the other models, as is evident from the comparison of the dashed blue and red semicircles in the right panel of Fig. 9. By comparing the lower panel of Fig. 10 with Fig. 8, we note that model W15-IIb-sh-HD-1eta-az is characterized by a significantly higher radio emission in the NW quadrant than the other models. This is the result of the density of the shocked shell, which is a factor of 2 higher than in the other models. Hence, an accurate analysis of the radio emission in the western hemisphere of Cas A may help constrain the density (and estimate the mass) of the putative circumstellar shell. We conclude that the initial asymmetry in the SN explosion of Cas A determines a redshift of the center of expansion which, however, is not sufficient to explain the value inferred from the observations. The encounter of the remnant with a dense circumstellar shell leads to a further blueshift-redshift velocity asymmetry which makes the apparent redshift of the center of expansion compatible with the observations, provided that the nearside of the shell is denser than its opposite side. The scenario in which Cas A interacted with an asymmetric circumstellar shell about a hundred years ago is also supported by the evidence that the majority of QSFs (76%) have blueshifted velocities (Reed et al. 1995), implying that there is more CSM material in the front than in the back. This provides additional reasons to suspect that the asymmetries associated with the reverse shock in Cas A can be attributed to inhomogeneities in the surrounding material.

Summary and conclusions

In this work we investigated whether some of the large-scale asymmetries observed in the reverse shock of Cas A (in particular, the inward-moving reverse shock observed in the western hemisphere of the remnant; Vink et al.
2022) can be interpreted as signatures of a past interaction of the remnant with a massive circumstellar shell, possibly the consequence of an episodic mass loss from the progenitor star that occurred in the latest phases of its evolution before collapse. To this end, we performed 3D HD and MHD simulations which describe the interaction of a SNR with an asymmetric dense shell of CSM. The SNR models are adapted from those presented in Paper I, which describe the formation of the remnant of a neutrino-driven SN explosion with asymmetries and features consistent with those observed in the ejecta distribution of Cas A. The simulations follow the evolution from the breakout of the shock wave at the stellar surface (≈ 20 hours after core-collapse) to the expansion of the remnant up to an age of ≈ 2000 yr. The initial conditions are provided by the output of a 3D neutrino-driven SN simulation that produces an ejecta distribution characterized by a large-scale asymmetry consistent with the basic properties of Cas A (Wongwathanarat et al. 2017). The interaction of the remnant with the shell is assumed to occur during the first ≈ 300 years of evolution, namely at an epoch prior to the age of Cas A. We explored whether the back-reaction of accelerated cosmic rays, the energy deposition from radioactive decay or an ambient magnetic field (in the absence of non-linear amplification) have a significant effect during the remnant-shell interaction by comparing models calculated with these physical processes turned either on or off. The model results at a remnant age of ≈ 350 − 370 yr were compared with the observations of Cas A. More specifically, we explored the parameter space of the shell properties, searching for a set of parameters (thickness, radius and total mass of the shell, density contrast between the two sides of the shell, orientation of the asymmetric shell in 3D space) able to produce profiles of forward and reverse shock velocities versus the position angle in the plane of the sky similar to those observed in Cas A (Vink et al. 2022). The analysis of the simulations indicates the following.

- Initially, the interaction of the remnant with the thin dense shell slows down the forward shock (because of its propagation through the denser medium of the shell) and drives a reflected shock, which propagates inward and interacts with the reverse shock. In the case of an asymmetric shell with one side denser than the other, the effects of the interaction are largest where the shell is densest.

- After the forward shock crosses the shell, it propagates again through the r⁻² density distribution of the stellar wind, with velocities similar to those found in models not including the shell. In contrast, the reverse shock is strongly affected by the reflected shock from the shell and, depending on the shell density, can start moving inward in the observer frame, at odds with models not including the shell. We found that the signatures of the past interaction of the remnant with a thin dense shell of CSM persist in the reverse shock for a much longer time (at least ≈ 2000 years in our models) than in the forward shock (just a few tens of years).
- Among the models analyzed, those producing reverse shock asymmetries analogous to those observed in Cas A predict that the shell was thin (σ ≈ 0.02 pc), with a radius r_sh ≈ 1.5 pc from the center of the explosion, and that it was asymmetric, with the densest portion on the nearside to the NW (model W15-IIb-sh-HD-1eta-az in Table 1).

- In the models listed in Table 1, the remnant-shell interaction determines the following asymmetries at the age of Cas A: i) the reverse shock moves inward in the observer frame in the NW region, while it moves outward in the other regions; ii) the geometric center of the reverse shock is offset to the NW by ≈ 0.1 pc from the geometric center of the forward shock, and both are offset to the SE from the center of the explosion; iii) significant nonthermal emission is expected from the reverse shock in the NW region because there the ejecta enter the reverse shock with a higher relative velocity (between 4000 and 7000 km s⁻¹) than in the other regions (below 2000 km s⁻¹).

- The interaction of the remnant with a dense circumstellar shell can help explain the origin of the 3D asymmetry measured by Doppler velocities (e.g., Reed et al. 1995; DeLaney et al. 2010; Milisavljevic & Fesen 2013). We found that the asymmetry of the initial explosion, which is capable of producing distributions of ⁴⁴Ti and ⁵⁶Ni remarkably similar to those observed, leads to a redshifted center of expansion with a velocity much lower than that observed. On the other hand, the interaction of the remnant with a shell that is denser on the (blueshifted) nearside than on the (redshifted) farside inhibits the forward expansion of the ejecta toward the observer more strongly, thus resulting in a center of expansion that appears redshifted with a velocity similar to that inferred from observations.

- The parameters of the shell do not change significantly if the back-reaction of accelerated cosmic rays, the energy deposition from radioactive decay or an ambient magnetic field are taken into account, although we have adopted a simplified modeling of these processes.

We emphasize that the primary aim of our study was to investigate whether the inward-moving reverse shock observed in the western hemisphere of Cas A may be the signature of a past interaction with a circumstellar shell. Although our study does not aim to derive a unique and accurate reconstruction of the pre-SN circumstellar shell, it clearly demonstrates that the main large-scale asymmetries observed in the reverse shock of Cas A can be interpreted as evidence that the remnant interacted with a thin shell of material, most likely ejected from the progenitor star before core-collapse. According to our study, the shell was not spherically symmetric but had one side denser than the other, oriented to the NW and rotated by ≈ 50° toward the observer. This caused the main large-scale asymmetries now observed in the reverse shock of Cas A. We note that the remnant-shell interaction could also explain the different structure of the western jet compared to the eastern one, with the former less prominent and more jagged than the latter. Indeed, this difference may indicate some interaction of the western jet with a dense CSM (maybe the circumstellar shell originated from the WR phase of the progenitor star, as suggested by Schure et al. 2008), at odds with the eastern jet, which was free to expand through a less dense environment.
According to our favorite scenario, the shell was the result of a massive eruption that occurred between ≈ 10⁴ and 10⁵ years before core-collapse, if we consider that the shell material was ejected, respectively, either at a few 10² km s⁻¹ (i.e., during a hypothetical common-envelope phase of the progenitor binary system) or at 10 − 20 km s⁻¹ (namely during the red supergiant phase of the progenitor). (Footnote 7: the progenitor of Cas A was not a red supergiant at the time of core-collapse, namely after its outer H envelope was stripped away, e.g., Koo et al. 2020; at the same time, we cannot exclude that the progenitor was in the red supergiant phase at the time of the mass eruption that produced the circumstellar shell.) The remnant then started to interact with the shell ≈ 180 years after the SN explosion, when the shell had an average radius of 1.5 pc. Some authors have found that a hypothetical WR phase for the progenitor of Cas A may have lasted no more than a few thousand years, leading to a shell not larger than 1 pc (e.g., Schure et al. 2008; van Veelen et al. 2009). Our estimated radius of the shell (1.5 pc) and the presumed epoch of the mass eruption (about 10⁴ − 10⁵ years before the SN), therefore, suggest that the shell originated well before the WR phase (if any) of the progenitor of Cas A. From our simulations, we have estimated a total mass of the shell of the order of M_sh ≈ 2 M⊙ (see Sect. 3.4). Considering that the progenitor of Cas A was probably a star with a main-sequence mass between 15 and 20 M⊙ (according to the values suggested, e.g., by Aldering et al. 1994; Lee et al. 2014) and that the mass of the star before collapse was ≈ 6 M⊙ (Nomoto et al. 1993), we expect the total mass lost during the latest phases of evolution of the progenitor to be between 9 and 14 M⊙. The estimated amount of mass of the shocked wind within the radius of the forward shock (assuming a spherically symmetric wind with gas density proportional to r⁻²) is ≈ 6 M⊙ (see Orlando et al. 2016). Thus, including the shell, the mass of shocked CSM is ≈ 8 M⊙, and we can infer that less than 5 M⊙ of CSM material is still outside the forward shock. We note that the mass of shocked CSM derived from our models is consistent with that inferred from the analysis of X-ray observations of Cas A (Borkowski et al. 1996; Lee et al. 2014). It is worth mentioning that the earliest radio images of Cas A, collected in 1962 (i.e., when the remnant was ≈ 310 years old), already show a bright radio ring in which the western region is brighter than the eastern one (Ryle et al. 1965), indicating an increase in the synchrotron emissivity there over its value elsewhere in the ring, precisely where our model predicts an increase in the reverse-shock strength. A stronger reverse shock could produce the asymmetry either through an increased efficiency of electron acceleration or through greater magnetic-field amplification. This explanation requires that the encounter with an asymmetric shell must have taken place well before 1962. This is consistent with our models listed in Table 1, which predict that the remnant-shell interaction occurred between 180 and 240 years after the SN (i.e., between the years 1830 and 1890) and that the reflected shock from the shell reached the reverse shock ≈ 290 years after the SN (i.e., in 1940). The observations collected in 1962, therefore, could witness the early inward propagation of the reverse shock in the western hemisphere.
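The quoted ejection epochs follow from simple kinematics, t ≈ r_sh / v_eject, with r_sh ≈ 1.5 pc at the time of the remnant-shell interaction. The short check below evaluates this for the two assumed ejection speeds (a few hundred km s⁻¹ for a common-envelope-like event, 10 − 20 km s⁻¹ for a red-supergiant wind); the specific values of 200 and 15 km s⁻¹ are our own illustrative choices.

```python
# Back-of-the-envelope check of the ejection epochs quoted above: a shell of
# radius ~1.5 pc, expelled at either a few hundred km/s or 10-20 km/s, gives
# ejection times of order 1e4 or 1e5 yr before core-collapse, respectively.
PC_KM = 3.086e13          # km per parsec
YR_S = 3.156e7            # seconds per year
r_shell_pc = 1.5

for v_kms in (200.0, 15.0):   # illustrative speeds, not values from the paper
    t_yr = r_shell_pc * PC_KM / v_kms / YR_S
    print(f"v = {v_kms:5.0f} km/s  ->  ejection ~ {t_yr:.1e} yr before collapse")
# v = 200 km/s -> ~7e3 yr (order 1e4); v = 15 km/s -> ~1e5 yr
```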
Interestingly, the scenario supported by our models is consistent with that proposed by Koo et al. (2018) from the analysis of a long-exposure image centered on the 1.644 µm emission, collected with the United Kingdom Infrared Telescope. From the analysis of the spatial distribution of QSFs, these authors found a high concentration of these structures in the western hemisphere of Cas A and noted that their overall morphology is similar to that expected from a fragmented shell disrupted by fast-moving dense ejecta knots. Thus, they interpreted the QSFs as the result of the interaction of the remnant with an asymmetric shell of CSM. They proposed that the progenitor system most likely ejected its envelope eruptively to the west about 10⁴ − 10⁵ years before the explosion (consistently with our findings). Koo et al. (2018) estimated a total H+He mass of visible QSFs of ≈ 0.23 M⊙; considering a QSF lifetime of 60 yr, the mass can be a little larger (≈ 0.35 M⊙). The estimated mass of the QSFs is lower than that estimated with our models (of the order of 2 M⊙). However, if the QSFs are residuals of a shocked shell, they are certainly its densest component, and a significant fraction of the shocked material from the shell may not be visible in the form of QSFs. Since the formation mechanism of the QSFs is still unknown, it is hard to estimate which fraction of the shell mass condensed into them. According to model W15-IIb-sh-HD-1eta-az, a shell mass of the order of 2 M⊙ is strongly favored by the Doppler velocities discussed in Sect. 3.4, so our conclusion here is that the mass determined for the QSFs can only be a small fraction of the total mass of the original shell. Our models show that the radio emission from the region of interaction of the remnant with the densest side of the shell is sensitive to the mass density of the shell. Thus, the analysis of radio emission may offer observational possibilities to better constrain the shell mass. Although our models produce reverse shock asymmetries that qualitatively agree with the observations of Cas A, some issues remain unexplained, for instance: i) the offset between the geometric centers of the reverse and forward shocks (≈ 0.1 pc) is much smaller than that observed (≈ 0.2 pc; Gotthelf et al. 2001); ii) in our models this offset always points to the region characterized by the inward motion of the reverse shock, whilst in Cas A the offset points to the NW but inward-moving shocks are also observed in the southern and western hemispheres. The discrepancies between models and observations might originate from the idealized and simplified description of the asymmetric shell adopted in the present study. For instance, observations have shown some density enhancements in the north-east region (Weil et al. 2020), where some evidence for deceleration of the forward shock was recently reported (Vink et al. 2022). These findings suggest that the CSM structure may be more complex than modeled here. It is plausible that a few mass eruptions at different epochs occurred before the collapse of the progenitor star, and each ejected shell of material may have had its own, generally non-spherical, structure. In this case, the more complex density structure of the circumstellar shell(s) may have induced large-scale asymmetries in the reverse shock that our models are unable to produce.
Nevertheless, our models naturally recover reverse shock asymmetries similar to those observed, thus supporting the scenario of an interaction of the remnant with a circumstellar shell. A more accurate description of the CSM structure certainly requires more observational input. In the case of Cas A, several lines of evidence suggest that the progenitor star experienced significant mass loss during its lifetime. For instance, from the analysis of XMM-Newton observations, Willingale et al. (2003) estimated a total mass lost from the progenitor star before stellar death as high as ≈ 20 M⊙ and suggested that the progenitor was a WR star that formed a dense nebular shell before collapse. In this respect, van Veelen et al. (2009) performed HD simulations that describe the formation of the CSM around the progenitor of Cas A before the SN, considering several WR lifetimes, and the subsequent expansion of the SNR through this CSM. Comparing the model results with observations, they concluded that, most likely, the progenitor star of Cas A either did not have a WR stage or had one that lasted less than a few thousand years (see also Schure et al. 2008). These authors, however, considered an almost spherically symmetric cavity of the WR wind, so that, after the interaction, the reverse shock was moving inward at all position angles. Thus, it remains to be investigated whether an asymmetric WR wind-cavity could have effects similar to those found here for the asymmetric shell. It is quite possible that both these structures of the CSM contributed to determining the asymmetries observed in the reverse shock of Cas A. Erratic mass-loss episodes are known to occur in massive stars before core-collapse. This is the case, for instance, of H-rich massive stars that are progenitors of SNe showing evidence of interaction with dense H- and/or He-rich CSM (hence of Type IIn and/or Ibn, respectively), the result of mass-loss episodes that occurred shortly before core-collapse (e.g., Smith et al. 2010; Fraser et al. 2013; Pastorello et al. 2013; Smith 2014; Ofek et al. 2014 and references therein). Large eruptions are observed in LBVs, which indeed show large variations in both their spectra and brightness (e.g., Humphreys & Davidson 1994; Humphreys et al. 1999). In these cases, the dense and highly structured CSM in which the star explodes strongly influences the dynamics of the expanding remnant, which retains memory of the interaction with the dense and structured CSM for hundreds of years after the SN (e.g., Ustamujic et al. 2021). In recent years, observations have shown that H-poor progenitor stars can also experience significant mass-loss events before stellar death. Signs of interaction of the SN blast wave with a dense medium have been found in H-stripped Type-Ibn SNe (e.g., Foley et al. 2007; Pastorello et al. 2007), Type-IIb SNe (e.g., Gal-Yam et al. 2014; Maeda et al. 2015) and Type-Ib SNe (e.g., Svirski & Nakar 2014). Margutti et al. (2017) analyzed observations of SN 2014C during the first 500 days of its evolution and found the signatures of the interaction of the SN shock with a dense shell of ≈ 1 M⊙ at a distance of ≈ 0.02 pc, most likely the matter of a massive eruption from the progenitor in the decades before the collapse. Interestingly, the mass of the shell inferred from SN 2014C observations is similar to (a factor of 2 lower than) that estimated here for the shell that interacted with Cas A (although, in the case of SN 2014C, the shell was ejected immediately before the collapse).
The above examples indicate mass-loss events that occurred in the decades to centuries before collapse. Similar events could also have occurred in the hundreds of thousands of years before the SN, so that the remnant hits the relics of these mass eruptions even several hundred years after the explosion. For instance, observations of the Vela SNR have shown evidence of an interaction of the remnant with a circumstellar shell with mass ≈ 1.26 M⊙ (again similar to that found here), most likely blown by the progenitor star about 10⁶ years before collapse (Sapienza et al. 2021). If the scenario of a massive shell of material erupted by the progenitor of Cas A before collapse is confirmed, the information on the mass of the shell and on the time of the episodic mass loss can be useful to delineate the mass-loss history of the progenitor star. This information may help to shed light on the question of whether the progenitor of Cas A was a single star or a member of a binary and, more generally, whether the progenitor of Cas A (and of other Type-IIb SNe) might have lost its hydrogen envelope through an episode of interaction with a companion star in a binary system. Most likely the shell is the result of one or multiple mass eruptions from the progenitor star during the late stages of stellar evolution (e.g., Smith et al. 2014; Levesque et al. 2014; Graham et al. 2014). However, it is interesting to note that shell asymmetries similar to that adopted here can also be produced by simulations describing the CSM of runaway massive stars, in which lopsided bow-shock nebulae result from the wind-ISM interaction (e.g., Meyer et al. 2017, 2020, 2021). These stars, moving supersonically through the ISM, can originate from the break-up of binary systems following the SN explosion of one of the binary components (e.g., Blaauw 1961; Stone 1991; Hoogerwerf et al. 2001; Dinçel et al. 2015) or as a consequence of dynamical multi-body encounters in dense stellar systems (e.g., Gies & Bolton 1986; Lada & Lada 2003; Gvaramadze & Gualandris 2011). Ascertaining whether Cas A interacted with an asymmetric circumstellar shell reminiscent of those observed around runaway massive stars may help in understanding why the Cas A progenitor was stripped. Addressing the above issues may certainly be relevant to shed some light on the still uncertain physical mechanisms that drive mass loss in massive stars (e.g., Smith 2014). This is of pivotal importance given the role played by mass loss from massive stars in the galactic ecosystem, through its influence on the lifetime, luminosity and final fate of stars and its contribution to the chemical enrichment of the interstellar medium.

Appendix A

The asymmetry of the circumstellar shell investigated in the main body of the paper leads to a reverse shock moving inward in the observer frame in the NW hemisphere. This feature does not fit well with observations of Cas A, which show inward-moving shocks preferentially in the western and southern hemispheres (e.g., Anderson & Rudnick 1995; Keohane et al. 1996; DeLaney et al. 2004; Helder & Vink 2008; Sato et al. 2018). A better match of the models with the observations may be obtained by changing the orientation of the asymmetric shell. We considered, therefore, the setup of model W15-IIb-sh-HD-1eta and rotated the shell by approximately 90° clockwise about the y axis (model W15-IIb-sh-HD-1eta-sw). The upper panel in Fig.
A.1 shows the forward and reverse shock velocities versus the position angle in the plane of the sky at the age of Cas A derived from the analysis of this model. As expected, in model W15-IIb-sh-HD-1eta-sw the reverse shock moves inward in the western and southern hemispheres. We note, however, that, in this way, the model is able to roughly match the velocity profiles of the forward and reverse shocks in the NW quadrant, but it fails to reproduce the profiles in the south-west quadrant (compare the red and magenta curves in the upper panel of Fig. A.1). Furthermore, we found that other asymmetries that characterize the remnant morphology are not reproduced by simply changing the orientation of the asymmetric shell. In particular, model W15-IIb-sh-HD-1eta-sw predicts an offset of ≈ 0.1 pc to the SW between the geometric center of the reverse shock and that of the forward shock, at odds with observations, which show an offset of ≈ 0.2 pc to the NW (e.g., Gotthelf et al. 2001). This is evident from the lower panel in Fig. A.1, which reports the radio map normalized to the maximum radio flux in model W15-IIb-sh-HD-1eta-sw. The blue dotted contours show cuts of the forward and reverse shocks in the plane of the sky passing through the center of the explosion (marked with a blue cross). The red and green circles mark the same cuts but for spheres roughly delineating the forward and reverse shocks in model W15-IIb-sh-HD-1eta-sw, respectively. The centers of these spheres (marked with crosses of the same colors as the corresponding circles) represent the geometric centers of the forward and reverse shocks, respectively, which are offset to the north-east from the center of the explosion. For the simple asymmetry considered for the circumstellar shell, we found that, in general, the offset between the geometric center of the reverse shock and that of the forward shock points toward the densest portion of the shell, where the reverse shock propagates inward in the observer frame. We conclude that an interaction with multiple shells, or with a shell with a more complex structure and geometry, may be required to better match the observations of Cas A.
Photothermally induced, reversible phase transition in methylammonium lead triiodide

Abstract

Metal halide perovskites (MHPs) are known to undergo several structural phase transitions, from lower to higher symmetry, upon heating. While structural phase transitions have been investigated by a wide range of optical, thermal and electrical methods, most measurements are quasi-static and hence do not provide direct information regarding the fundamental timescale of phase transitions in this emerging class of semiconductors. Here we investigate the timescale of the orthorhombic-to-tetragonal phase transition in the prototypical metal halide perovskite, methylammonium lead triiodide (CH3NH3PbI3 or MAPbI3), using cryogenic nanosecond transient absorption spectroscopy. By using mid-infrared pump pulses to impulsively heat up the material at slightly below the phase-transition temperature and probing the transient optical response as a function of delay time, we observed a clean signature of a transient, reversible orthorhombic-to-tetragonal phase transition. The forward phase transition is found to proceed on a timescale of tens of nanoseconds, after which a backward phase transition progresses on a timescale commensurate with heat dissipation from the film to the underlying substrate. A high degree of transient phase transition is observed, accounting for one third of the steady-state phase transition. In comparison to fully inorganic phase-change materials such as VO2, the orders-of-magnitude slower phase transition in MAPbI3 can be attributed to the large energy barrier associated with the strong hydrogen bonding between the organic cation and the inorganic framework. Our approach paves the way for unraveling phase transition dynamics in MHPs and other hybrid semiconducting materials.
Metal-halide perovskites (MHPs) have emerged in the past decade as a promising class of semiconducting materials for photovoltaic and optoelectronic applications. [1-5] Their appealing features include facile solution processability, low material cost, and defect tolerance enabling exceptional device performance. The possibility of incorporating organic spacers in MHPs to form reduced-dimensional structures has led to remarkable structural diversity, and with it a wide range of properties not only for photovoltaics and optoelectronics, 6 but also for lateral heterojunctions, spintronics and optical chirality. 3,7,8 Unlike conventional semiconductors such as Si and GaAs, MHPs, similar to oxide perovskites, undergo a sequence of phase transitions, typically going from a less symmetric space group at low temperatures to a higher-symmetry space group upon heating. 9 Understanding the phase transitions in MHPs is crucially important, since phase transitions are naturally encountered in the solution synthesis and processing of these materials, as well as in their degradation and failure over time. 10 On the other hand, manipulating phase transitions of MHPs offers a unique 'tuning knob' for material properties such as optical transparency and stimuli-responsiveness. [11-15] Phase transitions in MHPs have been investigated in the past by a myriad of techniques such as X-ray diffraction, 16 time-of-flight neutron scattering, 17 Raman spectroscopy, 18 and differential scanning calorimetry. 19 Recently, the kinetics and energetics of phase transitions between the nonperovskite and perovskite phases of several MHPs have been explored using advanced microphotoluminescence spectroscopy and cathodoluminescence techniques, [20-22] and it was revealed that a liquid-like interface between the two disparate phases facilitates the phase transition owing to configurational entropy. Nevertheless, most investigations of phase transitions in MHPs have temporal resolutions in the second range, and the fundamental timescales of phase transitions in prototypical MHPs, such as methylammonium lead triiodide (CH3NH3PbI3, or MAPbI3), remain unexplored so far. Developing new experimental schemes with high temporal resolution can help unravel the fundamental timescales of phase transitions in this technologically important class of materials. In this work, we report an ultrafast, optical pump-probe spectroscopic study of a phase transition in MAPbI3. We leverage the strong vibrational absorption of this material in the mid-infrared range (near 3200 cm⁻¹), which permits strong vibrational excitation to transiently heat up the material in an impulsive fashion. Following the mid-infrared pump with an electronically delayed broadband probe pulse, we observe a clean and unambiguous signature attributable to the orthorhombic-to-tetragonal phase transition. This phase transition is found to proceed on the tens-of-ns timescale, which is significantly slower than phase transitions in inorganic materials such as correlated oxides. 23 Comparing transient spectra with steady-state data reveals that, in a spatially averaged sense, the transiently induced phase transition can account for as much as one third of a full steady-state phase transition, which is currently limited by the competing heat-transport process from the film to the substrate.
Our method can be generalized to the study of other hybrid semiconductors exhibiting phase transitions and opens up pathways for realizing optically triggered and controlled switching functionalities when the two distinct phases have disparate electronic, optical, or magnetic properties. Figure 1a presents the steady-state optical transmittance measured for a ~600-nm-thick MAPbI3 sample as a function of temperature. The MAPbI3 film is deposited on a mid-infrared-transparent CaF2 substrate, with a cross-sectional scanning electron microscopy (SEM) image shown in Fig. 1b. As the temperature increases from 78 K, the MAPbI3 film is initially in the orthorhombic phase with a space group of Pnma. 16,17,24 At about 158 K, a structural phase transition to the tetragonal phase with an I4/mcm space group is observed, which is associated with an abrupt redshift of the absorption onset wavelength from ~745 nm to ~790 nm, as seen in the transmittance data. 25 The crystal structures of the two phases of MAPbI3 relevant to this work are sketched in Fig. 1c. The first-order, orthorhombic-to-tetragonal phase transition involves the rotation of the Pb-I octahedra (a combination of in-plane rotation along the b axis and out-of-plane rotations along the a and c axes), coupled with an order-disorder transition of the MA cations. 17,26 The MA cations, with C3v point group, are frozen in the orthorhombic phase but undergo liquid-like reorientational motions in the tetragonal phase, with characteristic timescales of a few ps. 27,28 In both the orthorhombic and tetragonal phases, the material's optical absorption onset wavelength blueshifts with increasing temperature.

Steady-state characterization

The signature of the phase transition is also reflected in the steady-state, temperature-dependent infrared absorption spectra shown in Fig. 1d. Specifically, when transitioning from orthorhombic to tetragonal, a strong absorption near 3100 cm⁻¹ arising from the asymmetric N-H stretching vibrations drops to nearly zero, although some vibrational absorption near 3200 cm⁻¹ from the CH3NH3+ cations remains active in the tetragonal phase. 29 Such vibrational absorption of MAPbI3 in the mid-infrared (~0.4 eV) offers the possibility of pure vibrational (i.e., photothermal) stimulation of the material in optical pump-probe spectroscopy measurements (Fig. 1d, inset). Compared to the above-bandgap pumping scheme widely adopted in pump-probe measurements, mid-infrared pumping does not involve the excitation of charge carriers from the valence to the conduction band, and hence can provide a clean signature of photoinduced transient phase transitions, as reported in this work. In our ns-µs transient absorption measurements, the samples were photothermally excited by mid-infrared pump pulses centered at 3170 nm with a pulse width of ~170 fs and a spectral full-width at half-maximum of about 200 cm⁻¹. Broadband probe pulses covering a spectral range from 450 nm to 900 nm have a pulse width of 1-2 ns. The overall instrument response function of our setup is about 4 ns (see Supplementary Fig. S1). As we have shown in past work focusing on fs-to-ns transient absorption, 30 mid-infrared pump pulses at ~3170 nm resonantly and transiently populate the N-H antisymmetric stretching vibrations of the organic CH3NH3+ cations, which subsequently relax by dissipating the excess vibrational energy into the inorganic Pb-I octahedral framework.
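For reference, the photon energies quoted above follow directly from the usual spectroscopic conversions E = hcν̃ and E = hc/λ. The snippet below checks that the N-H stretch near 3200 cm⁻¹ and the 3170 nm pump both correspond to roughly 0.4 eV, i.e., well below the band gap, consistent with purely vibrational (photothermal) excitation.

```python
# Quick consistency check of the numbers used above: the N-H stretch near
# 3200 cm^-1 and the 3170 nm pump correspond to photon energies of ~0.4 eV.
H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e10     # speed of light, cm/s
EV = 1.602176634e-19  # J per eV

def wavenumber_to_ev(nu_cm):        # E = h * c * nu_tilde
    return H * C * nu_cm / EV

def wavelength_nm_to_ev(lam_nm):    # E = h * c / lambda
    return H * (C * 1e7) / lam_nm / EV

print(wavenumber_to_ev(3200))       # ~0.397 eV
print(wavelength_nm_to_ev(3170))    # ~0.391 eV
print(1e7 / 3170)                   # pump center in cm^-1, ~3155
```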
Following fs mid-infrared pulsed laser excitation, the organic and inorganic sublattices reach mutual thermal equilibrium via phonon-phonon interactions on a sub-ns timescale. As such, during the ns-to-µs time window investigated in this work, the various phonon modes of MAPbI3, whose frequencies span from 3200 cm⁻¹ on the organic cations down to nearly zero wavenumber for the inorganic sublattice, 18 maintain an internal thermal equilibrium. The relevant processes here are the heat-induced phase transition, concurrent with heat transfer from MAPbI3 to the substrate, since after the vibrational excitation the MAPbI3 film is at a higher temperature than the CaF2 substrate. Note that the CaF2 substrate is transparent to the mid-infrared excitation and hence remains cold relative to the thermally excited MAPbI3 film before heat transfer takes place. The positive ΔT/T signal corresponds to an increased transmittance, which, once compared with the steady-state transmittance data in Fig. 1a, again indicates an impulsive lattice temperature rise followed by a decay. Similar to the data for 78 K (Fig. 2a), the elevated lattice temperature relaxes in a few hundred ns. We note that the transient ΔT/T spectral map measured at 162 K differs in shape from its counterpart at 78 K, as no derivative-like line shape is seen in the spectra at 162 K. We ascribe this difference to the following: although blueshifts in the absorption onset wavelengths are observed for both phases, there is no apparent exciton absorption resonance in the tetragonal phase, and as a result, a blueshift in the absorption onset should only give rise to a transiently increased transmittance. Equivalently speaking, a derivative-like ΔT/T line shape must come from the shift of a resonant steady-state feature, which only holds true for the orthorhombic phase due to its strong excitonic absorption.

Transient results near the phase transition

After examining the photothermally induced transient behavior of MAPbI3 away from the phase transition, we performed measurements at temperatures slightly below the phase-transition temperature. We hypothesized that if the lattice temperature transiently climbs above the phase-transition point, the phase transition should be inducible. Fig. 2c displays the ΔT/T spectral map measured at 142 K, and the result differs drastically from those shown in Fig. 2a-b. Notably, the spectral feature is composed of a positive, narrow ΔT/T signal centered at 735 nm, as well as a much broader, negative ΔT/T signal centered at about 760 nm. The positive ΔT/T feature has a fast rise in the first few ns (unresolved within the instrument response of our setup), which is similar to the fast rise time of ΔT/T observed at both 78 K and 162 K. In contrast, the time evolution of the negative ΔT/T signal has a much longer rise time, in the ~100 ns regime, suggesting that a different photoinduced process is at play. At 142 K, MAPbI3 is in the orthorhombic regime (Fig. 1a), hence upon photothermal excitation an impulsive temperature rise, followed by a temperature decay, can account for the positive ΔT/T band centered at 735 nm. As expected, the decay timescale of the positive ΔT/T signal matches the results obtained at 78 K and 162 K. In order to determine the origin of the broadband negative ΔT/T feature, we can revisit the steady-state, temperature-dependent transmittance data in Fig. 1a.
Notably, accompanying the steady-state orthorhombic-totetragonal phase transition is an abrupt redshift of the absorption onset wavelength (from ~740 nm to ~780 nm), so a photoinduced phase transition is expected to lead to an enhanced absorption in the range of 740 nm ~ 780 nm, which matches very well with the negative โˆ† / signal in Fig. 2c. As a result, we can attribute the broad negative โˆ† / band to a photothermally induced orthorhombic-to-tetragonal phase transition. Since 142 K is still relatively far from โˆ’ , the most negative โˆ† / value induced by photothermal excitation reaches about -1.5%, indicating that only a small fraction of the material undergoes the induced phase transition. We note that since our The result in Fig. 2d is qualitatively similar to that obtained at 142 K, in that the โˆ† / is mainly composed of a positive band centered at 728 nm and a negative band centered at 757 nm. In addition, we observe a minor, third feature, which is a short-lived positive โˆ† / signal spanning from 760 nm to 800 nm. Because this third feature has a fast rise time that is similar to the positive โˆ† / signal arising from heat dissipation of the orthorhombic phase, we can attribute it to transient impulsive lattice temperature rise and subsequent temperature dissipation of the pre-existing minor tetragonal phase in the matrix. Note that at 154 K, the steady-state transmittance data (Fig. 1a) indicates that some minor tetragonal phase has already formed in the orthorhombic matrix. The much faster decay time of the third โˆ† / feature, however, arises since this positive โˆ† / signal spectrally overlaps with, and is largely overwhelmed by, the broadband, phase-transition induced negative โˆ† / signal centered at 757 nm with a much stronger amplitude. Note that at 156 K and 158 K ( Supplementary Fig. S6), this positive โˆ† / band from 760 nm to 800 nm exhibits a longer decay time and larger amplitude, since a growth in the equilibrium fraction of tetragonal phase leads to a larger โˆ† / amplitude and associated with it a diminishing degree of photothermally induced orthorhombic-to-tetragonal phase transition. Comparing the positive โˆ† / signal centered at 728 nm measured at 154 K (Fig. 2d) and its counterpart centered at 735 nm obtained at 142 K (Fig. 2c), we find that the former signal undergoes much smaller decay over the plotted 400 ns time window. The โˆ† / kinetics at 757 nm represents transient, photothermally induced forward and backward phase transitions, whereas the โˆ† / kinetics at 728 nm depicts temporal evolution of the fraction of the orthorhombic phase that has not undergone any phase transition. At 154 K, the photothermally induced phase transition leads to a maximal transmittance loss of 8% at 757 nm, which is substantially higher than that seen at 142 K (~1.5%). As a result, the tetragonal-to-orthorhombic backward phase transition, which kicks in at around 200 ns after the signal from transiently formed tetragonal phase reaches its plateau, "replenishes" the higher-temperature (with respect to the lattice temperature before excitation) orthorhombic phase, thereby contributing to a positive โˆ† / signal centered at 728 nm and leading to a slowdown in its decay in comparison to the result collected at temperatures further away from the phase transition (e.g., Fig. 2a-c). 
Temperature-dependent kinetics
To further examine the amplitude and kinetics of ∆T/T associated with the transient phase transition, we plot the temperature-dependent decay kinetics of ∆T/T in Fig. 3a and 3b, extracted at the wavelengths of 757 nm and 728 nm, respectively. From Fig. 3a we see that the photothermally induced phase transition is clearly observed at temperatures as low as 130 K, about 26 K below the steady-state thermodynamic transition temperature T_O-T (156~158 K). As the temperature approaches T_O-T from below, the strength of the negative ∆T/T signal grows superlinearly with temperature. Although a constant pump fluence of 2.4 mJ·cm^-2 was used in these temperature-dependent measurements, the absorbed pump fluence, and hence the transient rise of lattice temperature near time zero, is expected to decrease with increasing measurement temperature, based on the steady-state infrared absorption spectra, which show a decreasing absorption of the mid-infrared pump (Fig. 1d). The superlinear growth of the transiently formed tetragonal phase mirrors the steady-state behavior, namely a superlinear growth of the volume fraction of the incipient tetragonal phase in the orthorhombic matrix with linearly increasing temperature. The presence of ionic defects 31,32 and of grain and twin boundaries 33 typically found in MHPs can facilitate the formation of the tetragonal phase below T_O-T. Spatially resolved X-ray diffraction experiments, 34 either static or time-resolved, can provide more direct and conclusive evidence regarding the spatial distribution of the activation-energy lowering by the incipient tetragonal phase domains in the orthorhombic matrix. When the measurement temperature reaches 156 K, a decline in the amplitude of the negative ∆T/T is observed, corresponding to a diminishing steady-state orthorhombic volume fraction in favor of nearly complete formation of the tetragonal phase. The temperature-dependent ∆T/T kinetics at 728 nm in Fig. 3b capture the temporal evolution of the orthorhombic phase. At low temperatures (130 K to 140 K), the fast rise in ∆T/T (<5 ns) is followed by a nearly mono-exponential decay in the time window up to 600 ns, reflecting heat transfer. As the temperature increases from 144 K, the fast ∆T/T rise starts to be followed by a much slower rise, the timescale of which quantitatively matches the growth of the negative ∆T/T signal in Fig. 3a at the corresponding temperatures. The agreement between the two timescales (i.e., the slow growth of the negative ∆T/T signal at 757 nm and the slow rise of the positive ∆T/T at 728 nm) stems from the fact that the transmittance of MAPbI3 at 728 nm is higher in the tetragonal phase than in the orthorhombic phase (see Fig. 1a and Supplementary Fig. S2), owing to the stronger excitonic absorption of the latter, so a transient orthorhombic-to-tetragonal phase transition leads to a slowly increasing transmittance at 728 nm. At longer delay times, a slower ∆T/T decay is seen at higher temperature, due to the backward phase transition discussed above. We note that the backward phase transition generally proceeds at a slower pace than the forward one, since even at 600 ns the effects of the backward phase transition can still be seen, especially at higher measurement temperatures.
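The mono-exponential behavior described above can be quantified with a standard least-squares fit. The sketch below (hypothetical variable names, scipy's curve_fit, and an assumed rise-toward-plateau functional form) illustrates how a rise time such as the ~100 ns growth of the negative ∆T/T signal might be extracted; it is not the authors' fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp_rise(t_ns, amplitude, tau_ns):
    """Mono-exponential approach to a plateau: amplitude * (1 - exp(-t/tau))."""
    return amplitude * (1.0 - np.exp(-np.clip(t_ns, 0.0, None) / tau_ns))

# Hypothetical data: delay_ns and dTT_757 are delays and dT/T values at 757 nm
# popt, pcov = curve_fit(mono_exp_rise, delay_ns, dTT_757, p0=[-0.08, 100.0])
# amplitude, tau_ns = popt                 # plateau ~ -8%, rise time ~ 100 ns
# tau_err = np.sqrt(np.diag(pcov))[1]      # 1-sigma uncertainty on the rise time
```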
The slower backward phase transition stems from the inefficient heat dissipation from the MAPbI3 film to the substrate due to the low thermal conductivity of MAPbI3, 35 which is slower than the forward phase transition timescale. (Fig. 3a). Specifically, it arises since the tetragonal fraction in MAPbI3 grows superlinearly with lattice temperature near the phase transition, and the lattice temperature rise scales linearly with the pump fluence. As expected, the fluence dependent โˆ† / kinetics at 757 nm (in Fig. 4b) shows a longer rise time for โˆ† / to reach the negative peak. By comparing the transient โˆ† / spectra at the delay time when the negative โˆ† / band reaches its peak value (i.e., at ~200 ns) with the โˆ† / spectra calculated using the steady-state transmittance data with respect to the transmittance at 154 K ( Supplementary Fig. S6 To examine if the observed timescale of photothermally induced phase transition of MAPbI3 grown on CaF2 using the antisolvent method is a generic behavior, we performed additional measurements on an MAPbI3 film made by the antisolvent-bath method on a c-plane sapphire substrate (see Methods). Steady-state transmittance data ( Supplementary Fig. S7) shows that the โˆ’ of this film is about 134 K. Upon cooling-heating cycles, this sample shows a stronger hysteresis behavior than the film on CaF2, and the โˆ’ slightly varies by a couple of degrees over the course of several heating-cooling cycles. The โˆ† / kinetics at 757 nm are presented in Fig. 3c and โˆ† / kinetics at 728 nm in Fig. 3d (full transient โˆ† / spectral maps are shown in Supplementary Fig. S8). Based on mono-exponential fits to the negative โˆ† / signal growth, we find this sample exhibits faster rate of phase transition (within the same order of magnitude), but the degree of transient phase transition is notably lower than MAPbI3 on CaF2, which is further evident from the drastically different โˆ† / response at 728 nm shown in Fig. 3d in comparison to surface-to-volume ratio of the film, which were found to influence โˆ’ . [36][37][38] Transient response with a visible pump Considering that carrier thermalization in MAPbI3 via carrier-phonon interactions and carrierimpurity scattering, 39-43 which are relevant processes in device operations, can lead to lattice heating, we further explored whether the transient phase transition can result from carrier thermalization by employing above-bandgap photoexcitation at 700 nm in transient absorption measurements. The โˆ† / spectral map acquired at 148 K is shown in Fig. 4c, and several representative decay kinetics are plotted in Fig. 4d. Firstly, a positive โˆ† / bleach signal is observed to center at 728 nm. This signal has an instantaneous rise, followed by an initial rapid decay that is notably faster than the counterpart measured under mid-infrared pump (Fig. 3b). We attribute this faster decay to rapid charge carrier recombination. In addition, we observe a broadband photoinduced absorption at wavelengths bluer than 700 nm above the bandgap (Fig. 4d, cyan curve). Such above-bandgap photoinduced absorption in tetragonal MAPbI3 has been reported previously to arise from a transient reflectivity change of the film, 44 which should underpin the similar observation here for the orthorhombic phase. Most interestingly, at 757 nm, a rapid rise in โˆ† / is observed, which is followed by a decay down to its negative peak value at about 100 ns. We note that this ~100-ns timescale matches well with that measured under midinfrared pumping (Fig. 
3a), indicating that above-bandgap optical pumping also triggers a transient phase transition. The fact that a transient phase transition induced by above-bandgap pumping occurs at a similar rate as that driven by mid-infrared pumping suggests a common thermal origin behind these observations. Although the pump fluence in our transient measurements was in the mJยทcm -2 regime, due to charge carrier localization in the form of excitons in orthorhombic MAPbI3 and its small exciton Bohr radius (~5 nm), 45 we expect local photo(thermally) induced orthorhombic-to-tetragonal phase transition to be possible even at low levels of photoexcitation, which may impact the transient optoelectronic properties of charge carriers. 46 Relatedly, it was shown that photoinduced phase transitions in orthorhombic MAPbI3 can generate local tetragonal inclusions to enable continuous-wave lasing. 47 Photoinduced phase transitions in solids have been under intense investigation, as such studies can provide fundamental understanding of nonequilibrium states and hidden metastable phases of matter which are essential to achieve novel material properties for applications in optical switchable devices. 23,[48][49][50] When fs laser pulses are used as the driving force, photoinduced adiabatic nonthermal phase transitions typically occur in the ps timescales. Examples include the metal-to-insulator transition in the prototypical strongly correlated vanadium dioxide, 23, 51 phase transition and metastable phase formation in manganites, 50,52 paraelectric-to-ferroelectric transition in SrTiO3, 51,53 and various chalcogenides 48,54 and charge-density-wave systems. 55 Even for MHPs, a photoinduced orthorhombic-to-cubic phase transition was observed in all-inorganic perovskite CsPbBr3 nanocrystals with ps transition timescales and a ~500 ps recovery time. 56 The investigation of photothermally driven phase transitions, on the other hand, has been less reported. This is in part because of the lack of a suitable optical pumping scheme that can resonantly perturb the material without charge carrier excitation, as most inorganic materials have phonon vibrations at the far-infrared to terahertz regime where pulsed laser sources are much lower in peak intensity than the mid-infrared pulses used here. Our result reported here shows that the reversible orthorhombic-to-tetragonal phase transition in MAPbI3 is at least three orders of magnitude slower than most of the previously reported photoinduced phase transitions in inorganic solids. The extremely slow transition can be largely attributed to the high energetic barriers against the cooperative orientational unlocking of the CH3NH3 + cations 57 due to the strong hydrogen bonding between the organic cations and the iodide anions. Our report suggests that in other soft material systems with phase transitions that involve breaking of hydrogen bonding networks, the timescale is likely to be slow as well. The reported technique can be adapted for studying fundamental timescales and energetic barriers of phase transitions in several important MHP compositions beyond MAPbI3, such as formamidinium lead triiodide (FAPbI3), CsPbI3, and CsSnI3, to name a few. [58][59][60] For CsPbI3 and CsSnI3 that lack organic cations for mid-infrared resonant pumping, one strategy is to measure nanocrystals of these inorganic MHPs embedded in liquid or solid matrix made of organic materials. 
Information from future experiments of this kind can be useful for understanding and developing strategies to prevent undesirable phase transitions in MHPs for the fabrication of stable and efficient photovoltaic devices. In conclusion, using vibrational-pump visible-probe spectroscopy in a ns-to-µs delay time window, we have resolved a reversible, photothermally induced transition of MAPbI3 from the orthorhombic to the tetragonal phase. 62 The impulsive vibrational pumping approach can also be generalized to the investigation of phase transitions in other hybrid and organic materials, such as spin-crossover materials 63 as well as superatomic crystals, 64 for potential applications in spintronics, nonvolatile memory, and thermal switching. Compared to other techniques for manipulating transient phase transitions, such as ultrafast-heating calorimetry and µs electric-pulse Joule heating, 65,66 which have heating rates in the 10^4~10^7 K·s^-1 range, 67 fs vibrational pumping provides a much higher heating rate of 10^9~10^10 K·s^-1 (a 1~10 K temperature rise in ~1 ns) and therefore a higher temporal resolution for revealing the fundamental timescales of phase transitions in solid-state hybrid and organic materials.
Methods
Sample fabrication. The fabrication of the MAPbI3 film on the CaF2 substrate followed previous work. 68 The MAPbI3 precursor solution was prepared by dissolving 159 mg of methylammonium iodide […]
Static characterization. A high-resolution scanning electron microscope (SEM; Quattro ESEM, ThermoFisher Scientific, USA) was used to obtain cross-sectional images of the samples and to quantify the film thickness. For steady-state transmittance measurements in the visible range, a deuterium-halogen light source (AvaLight-DHc-S, Avantes) and a fiber-coupled spectrometer (AvaSpec-ULS2048CL-EVO, Avantes) were used. The steady-state infrared absorbance was measured with an FTIR spectrometer (Nicolet 6700).
Transient absorption experiments. The mid-IR pump pulses were produced by a high-energy mid-IR optical parametric amplifier (Orpheus-One-HE), pumped by a Pharos amplifier (170 fs pulse duration, 1030 nm wavelength) with 0.9 mJ pulse energy at a 1.5 kHz repetition rate. The visible pump pulses at 700 nm were generated by a separate optical parametric amplifier (Orpheus-F) pumped by the same Pharos amplifier with 0.5 mJ input pulse energy. The broadband visible probe pulses were generated by a supercontinuum ns laser (NKT Compact) and triggered and electrically delayed by a digital delay generator (SRS DG645). The instrumental timing jitter of the probe pulses was eliminated by an event timer (PicoQuant MultiHarp 150P), which measures the actual arrival times of all the pump and probe pulses during the transient measurements. Cryogenic measurements were enabled by a liquid-nitrogen optical cryostat (Janis VPF-100) at a vacuum level better than 10^-4 Torr.
Supporting Information. Additional figures and discussion can be found in the Supporting Information.
[Partial figure caption: ... also applies to a. c, Transient ∆T/T spectral map for MAPbI3 on CaF2 measured using 700-nm pump pulses at 148 K, with a pump fluence of 2.5 mJ·cm^-2. d, ∆T/T kinetics plotted at three representative wavelengths (728 nm, 757 nm and 655 nm, as indicated by the black dashed lines in c).]
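As a rough illustration of how event-timer arrival times can replace a mechanical delay line, the sketch below bins probe shots by their measured delay from the most recent pump pulse and forms ∆T/T per bin. The data layout, the reference-spectrum handling, and the function name are assumptions for illustration only, not a description of the actual acquisition software.

```python
import numpy as np

def bin_by_measured_delay(probe_arrival_s, pump_arrival_s, spectra, ref_spectrum, bin_edges_s):
    """Sort probe shots into delay bins using the arrival times recorded by the
    event timer, then form dT/T = T_pumped / T_unpumped - 1 in each bin."""
    # index of the most recent pump pulse preceding each probe shot
    idx = np.searchsorted(pump_arrival_s, probe_arrival_s) - 1
    idx = np.clip(idx, 0, len(pump_arrival_s) - 1)
    delays = probe_arrival_s - pump_arrival_s[idx]
    maps = []
    for lo, hi in zip(bin_edges_s[:-1], bin_edges_s[1:]):
        sel = (delays >= lo) & (delays < hi)
        maps.append(spectra[sel].mean(axis=0) / ref_spectrum - 1.0)
    return np.array(maps)        # shape (n_delay_bins, n_wavelengths)
```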
Growth factor and galaxy bias from future redshift surveys: a study on parametrizations
Many experiments in the near future will test dark energy through its effects on the linear growth of matter perturbations. In this paper we discuss the constraints that future large-scale redshift surveys can put on three different parameterizations of the linear growth factor, and how these constraints will help rule out different classes of dark energy and modified gravity models. We show that a scale-independent bias can be estimated to a few percent per redshift slice by combining redshift distortions with the power spectrum amplitude, without the need of an external estimate. We find that the growth rate can be constrained to within 2-4% for each $\Delta z=0.2$ redshift slice, while the equation of state $w$ and the index $\gamma$ can be simultaneously estimated both to within 0.02. We also find that a constant dimensionless coupling between dark energy and dark matter can be constrained to be smaller than 0.14.
I. INTRODUCTION
The linear growth rate of matter perturbations is one of the most interesting observable quantities, since it allows us to explore the dynamical features related to the build-up of cosmic structures beyond the background expansion. For example, it can be used to discriminate between cosmological models based on Einstein's gravity and alternative models like f(R) modifications of gravity (see e.g. [1]) or multi-dimensional scenarios like the Dvali-Gabadadze-Porrati (DGP) [2] theory (e.g. [3] and references therein). In addition, the growth rate is sensitive to dark energy clustering or to a dark energy-dark matter interaction. For instance, in models with scalar-tensor couplings or in f(R) theories the growth rate at early epochs can be larger than in ΛCDM models and can acquire a scale dependence [4][5][6] (see for instance [7] for a review on dark energy). Simultaneous information on geometry and growth rate can be obtained by measuring the galaxy power spectrum or the 2-point correlation function and their anisotropies observed in redshift space. These redshift distortions arise from peculiar velocities that contribute, together with the recession velocities, to the observed redshift. The net effect is to induce a radial anisotropy in galaxy clustering that can be measured from standard two-point statistics like the power spectrum or the correlation function [8]. The amplitude of the anisotropy is determined by the typical amplitude of peculiar velocities which, in linear theory, is set by the growth rate of perturbations,
s ≡ d ln G / d ln a,
where G(z) ≡ δ(z)/δ(0) is the growth function, δ(z) the matter density contrast, and the scale factor a is related to the redshift z through a = (1 + z)^(-1). Since, however, we only observe the clustering of galaxies and not that of the matter, the quantity that is accessible to observations is actually
β = s / b,
where the bias b is the ratio of density fluctuations in galaxies and matter. The bias is in general a function of redshift and scale, but in the following we will consider it as a simple scale-independent function. Once the power spectrum is computed in k-space, the analysis proposed in [9] can be exploited to constrain not only geometry but also the growth rate (as pointed out in [10]; see also [3,11]), provided that the power spectrum is not marginalized over its amplitude.
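To fix notation, here is a small sketch of how the growth function and the redshift-distortion parameter follow from an assumed growth-rate history s(z). The function names are ours, and s_of_z, b_of_z stand for whichever parameterizations are adopted later in the paper.

```python
import numpy as np
from scipy.integrate import quad

def growth_function(z, s_of_z):
    """G(z) = delta(z)/delta(0) from the growth rate s = dlnG/dlna,
    using dln a = -dz/(1+z):  ln G(z) = -Int_0^z s(z')/(1+z') dz'."""
    lnG = -quad(lambda zp: s_of_z(zp) / (1.0 + zp), 0.0, z)[0]
    return np.exp(lnG)

def beta(z, s_of_z, b_of_z):
    """Redshift-distortion parameter beta(z) = s(z)/b(z), the combination
    actually measured from the anisotropy of redshift-space clustering."""
    return s_of_z(z) / b_of_z(z)

# Toy example: flat LCDM-like growth rate s(z) = Omega_m(z)**0.545
# s_of_z = lambda z: (0.25 * (1 + z) ** 3 / (0.25 * (1 + z) ** 3 + 0.75)) ** 0.545
# G1 = growth_function(1.0, s_of_z)
```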
In configuration space, the first analysis of the two-point correlation function explicitly aimed at discriminating models of modified gravity from the standard ฮ›CDM scenario has been performed by [12]. Currently, there are several experimental estimates of the growth factor derived from the analysis of the redshift space distortions [12][13][14][15][16][17][18][19], from the redshift evolution of the rms mass fluctuation ฯƒ 8 inferred from Lyฮฑ absorbers [20] and from the power spectrum of density fluctuations measured from galaxies' peculiar velocities [21]. Current uncertainties are still too large to allow these measurements to discriminate among alternative cosmological scenarios. (e.g. [22,23]). On-going redshift surveys like VIPERS [24] or BOSS [25] will certainly provide more stringent constraint and will be able to test those models that deviate most from the standard cosmological model. However, only next generation large-scale redshift surveys at z โ‰ˆ 1 and beyond like EUCLID [26] or BigBOSS [27] will provide an efficient way to discriminate competing dark energy models. The growth rate s clearly depends on the cosmological model. It has been found in several works [28][29][30][31][32] that a simple yet effective parameterization of s captures the behavior of a large class of models. Putting where ฮฉ m (z) is the matter density in units of the critical density as a function of redshift, a value ฮณ โ‰ˆ 0.545 reproduces well the ฮ›CDM behavior while departures from this value characterize different models. For instance the DGP is well approximated by ฮณ โ‰ˆ 0.68 [33,34] while viable models of f (R) are approximated by ฮณ โ‰ˆ 0.4 for small scales and small redshifts [5,6]. This simple parameterization is however not flexible enough to accommodate all cases. A constant ฮณ cannot for instance reproduce a growth rate larger than s = 1 in the past (as we have in f (R) and scalar-tensor models) allowing at the same time s < 1 at the present epoch if ฮฉ m โ‰ค 1. Even in standard cases, a better approximation requires a slowly-varying, but not strictly constant, ฮณ. In addition, the measures of the growth factor obtained from redshift distortions require an estimate of the galaxy bias, which can be obtained either independently, using higher order statistics (e.g. [19,35]) or inversion techniques [36], or self consistently, by assuming some reasonable form for the bias function a priori (for instance, that the bias is independent of scale, as we will assume here). The goal of this paper is to forecast the constraints that future observations can put on the growth rate. In particular we use representative assumptions for the parameters of the EUCLID survey to provide a baseline for future experiments and we focus on the following issues. i) We assess how well one can constrain the bias function from the analysis of the power spectrum itself and evaluate the impact that treating bias as a free parameter has on the estimates of the growth factor. We compare the results with those obtained under the more popular approach of fixing the bias factor (and its error) to some independently-determined value. ii) We estimate how errors depend on the parameterization of the growth factor and on the number and type of degrees of freedom in the analysis. iii) We explicitly explore the case of coupling between dark energy and dark matter and assess the ability of measuring the coupling constant. We do this in the context of the Fisher Matrix analysis. 
This is a common approach that has been adopted in several recent works, some of which exploring the case of a EUCLID-like survey as we do. We want to stress here that this work is, in fact, complementary to those of the other authors. Unlike most of these works, here we do not try to optimize the parameter of the EUCLID survey in order to improve the constraints on the relevant parameters, as in [37]. Instead, we adopt a representative sets of parameters that describe the survey and derive the expected errors on the interesting quantities. In addition, unlike [38] and [39], we do not explicitly aim to study the correlation between the parameters that describe the geometry of the system and the growth parameters, although in our approach we also take into account the degeneracy between geometry and growth. Finally, the main results of this paper are largely complementary to the work of [40] that perform a more systematic error analysis that does not cover the main issues of our work. Although, as we mentioned, in general s might depend on scale, we limit this paper to an exploration of time-dependent functions only. Forecasts for specific forms of scale-dependent growth factor motivated by scalar-tensor models are in progress and will be presented elsewhere. The layout of the paper is as follows. In the next section we will introduce the different parameterizations adopted for the growth rate and for the equation of state of dark energy, together with the models assumed for the biasing function, and describe the different cosmological models we aim to discriminate. In sec. III we will briefly review the Fisher matrix method for the power spectrum and define the adopted fiducial model. In sec. IV we will describe the characteristics of the galaxy surveys considered in this work, while in sec. V we will report our results on the forecast errors on the parameters of interest. Finally, in sec. VI we will draw our conclusions and discuss the results. II. MODELS The main scope of this work is to quantify the ability of future redshift surveys to constrain the growth rate of density fluctuations. In particular we want to quantify how this ability depends on the parameterization assumed for s and for the equation of state of the dark energy w and on the biasing parameter. For this reason we explore different scenarios detailed below. A. Equation of state โ€ข w-parameterization. In order to represent the evolution of the equation of state parameter w, we use the popular CPL parameterization [41,42] w(z) = w 0 + w 1 z 1 + z . As a special case we will also consider the case of a constant w. B. Growth Rate As anticipated, in this work we assume that the growth rate, s, is a function of time but not of scale. Here we explore three different parameterizations of s: โ€ข s-parameterization. This is in fact a non-parametric model in which the growth rate itself is modeled as a step-wise function s(z) = s i , specified in different redshift bins. The errors are derived on s i in each i-th redshift bin of the survey. โ€ข ฮณ-parameterization. As a second case we assume where the ฮณ(z) function is parametrized as As shown by [43,44], this parameterization is more accurate than that of eq. (3) for both ฮ›CDM and DGP models. Furthermore, this parameterization is especially effective to distinguish between a wCDM model (i.e. 
a dark energy model with a constant equation of state) that has a negative ฮณ 1 (โˆ’0.020 ฮณ 1 โˆ’0.016, for a present matter density 0.20 โ‰ค ฮฉ m,0 โ‰ค 0.35) and a DGP model that instead, has a positive ฮณ 1 (0.035 < ฮณ 1 < 0.042). In addition, modified gravity models show a strongly evolving ฮณ(z) [5,43,45], in contrast with conventional Dark Energy models. As a special case we also consider ฮณ = constant (only when w also is assumed constant), to compare our results with those of previous works. โ€ข ฮท-parameterization. To explore models in which perturbations grow faster than in the ฮ›CDM case, like in the case of a coupling between dark energy and dark matter [4], we consider a model in which ฮณ is constant and the growth rate varies as where ฮท quantifies the strength of the coupling. The example of the coupled quintessence model worked out by [4] illustrates this point. In that model, the numerical solution for the growth rate can be fitted by the formula (7), with ฮท = cฮฒ 2 c , where ฮฒ c is the dark energy-dark matter coupling constant and best fit values ฮณ = 0.56 and c = 2.1. In this simple case, observational constraints over ฮท can be readily transformed into constraints over ฮฒ c . C. Galaxy Biasing In the analysis of the redshift distortions, s(z) is degenerate with the bias function b(z). In absence of a well-established theory of galaxy formation and evolution, most analysis assume some arbitrary functional form for b(z). However, biasing needs to be neither deterministic nor linear. Stochasticity in galaxy biasing is supposed to have little impact on two-point statistics, at least on scales significantly larger than those involved with galaxy evolution processes [46]. On the other hand, deviations from linearity (which imply scale dependency) might not be negligible. Current observational constraints based on self consistent biasing estimators [19,36] show that nonlinear effects are of the order of a few to โˆผ 10 %, depending on the scale and the galaxy type [35,47]. To account for current uncertainties in both modeling and measuring galaxy bias we consider the following choices for the functional form of b: โ€ข Redshift dependent bias. We assume b(z) = โˆš 1 + z (already used in [48]) since this function provides a good fit to H ฮฑ line galaxies with luminosity L Hฮฑ = 10 42 erg โˆ’1 s โˆ’1 h โˆ’2 modeled by [49] using the semi-analytic GALF ORM models of [50]. We consider H ฮฑ line objects since they will likely constitute the bulk of galaxies in the next generation slitless spectroscopic surveys like EUCLID. This H ฮฑ luminosity roughly corresponding, at z = 1.5, to a limiting flux of f Hฮฑ โ‰ฅ 4 ร— 10 โˆ’16 erg cm โˆ’2 s โˆ’1 . โ€ข Constant bias. For the sake of comparison, we will also consider the case of constant b = 1 corresponding to the rather unphysical case of a redshift-independent population of unbiased mass tracers. D. Reference Cosmological Models As it will be better explained in the next section, to perform the Fisher Matrix analysis we need to adopt a fiducial cosmological model. We choose the one recommended by the Dark Energy Task Force (DETF) [51]. In this "pseudo" ฮ›CDM model the growth rate values are obtained from eq. (3) with ฮณ = 0.545 and ฮฉ m (z) is given by the standard evolution where (the subscript 0 will generally denotes the present value) Then ฮฉ m (z) is completely specified by setting ฮฉ m,0 = 0.25, ฮฉ k = 0, w 0 = โˆ’0.95, w 1 = 0. 
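For concreteness, the parameterizations introduced in this section can be written down as follows. This is a sketch with our own function names and the DETF-like fiducial values; the γ(z) form is the interpolating expression consistent with the quoted γ1 values, and the η-parameterization formula (not reproduced in the text) is assumed to be s = Ω_m(z)^γ (1 + η), consistent with η = c β_c^2 quoted above.

```python
import numpy as np

def omega_m(z, om0=0.25, w=-0.95):
    """Omega_m(z) for the flat, constant-w fiducial background."""
    e2 = om0 * (1 + z) ** 3 + (1 - om0) * (1 + z) ** (3 * (1 + w))
    return om0 * (1 + z) ** 3 / e2

def s_gamma(z, gamma0=0.545, gamma1=0.0):
    """gamma-parameterization with gamma(z) = gamma0 + gamma1 * z / (1 + z)."""
    gamma_z = gamma0 + gamma1 * z / (1.0 + z)
    return omega_m(z) ** gamma_z

def s_eta(z, gamma=0.56, eta=0.0):
    """eta-parameterization (assumed form): s = Omega_m(z)**gamma * (1 + eta)."""
    return omega_m(z) ** gamma * (1.0 + eta)

def bias(z):
    """Adopted redshift-dependent bias for H-alpha emitters, b(z) = sqrt(1 + z)."""
    return np.sqrt(1.0 + z)
```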
We wish to stress that regardless of the parameterization adopted, our fiducial cosmology is always chosen as the DETF one. In particular we choose as fiducial values ฮณ 1 = 0 and ฮท = 0, when the corresponding parameterizations are employed. One of the goals of this work is to assess whether the analysis of the power spectrum in redshiftspace can distinguish the fiducial model from alternative cosmologies, characterized by their own set of parameters (apart from ฮฉ m,0 which is set equal to 0.25 for all of them). The alternative models that we consider in this work are: โ€ข DGP model. We consider the flat space case studied in [52]. When we adopt this model then we set ฮณ 0 = 0.663, ฮณ 1 = 0.041 [43] or ฮณ = 0.68 [33] and w = โˆ’0.8 when ฮณ and w are assumed constant. โ€ข f (R) model. Here we consider the one proposed in [53], depending on two parameters, n and ฮป, which we fix to n = 2 and ฮป = 3. In this case we assume ฮณ 0 = 0.43, ฮณ 1 = โˆ’0.2, values that apply quite generally in the limit of small scales (provided they are still linear, see [5]) or ฮณ = 0.4 and w = โˆ’0.99. โ€ข coupled dark energy (CDE) model. This is the coupled model proposed by [54,55]. In this case we assume ฮณ 0 = 0.56, ฮท = 0.056 (this value comes from putting ฮฒ c = 0.16 as coupling, which is of the order of the maximal value allowed by CMB constraints) [56]. As already explained, this model cannot be reproduced by a constant ฮณ. III. FISHER MATRIX ANALYSIS In order to constrain the parameters, we use the Fisher matrix method [57] (see [58] for a review), that we apply to the power spectrum analysis in redshift space following [9]. For this purpose we need an analytic model of the power spectrum in redshift space as a function of the parameters that we wish to constrain. The analytic model is obtained in three steps. (i ) First of all we compute with CMBFAST [59] the linear power spectrum of the matter in real space at z = 0, P 0r (k), choosing a reference cosmology where the parameters to be given as input (i.e. ฮฉ m,0 h 2 , ฮฉ b,0 h 2 , h, n s also employed in the Fisher matrix analysis, plus the other standard parameters required by the CMBFAST code) are set to the values given in the III column of Tab. I while for the normalization of the spectrum we use ฯƒ 8 = 0.8. (ii ) Second, we model the linear redshift-space distortions as Here the subscript F indicates quantities evaluated on the fiducial model. In this expression H(z) is the expansion history in Eq. 9, D(z) is the angular diameter distance, G(z) the growth factor and P s (z) represents a scale-independent offset due to imperfect removal of shot-noise. Finally ฮฒ(z) is the redshift distortion parameter and the term (1 + ฮฒยต 2 ) 2 is the factor invoked by [60] to account for linear distortion in the distant-observer's approximation, where ยต is the direction cosine of the wavenumber k with respect to the line of sight. As shown in [10,11] and recently in [37], the inclusion of growth rate information reduces substantially the errors on the parameters, improving the figure of merits. (iii ) As a third and final step we account for nonlinear effects. On scales larger than (โˆผ 100 h โˆ’1 Mpc) where we focus our analysis, nonlinear effects can be represented as a displacement field in Lagrangian space modeled by an elliptical Gaussian function. Therefore, following [61,62], to model nonlinear effect we multiply P 0r (k) by the factor where ฮฃ โŠฅ and ฮฃ represent the displacement across and along the line of sight, respectively. 
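A schematic implementation of the observed power spectrum assembled in steps (i)-(iii) is given below: geometric rescaling with respect to the fiducial model, the Kaiser factor (1 + βµ²)², shot noise, and the Gaussian damping with displacements Σ⊥ and Σ∥. Since the equation itself is not reproduced in the text, the exact prefactors follow the standard expression of this type and should be read as our assumption.

```python
import numpy as np

def p_obs(k, mu, p0r_interp, G, s, b, H, D, HF, DF, ps_shot, sigma0=11.0):
    """Schematic observed power spectrum P_obs(k, mu) in one redshift bin (assumed
    form): geometric prefactor x b^2 x Kaiser factor x G^2 x P_0r(k) x damping + shot noise."""
    beta = s / b
    sigma_perp = sigma0 * G            # displacement across the line of sight
    sigma_par = sigma0 * G * (1 + s)   # displacement along the line of sight
    damping = np.exp(-0.5 * k**2 * (sigma_perp**2 * (1 - mu**2) + sigma_par**2 * mu**2))
    geom = (DF / D) ** 2 * (H / HF)    # fiducial-to-true rescaling of distances/expansion
    return geom * b**2 * (1 + beta * mu**2) ** 2 * G**2 * p0r_interp(k) * damping + ps_shot
```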
They are related to the growth factor G and to the growth rate s through ฮฃ โŠฅ = ฮฃ 0 G and ฮฃ = ฮฃ 0 G(1 + s). The value of ฮฃ 0 is proportional to ฯƒ 8 . For our reference cosmology where ฯƒ 8 = 0.8 [63], we have ฮฃ 0 = 11 h โˆ’1 Mpc. The observed power spectrum in a given redshift bin depends therefore on a number of parameters, denoted collectively as p i , such as the Hubble constant at present h, the reduced matter and baryon fractions at present, ฮฉ m,0 h 2 and ฮฉ b,0 h 2 , the curvature density parameter ฮฉ k , the spectral tilt n s plus the parameters that enter in the parameterizations described in the previous section: w 0 , w 1 (or simply w); ฮณ 0 , ฮณ 1 (or ฮณ) and ฮท. They are listed in Tab. I and are referred to as "Cosmological parameters". These parameters will be left free to vary while we always fix ฯƒ 8 =0.8 since the overall amplitude is degenerate with growth rate and bias. The other free parameters depend on the redshift. They are listed in the lower part of Tab. I and include the expansion history H(z), the growth factor G(z), the angular diameter distance D(z), the shot noise P s (z), the growth rate s(z), the redshift distortion parameter ฮฒ(z) and the galaxy bias b(z). Given the model power spectrum we calculate, numerically or analytically, the derivatives where is the effective volume of the survey sampled at the scale k along the direction ยต. V survey and n represent the volume of the survey and the mean number density of galaxies in each redshift bin. As a fiducial model we assume a "pseudo" ฮ›CDM with w 0 = โˆ’0.95; the differences with the standard w 0 = โˆ’1.0 ฮ›CDM model are rather small. For example, in the case of the ฮณ-parameterization, our fiducial model has ฮณ 0 = 0.545, ฮณ 1 = 0 whereas the standard ฮ›CDM model has ฮณ 0 = 0.556, ฮณ 1 โˆ’ 0.018 [43]. To summarize, our fiducial model is the same model recommended by the Dark Energy Task Force [51], i.e.: The fiducial values for the redshift dependent parameters are computed in every bin through the standard Friedmann-Robertson-Walker relations At this point our analysis is performed in two ways, according to the choice of z-dependent parameters that characterize the power spectrum: โ€ข Internal bias method. We assume some fiducial form for b(z) (z-dependent or constant) and express the growth function G(z) and the redshift distortion parameter ฮฒ(z) in terms of the growth rate s (see eqs. (22), (2)). When we compute the derivatives of the spectrum (eq. (12)), b(z) and s(z) are considered as independent parameters in each redshift bin. In this way we can compute the errors on b (and s) self consistently by marginalizing over all other parameters. In this case we also assume the same forms for b(z) as in the Internal bias case but we do not explicit G(z) and ฮฒ(z) in terms of s. The independent parameters are now the product G(z) ยท b(z) (if we considered them separately, the Fisher matrix would result singular) and ฮฒ(z). In this case we compute the errors over ฮฒ(z) marginalizing over all other parameters. Since we also marginalize over (G ยท b) 2 , in this case we cannot estimate the error over b from the Fisher matrix. Thus, in order to obtain the error over s (related to ฮฒ through s = ฮฒ ยท b) with standard error propagation, we need to assume an "external" error for b(z). We allow the relative error โˆ†b/b to be either 1% or 10%, two values that bracket the ranges of expected errors contributed by model uncertainties and deviations from linear biasing. IV. 
MODELING THE REDSHIFT SURVEY The main goals of next generation redshift surveys will be to constrain the Dark Energy parameters and to explore models alternative to standard Einstein Gravity. For these purposes they will need to consider very large volumes that encompass z โˆผ 1, i.e. the epoch at which dark energy started dominating the energy budget, spanning a range of epochs large enough to provide a sufficient leverage to discriminate among competing models at different redshifts. The additional requirement is to observe some homogeneous class of objects that are common enough to allow a dense sampling of the underlying mass density field. As anticipated in the introduction, in this paper we consider as a reference case the spectroscopic survey proposed by the EUCLID collaboration [26]. We stress that our aim is not to focus on this particular redshift survey and assess how the constraints on the relevant parameters depends on the survey characteristics in order to optimize future observational strategies. On the contrary, under the hypothesis that next-generation space-based all-sky redshift surveys will be similar to the EUCLID spectroscopic survey, we consider the latter as a reference case and estimate how the expected errors on the bias, growth rate, coupling constant and other relevant quantities will change when one consider slightly different observational setups. For this purpose we take advantage of the huge effort made by the EUCLID team to simulate the characteristic of the target objects and compute the expected selection function and detection efficiency of the survey and adopt the same survey parameters presented in [65]. Here we consider a survey covering a large fraction of the extragalactic sky (|b| โ‰ฅ 20 โ€ข ), corresponding to โˆผ 20000 deg 2 capable to measure a large number of galaxy redshifts out to z โˆผ 2. A promising observational strategy is to target H ฮฑ emitters at near-infrared wavelengths (which implies z > 0.5) since they guarantee both relatively dense sampling (the space density of this population is expected to increase out to z โˆผ 2) and an efficient method to measure the redshift of the object. The limiting flux of the survey should be the tradeoff between the requirement of minimizing the shot noise, the contamination by other lines (chiefly among them the [O II ] line), and that of maximizing the so-called efficiency ฮต, i.e. the fraction of successfully measured redshifts. To minimize shot noise one should obviously strive for a low flux. Indeed, the authors in [65] found that a limiting flux f Hฮฑ โ‰ฅ 1 ร— 10 โˆ’16 erg cm โˆ’2 s โˆ’1 would be able to balance shot noise and cosmic variance out to z = 1.5. However, simulated observations of mock H ฮฑ galaxy spectra have shown that ฮต ranges between 30 % and 60% (depending on the redshift) for a limiting flux f Hฮฑ โ‰ฅ 3 ร— 10 โˆ’16 erg cm โˆ’2 s โˆ’1 [26]. Moreover, contamination from [O II ] line drops from 12% to 1% when the limiting flux increases from 1 ร— 10 โˆ’16 to 5 ร— 10 โˆ’16 [65]. Taking all this into account we adopt a conservative choice and consider three different surveys characterized by a limiting flux of 3, 4 and 5 ร— 10 โˆ’16 ergcm โˆ’2 s โˆ’1 . We use the number density of H ฮฑ galaxies at a given redshift, n(z), estimated in [65] using the latest empirical data and obtained by integrating the H ฮฑ luminosity function above the minimum luminosity set by the limiting flux L Hฮฑ,min. = 4ฯ€D L (z) 2 f Hฮฑ where D L (z) is the luminosity distance. 
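As a small worked example of the luminosity cut quoted above, the sketch below evaluates L_Hα,min = 4π D_L(z)² f_Hα for a flat constant-w background. The numerical integration of D_L and the assumed value of h are ours; the limiting fluxes are those considered in the text.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light in km/s

def luminosity_distance_cm(z, om0=0.25, w0=-0.95, h=0.7):
    """D_L(z) in cm for a flat w0CDM background (fiducial parameters assumed)."""
    H0 = 100.0 * h  # km/s/Mpc
    invE = lambda zz: 1.0 / np.sqrt(om0 * (1 + zz) ** 3 + (1 - om0) * (1 + zz) ** (3 * (1 + w0)))
    dc_mpc = (C_KM_S / H0) * quad(invE, 0.0, z)[0]
    return (1 + z) * dc_mpc * 3.0857e24   # Mpc -> cm

def l_halpha_min(z, flux_limit=3e-16):
    """Minimum H-alpha luminosity (erg/s) detectable at redshift z for a limiting
    flux given in erg cm^-2 s^-1."""
    return 4.0 * np.pi * luminosity_distance_cm(z) ** 2 * flux_limit

# e.g. l_halpha_min(1.5, 3e-16) gives the luminosity cut entering n(z)
```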
To obtain the effective number density one has to account for the success rate in measuring galaxy redshifts from H ฮฑ lines. The effective number density is then obtained by multiplying n(z) by the already mentioned efficiency, ฮต. In the range of redshifts and fluxes considered in this work the value of ฮต varies in the interval [30%, 50%] (see Fig. A.1.4 of [26]). In an attempt to bracket current uncertainties in modeling galaxy surveys, we consider the following choices for the survey parameters: โ€ข Reference case (ref. The total number of observed galaxies ranges from 3 ยท 10 7 (pess.) to 9 ยท 10 7 (opt.). For all cases we assume that the relative error on the measured redshift is ฯƒ z = 0.001, independent of the limiting flux of the survey. V. RESULTS In this section we present the main result of the Fisher matrix analysis that we split into two sections to stress the different emphasis given in the two approaches. We note that in all tables below we always quote errors at 68% probability level and draw in the plots the probability regions at 68% and/or 95% (denoted for shortness as 1 and 2ฯƒ values). Moreover, in all figures, all the parameters that are not shown have been marginalized over or fixed to a fiducial value when so indicated. A. s-parameterization This analysis has two main goals: that of figuring out our ability to estimate the biasing parameter and that of estimating the growth rate with no assumptions on its redshift dependence. The total number of parameters that enter in the Fisher matrix analysis is 45: 5 parameters that describe the background cosmology (ฮฉ m,0 h 2 , ฮฉ b,0 h 2 , h, n, ฮฉ k ) plus 5 z-dependent parameters specified in 8 redshift bins evenly spaced in the range z = [0.5, 2.1]. They are P s (z), D(z), H(z), s(z), b(z) in the internal bias case, while we have ฮฒ(z) and G(z) ยท b(z) in the place of s(z) and b(z) when we use the external bias method. The subsequent analysis depends on the bias method adopted. โ€ข In case of the internal bias method, the fiducial growth function G(z) in the (i + 1)-th redshift bin is evaluated from a step-wise, constant growth rate s(z) as To obtain the errors on s i and b i we compute the elements of the Fisher matrix and marginalize over all other parameters. In this case one is able to obtain, self-consistently, the error on the bias and on the growth factor at different redshifts, as detailed in Tab. III and Tab. IV respectively. Tab. III illustrates one important result: through the analysis of the redshift-space galaxy power spectrum in a next-generation EUCLID-like survey, it will be possible to measure galaxy biasing in โˆ†z = 0.2 redshift bins with less than 3.5% error, provided that the bias function is independent on scale. This fact can be appreciated in Fig. 1 in which we show the expected relative error as a function of redshift for both b(z) functions and for the survey Pessimistic case. Errors are very similar in all but the outermost redshift shells. We show the Pessimistic case since with a more The precision in measuring the bias has a little dependence on the b(z) form: errors are very similar (the discrepancy is less than 1%) but in the outermost redshift shells (where however is less than 3%). favorable survey configuration, like the Reference case, the errors would be almost identical. In addition we find that the precision in measuring the bias has a little dependence on the b(z) form. 
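The Fisher-matrix machinery used here (derivatives of the spectrum weighted by the effective volume, followed by marginalization) can be sketched schematically as follows. The 1/(8π²) prefactor and the grid layout are the standard convention for this type of analysis and are our assumption rather than a transcription of the paper's equations.

```python
import numpy as np

def fisher_matrix(dlnP_dp, veff, k, mu):
    """Schematic Fisher matrix for one redshift bin:
    F_ij = (1/8 pi^2) Int dmu Int k^2 dk  (dlnP/dp_i)(dlnP/dp_j) V_eff(k, mu).
    Each dlnP_dp[i] and veff is an array on the (k, mu) grid."""
    npar = len(dlnP_dp)
    F = np.zeros((npar, npar))
    for i in range(npar):
        for j in range(npar):
            integrand = dlnP_dp[i] * dlnP_dp[j] * veff * k[:, None] ** 2
            F[i, j] = np.trapz(np.trapz(integrand, mu, axis=1), k) / (8.0 * np.pi**2)
    return F

def marginalized_errors(F):
    """1-sigma error on each parameter after marginalizing over all the others:
    square root of the diagonal of the inverse Fisher matrix."""
    return np.sqrt(np.diag(np.linalg.inv(F)))
```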
The largest discrepancy between the b(z) = 1 and b(z) = โˆš 1 + z cases is โˆผ 3% and refers to the expected errors on the growth rate at z = 2 in the Pessimistic case. Differences are typically much smaller for all other parameters or, for s(z) at lower redshifts or with a more favorable survey setup. Given the robustness of the results on the choice of b(z) in the following we only consider the b(z) = โˆš 1 + z case. In Fig. 2 we show the errors on the growth rate s as a function of redshift, overplotted to our fiducial ฮ›CDM (green solid curve). The three sets of error bars are plotted in correspondence of the 8 redshift bins and refer (from left to right) to the Optimistic, Reference and Pessimistic cases, respectively. The other curves show the expected growth rate in three alternative cosmological models: flat DGP (red dashed curve), f (R) (blue dotted curve) and CDE (purple, dot-dashed curve). This plot clearly illustrates the ability of next generation surveys to distinguish between alternative models, even in the less favorable choice of survey parameters. โ€ข In case of the external bias method we marginalize over the overall amplitude (G ยท b) 2 . Since, in this case, we cannot find errors self-consistently, we assume that bias has been determined a priori with errors per redshift bin of 1% and 10%, two values that should bracket the expected range of uncertainties. We note that the external bias method can be considered more conservative, especially in the case of large errors although we see no obvious reason why it should be preferred to the internal bias method that seems to provide similar results. Indeed, the errors on s relative to the 1% bias error listed in Table V are quite similar to those of the internal bias case. As expected, errors on s increase significantly when the bias is known with 10% accuracy rather than 1%. However, even in this case, one keeps the ability of distinguishing between most of the competing cosmological models at 1ฯƒ level, as shown in Fig. 3. The main results of this section can be summarized as follows. 1. The ability of measuring the biasing function is not too sensitive to the characteristic of the survey (b(z) can be constrained to within 1.5% in the Optimistic scenario and up to 3.5% in the Pessimistic one) provided that the bias function is independent on scale. Moreover, the precision in measuring the bias has a very little dependence on the b(z) form. 2. The growth rate s can be estimated to within 1-3% in each bin for the Reference case survey with Table III: 1ฯƒ marginalized errors for the bias in each redshift bin obtained with the "internal bias" method. Table IV: 1ฯƒ marginalized errors for the growth rates in each redshift bin (Fig. 2) obtained with the "internal bias" method. no need of estimating the bias function b(z) from some dedicated, independent analysis using higher order statistics [19] or full-PDF analysis [36]. 3. If the the bias were measured to within 1% in each slice, then the error over s would be very similar (just 1-2% larger) to that obtained by the internal estimate of b(z). Table V: 1ฯƒ marginalized errors for the growth rates in each redshift bin (Fig. 3) obtained with the "external bias" method. ) and a model with coupling between dark energy and dark matter (purple, dot-dashed curve). In this case it will be possible to distinguish these models with next generation data. B. Other parameterizations. 
In this section we assess the ability of estimating s(z) when it is expressed in one of the parametrized forms described in Section II B. More specifically, we focus on the ability of determining ฮณ 0 and ฮณ 1 , in the context of the ฮณ-parameterization and ฮณ, ฮท in the ฮท-parameterization. In both cases the Fisher matrix elements have been estimated by expressing the growth factor as where for the ฮณ-parameterization we fix ฮท = 0. In this section we adopt the internal bias approach and assume that b(z) = โˆš 1 + z since, as we have checked, in the case of b(z) = 1 one obtains very similar results. Figure 3: Expected constraints on the growth rates in each redshift bin (using the "external bias" method), assuming for the bias a relative error of 1% (upper panel) and 10% (lower panel). For each z the central error bars refer to the Reference case while those referring to the Optimistic and Pessimistic case have been shifted by -0.015 and +0.015 respectively. The growth rates for four different models are also plotted: ฮ›CDM (green solid curve), flat DGP (red dashed curve), f (R) model (blue dotted curve) and a model with coupling between dark energy and dark matter (purple, dot-dashed curve). Even in the case of large errors (10%) for the bias it will be possible to distinguish among three of these models with next generation data. โ€ข ฮณ-parameterization. We start by considering the case of constant ฮณ and w in which we set ฮณ = ฮณ F = 0.545 and w = w F = โˆ’0.95. As we will discuss in the next Section, this simple case will allow us to cross-check our results with those in the literature. In Fig. 4 we show the marginalized probability regions, at 1 and 2ฯƒ levels, for ฮณ and w. The regions with different shades of green illustrates the Reference case for the survey whereas the blue long-dashed and the black short-dashed ellipses refer to the Optimistic and Pessimistic cases, respectively. Errors on ฮณ and w are listed in Tab. VI together with the corresponding figures of merit [FOM] defined to be the squared inverse of the Fisher matrix determinant and therefore equal to the inverse of the product of the errors in the pivot point, see [51]. Contours are centered on the fiducial model. The blue triangle and the blue square represent the flat DGP and the f (R) models' predictions, respectively. It is clear that, in the case of constant ฮณ and w, the measurement of the growth rate in a EUCLID-like survey will allow us to discriminate among these models. These results have been obtained by fixing the Fig. 4 and figures of merit. Here we have fixed ฮฉ k to its fiducial value. curvature to its fiducial value ฮฉ k = 0. If instead, we consider curvature as a free parameter and marginalize over, the errors on ฮณ and w increase significantly, as shown in Table VII, and yet the precision is good enough to distinguish the different models. For completeness, we also computed the fully marginalized errors over the other Cosmological parameters for the reference survey, given in Tab. VIII. As a second step we considered the case in which ฮณ and w evolve with redshift according to eqs. (6) and (4) and then we marginalize over the parameters ฮณ 1 , w 1 and ฮฉ k . The marginalized probability contours are shown in Fig. 5 in which we have shown the three survey setups in three different panels to avoid overcrowding. Dashed contours refer to the z-dependent parameterizations while red, continuous contours refer to the case of constant ฮณ and w obtained after marginalizing over ฮฉ k . 
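For reference, a minimal sketch of how a figure of merit of the kind quoted in the tables can be obtained from the 2x2 marginalized covariance of a parameter pair (e.g. γ and w). We read the definition as the inverse of the product of the decorrelated (pivot-point) errors, i.e. 1/sqrt(det C); this reading is an interpretation on our part.

```python
import numpy as np

def figure_of_merit(cov_2x2):
    """FOM = 1 / sqrt(det C) for the 2x2 marginalized covariance of a parameter pair;
    equal to the inverse product of the errors in the decorrelated (pivot) basis."""
    return 1.0 / np.sqrt(np.linalg.det(cov_2x2))

# e.g. with F the full Fisher matrix and idx = [i_gamma, i_w]:
# cov = np.linalg.inv(F)[np.ix_(idx, idx)]
# fom = figure_of_merit(cov)
```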
Allowing for time dependency increases the size of the confidence ellipses since the Fisher matrix analysis now accounts for the additional uncertainties in the extra-parameters ฮณ 1 and w 1 ; marginalized error values are in columns ฯƒ ฮณmarg,1 , ฯƒ wmarg,1 of Tab. IX. We note, however, that errors are still small enough to distinguish the fiducial model from the f (R) and DGP scenarios. We have also projected the marginalized ellipses for the parameters ฮณ 0 and ฮณ 1 and calculated their marginalized errors and figures of merit, which are reported in Tab. X. The corresponding uncertainties contours are shown in Fig. 6. Once again we overplot the expected values in the f (R) and DGP scenarios to stress the fact that one is expected to be able to distinguish among competing models, irrespective on the survey's precise characteristics. As a further test we have estimated how the errors on ฮณ 0 depend on the number of parameters explicitly involved in the Fisher matrix analysis. Fig. 7 shows the expected 1ฯƒ errors on ฮณ (Yaxis) as a function of the number of parameters that are fixed when computing the element of the Fisher matrix (the different combinations of the parameters are shown on the top of the histogram elements). We see that error estimates can decrease up to โˆผ 50 % when parameters are fixed to some fiducial value, or are determined independently. We have repeated the same analysis as for the ฮณ-parameterization taking into account the possibility of coupling between DE and DM i.e. we have modeled the growth factor according to eq. (7) and the dark energy equation of state as in eq. (4) and marginalized over all parameters, including ฮฉ k . The marginalized errors are shown in columns ฯƒ ฮณmarg,2 , ฯƒ wmarg,2 of Tab. IX and the significance contours are shown in the three panels of Fig. 8 which is analogous to Fig. 5. The uncertainty ellipses are now larger than in the case of the ฮณ-parameterization and show that DGP and f (R) models could be rejected at > 1ฯƒ level only if the redshift survey parameter will be more favorable than in the Pessimistic case. Marginalizing over all other parameters we can compute the uncertainties in the ฮณ and ฮท parameters, as listed in Tab. XI. The relative confidence ellipses are shown in the left panel of Fig. 9. This plot shows that next generation EUCLID-like surveys will be able to distinguish the reference model with no coupling (central, red dot) to the CDE model proposed by [56] (white square) only at the 1-1.5 ฯƒ level. Finally, in order to explore the dependence on the number of parameters and to compare our results to previous works, we also draw the confidence ellipses for w 0 , w 1 with three different methods: i) fixing ฮณ 0 , ฮณ 1 to their fiducial values and marginalizing over all the other parameters; ii) marginalizing over all parameters plus ฮณ 0 , ฮณ 1 but fixing ฮฉ k ; iii) marginalizing over all parameters but w 0 , w 1 . As one can see in Fig. 10 and Tab. XII this progressive increase in the number of marginalized parameters reflects in a widening of the ellipses with a consequent decrease in the figures of merit. These results are in agreement with those of other authors (e.g. [37,40]). The results obtained this Section can be summarized as follows. 1. If both ฮณ and w are assumed to be constant and setting ฮฉ k = 0 then, a redshift survey described by our Reference case will be able to constrain these parameters to within 4% and 2%, respectively. 3. 
If w and ฮณ are considered redshift-dependent and parametrized according to eqs (6) and (4) then the errors on ฮณ 0 and w 0 obtained after marginalizing over ฮณ 1 and w 1 increase by a factor โˆผ 4, 5, i.e. we expect to measure ฮณ 0 and w 0 with a precision of 13-15% and 11-14% respectively, where the interval reflects the uncertainties in the characteristic of the survey. With this precision we will be able to distinguish the fiducial model from the DGP and f (R) scenarios with more than 2ฯƒ significance. 4. The ability to discriminate these models with a significance above 2ฯƒ is confirmed by the confidence contours drawn in the ฮณ 0 -ฮณ 1 plane, obtained after marginalizing over all other parameters. 5. If we allow for a coupling between dark matter and dark energy, and we marginalize over ฮท rather than over ฮณ 1 , then the errors on ฮณ 0 and w 0 are almost identical to those obtained in the case of the ฮณ-parameterization. However, our ability in separating the fiducial model from the CDE model is significantly hampered: the confidence contours plotted in the ฮณ-ฮท plane show that discrimination can only be performed wit 1-1.5ฯƒ significance. VI. CONCLUSIONS In this paper we addressed the problem of determining the growth rate of density fluctuations from the estimate of the galaxy power spectrum at different epochs in future redshift survey. As a reference case we have considered the proposed EUCLID spectroscopic survey modeled according to the latest, publicly available survey characteristics [26,65]. In this work we focused on a few issues that we regard as very relevant and that were not treated in previous, analogous Fisher Matrix analysis mainly aimed at optimizing the survey setup and the observational strategy. These issues are: i) the ability in measuring self-consistently galaxy bias with no external information and the impact of treating the bias as an extra free parameter on the error budget; ii) the impact of choosing a particular parameterization in determining the growth rate and in distinguishing dark energy models with very different physical origins (in particular we focus on the ฮ›CDM, f (R) and the DGP, models that are still degenerate with respect to present growth rate data); iii) the estimate of how errors on the growth rate depend on the degrees of freedom in the Fisher matrix analysis; iv) the ability of estimating a possible coupling between dark matter and dark energy. The main results of the analysis were already listed in the previous Section, here we recall the most relevant ones. 1. With the "internal bias" method we were able to estimate bias with 1% accuracy in a self consistent way using only galaxy positions in redshift-space. The precision in measuring the bias has a very little dependence on the functional form assumed for b(z). Measuring b with 1% accuracy will be a remarkable result also from an astrophysical point of view, since it will provide a strong, indirect constraint on the models of galaxy evolution. Table IX: 1ฯƒ marginalized errors for parameters ฮณ and w expressed through ฮณ and ฮท parameterizations. Columns ฮณ0,marg1, w0,marg1 refer to marginalization over ฮณ1, w1 (Fig. 5) while columns ฮณ0,marg2, w0,marg2 refer to marginalization over ฮท, w1 (Fig. 8). 2. We have demonstrated that measuring the amplitude and the slope of the power spectrum in different z-bin allows to constrain the growth rate with good accuracy, with no need to assume an external error for b(z). 
In particular, we found that s can be constrained at 1σ to within 3% in each of the 8 redshift bins from z = 0.5 to 2.1. This result is robust to the choice of the biasing function b(z). The accuracy in the measured s will be good enough to discriminate among the most popular competing models of dark energy and modified gravity. 3. Taking into account the possibility of a coupling between dark matter and dark energy has the effect of loosening the constraints on the relevant parameters, decreasing the statistical significance in distinguishing models (from 2σ to 1.5σ). Yet, this is still a remarkable improvement over the present situation, as can be appreciated from Fig. 9, where we compare the constraints expected from next-generation data to the present ones. Moreover, the Reference survey will be able to constrain the parameter η to within 0.04. Recalling that we can write η = 2.1 βc² [4], this means that the coupling parameter βc between dark energy and dark matter can be constrained to within 0.14, solely employing the growth rate information. This is comparable to existing constraints from the CMB but is complementary, since obviously it is obtained at much smaller redshifts. A coupling could therefore be detected by comparing the redshift survey results with the CMB ones. It is worth pointing out that, whenever we have performed statistical tests similar to those already discussed by other authors in the context of a EUCLID-like survey, we did find consistent results. Examples of this are the values of the FoM and the errors for w0, w1, similar to those in [37,40], and the errors on constant γ and w [40]. However, let us notice that all these values strictly depend on the parameterizations adopted and on the number of parameters fixed or marginalized over. In particular, we also found that all these constraints can be improved if one uses additional information from e.g. the CMB and other observations. We made a first step in this direction in Fig. 7, which shows how the errors on a constant γ decrease when progressively more parameters are fixed by external priors.

(Caption fragment: contours obtained from eqs. (7) and (4), marginalizing over η and w1 (black dashed curves); the marginalized error values are in columns σγ,marg,2, σw,marg,2 of Tab. X. Yellow dots represent the fiducial model, the triangles stand for an f(R) model and the squares mark the flat DGP.)

Figure 9: η-parameterization. Left panel: 1 and 2σ marginalized probability regions for the parameters γ and η in eq. (7) relative to the reference case (shaded blue regions), to the optimistic case (yellow long-dashed ellipses) and to the pessimistic case (black short-dashed ellipses). The red dot marks the fiducial model while the square represents the coupling model. Right panel: present constraints on γ and η computed through a full likelihood method (here the red dot marks the likelihood peak) [4]; long-dashed contours are obtained assuming a prior for Ωm,0.

Figure 10: The blue dashed ellipses are obtained by fixing γ0, γ1 to their fiducial values and marginalizing over all the other parameters; for the red shaded ellipses, instead, we also marginalize over γ0, γ1 but fix Ωk = 0. Finally, the black dotted ellipses are obtained by marginalizing over all parameters but w0 and w1. The progressive increase in the number of parameters is reflected in a widening of the ellipses, with a consequent decrease in the figures of merit (see Tab. XII).
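Throughout this Section, marginalized errors and figures of merit follow from inverting the full Fisher matrix and reading off sub-blocks of the resulting covariance. The snippet below is a minimal numpy sketch of that bookkeeping, not the actual pipeline: the toy Fisher matrix, the parameter ordering (γ0, γ1, w0, w1) and the figure-of-merit convention are assumptions for illustration.

```python
import numpy as np

def marginalized_block(F, idx):
    """Marginalize a Fisher matrix F over all parameters except those in idx:
    invert F to get the covariance, then keep the rows/columns listed in idx."""
    C = np.linalg.inv(F)
    return C[np.ix_(idx, idx)]

def one_sigma_errors(F):
    """Fully marginalized 1-sigma errors: square roots of the diagonal of F^-1."""
    return np.sqrt(np.diag(np.linalg.inv(F)))

def figure_of_merit(C2):
    """One common FoM convention (assumed here): inverse square root of the
    determinant of the 2x2 marginalized covariance, i.e. ~1/area of the ellipse."""
    return 1.0 / np.sqrt(np.linalg.det(C2))

# Toy 4-parameter Fisher matrix with the assumed ordering (gamma0, gamma1, w0, w1).
F = np.array([[ 4000., -1200.,  300.,  -80.],
              [-1200.,   900., -150.,   40.],
              [  300.,  -150.,  500., -120.],
              [  -80.,    40., -120.,   60.]])

sig = one_sigma_errors(F)                 # marginalized errors on all four parameters
C_gamma = marginalized_block(F, [0, 1])   # covariance of the (gamma0, gamma1) ellipse
print(sig, figure_of_merit(C_gamma))
```

The same helper applied to the (w0, w1) block reproduces the kind of comparison shown in Fig. 10: marginalizing over more parameters inflates the sub-covariance and lowers the figure of merit.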
pH-dependent Heterogeneity of Acidic Amino Acid Transport in Rabbit Jejunal Brush Border Membrane Vesicles*

Initial rates of Na+-dependent L-glutamic and D-aspartic acid uptake were determined at various substrate concentrations using a fast sampling, rapid filtration apparatus, and the resulting data were analyzed by nonlinear computer fitting to various transport models. At pH 6.0, L-glutamic acid transport was best accounted for by the presence of both high (Km = 61 μM) and low (Km = 7.0 mM) affinity pathways, whereas D-aspartic acid transport was restricted to a single high affinity route (Km = 80 μM). Excess D-aspartic acid and L-phenylalanine served to isolate L-glutamic acid flux through the remaining low and high affinity systems, respectively. Inhibition studies with other amino acids and analogs allowed us to identify the high affinity pathway as the X−AG system and the low affinity one as the intestinal NBB system. The pH dependences of the high and low affinity pathways of L-glutamic acid transport also allowed us to establish some relationship between the NBB and the more classical ASC system. Finally, these studies also revealed a heterotropic activation of the intestinal X−AG transport system by all neutral amino acids but glycine, through an apparent activation of Vmax.

The early studies of Gibson and Wiseman (1) provided the first evidence that, in the rat small intestine, the uptake of acidic amino acids is a carrier-mediated process. Since this work, a substantial body of evidence, as recently reviewed (2, 3), has accumulated showing a multiplicity of Na+-dependent and Na+-independent transport pathways for this class of amino acids in mammalian cells and tissues. Heterogeneity in Na+-dependent acidic amino acid transport systems has also been described in numerous cell types (2, 3). For example, both high and low affinity systems for L-glutamic acid transport have been reported in membrane preparations of the central nervous system (2-4), in cultured human skin fibroblasts (5) and rat hepatocytes (6), and in rat (7) and rabbit (8) renal brush border membrane vesicles. In the brush border membrane of the small intestine, contradictory results have appeared with regard to the heterogeneity of Na+-dependent acidic amino acid transport (2).
For example, Lerner and Steinke (9) reported a single transport system, with a Km of …, in the intestine. In that study, however, whereas the uptake of 50 μM L-glutamic acid was markedly inhibited by D-aspartic acid and L-γ-methylglutamic acid, it was also partially inhibited by a number of neutral amino acids such as L-proline, L-leucine, and L-alanine. More recent studies by Wingrove and Kimmich (10) did show the presence of both high (Km = 16 μM) and low affinity (Km = 2.7 mM) routes for the Na+-dependent transport of L-aspartic acid in a preparation of isolated chick intestinal epithelial cells. Similarly, in studies using intestinal brush border membrane vesicles, Corcelli et al. (11) found a single transport system in the rat, with a Km of 1.5 mM for L-glutamic acid and 1.0 mM for L-aspartic acid, whereas, in human, Rajendran et al. (12) demonstrated the presence of a high affinity transport system for L-glutamic acid with a Km of 91 μM, with no evidence for a low affinity transport pathway. Transport studies that have examined the pH dependence of acidic amino acid transport have also led to contradictory results when performed under Na+ gradient conditions alone, but seem to agree quite closely when done under optimum gradient conditions of both inward Na+ and outward K+ (2). This observation, in conjunction with the fact that a transport system for neutral amino acids with properties similar to those described for system ASC, according to Christensen's nomenclature (13), may serve upon protonation as a low affinity pathway for acidic amino acids in those cell types for which inhibitor specificity and pH dependence have been studied (14-16), could actually provide a rationale for the lack of consistency in the characterization of intestinal acidic amino acid transport pathways. Accordingly, the presence in the brush border membrane of the small intestine of both an ASC-like transporter for neutral amino acids and a more specific, X−AG-type (13) acidic amino acid transporter could have led to different results in the characterization of the acidic amino acid transport routes, depending on the pH conditions of the uptake assay. In this study, we have tested this hypothesis by determining the kinetics of both L-glutamic and D-aspartic acids under slightly acidic conditions (pH 6.0). The choice of these substrates was dictated quite naturally by the demonstration that the high affinity acidic amino acid transport system X−AG displays no stereospecificity between L- and D-aspartic acids, whereas ASC-like systems never accept D-aspartic acid as a substrate (13). Since transport heterogeneity was observed in the case of L-glutamic acid only, we then tried to delineate the two transport pathways by inhibition studies. Finally, we report the pH dependence of the two transport pathways. Altogether, these results do support the presence of both ASC- and X−AG-like systems in the rabbit jejunal brush border membrane.

Preparation of Brush Border Membrane Vesicles—Two large batches of rabbit jejunal brush border membrane vesicles were prepared, as described recently (17), using a modified homogenate media in combination with Mg2+ precipitation to ensure both vesicle pre-equilibration and stabilization. Briefly, for each batch of vesicles, the jejunum of 16 rabbits (male, New Zealand White, 2.0-2.5 kg) was removed and flushed with ice-cold saline.
The mucosal scrapings were homogenized at a 20:1 scrapings ratio (v/w) in the modified homogenate media containing a wide spectrum of phospholipase inhibitors (17). MgC1, was then added to give a final concentration of 10 mM, and the vesicles were prepared down to the second pellet (Pz). These were resuspended in a minimum volume of 50 mM Hepesl-Tris buffer, pH 7.0, containing 300 mM mannitol, combined, mixed, and divided into 500-pl aliquots that were frozen in liquid N,. On the day of vesicle preparation, a suitable number of aliquots were thawed and resuspended in the media required for the particular experiment (see descriptions in the figure and table legends) and prepared down to the final vesicle pellet (Pl). The vesicles, resuspended in the same media to give a final concentration of about 25 mg of protein/ml, were incubated overnight at 4 "C to ensure complete equilibration of the components of the resuspension media (17). On the next morning, the vesicles were divided into 25-pl aliquots suitable for individual uptake assays and frozen in liquid N:! until the time of assay to ensure complete stabilization of the specific activity of substrate uptake over the course of an experiment (17). Under these conditions, very similar uptake rates were routinely obtained from the same batch of vesicles when the same conditions of uptake were employed in separate experiments on different days. Also, since large batches of vesicles tend to reduce variations in uptake data due to animal differences, quite comparable results were obtained between the two batches used in these studies. Transport Assays-Initial rates of Na'-dependent L-glutamic and D-aspartic acid uptakes were determined using the fast sampling, rapid filtration apparatus recently developed in our laboratory (18). For each assay, 20 p1 of vesicles were loaded into the apparatus, and uptake was initiated by injecting the vesicles into 480 pl of the uptake media required for the particular assay (see the description in the figure and table legends). Tracer uptakes were determined at 35 "C by a nine-point automatic sequential sampling of the uptake mixture at 1.5-s intervals. At each time point, the apparatus automatically injected 50 pl of the uptake mixture into 1 ml of ice-cold stop solution (see the description in the figure and table legends), filtered each stopped sample through 0.65-pm cellulose nitrate filters, and washed the filters three times with 1 ml of ice-cold stop. Radioactivities on the filters were then determined by liquid scintillation counting as described previously (19). Data Analysis-Initial rates of [3H]~-glutamic acid and [%IDaspartic acid uptake were determined by linear regression over the nine-point time course of each assay, as described previously (18,20). The kinetic parameters of acidic amino acid transport or inhibition were estimated by nonlinear regression analysis using the standard errors of regression on the initial rates as weighting factors (20). Curve fitting of various transport or inhibition models to the nontransformed data was performed after proper transformation of the corresponding rate equations as justified recently (21) and described previously (20). Both linear and nonlinear regression analyses were performed using the Enzfitter program (R. J. Leatherbarrow;1987) and an IBM-compatible microcomputer. 
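To make the fitting procedure just described concrete, here is a schematic scipy version of a weighted nonlinear fit of non-transformed initial rates to a two-carrier-plus-diffusion model, followed by the Eadie-Hofstee transform used for visual appraisal. The function, starting values and synthetic data are illustrative assumptions, not the authors' Enzfitter protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_carriers_plus_diffusion(S, Vmax1, Km1, Vmax2, Km2, kd):
    """Initial uptake rate vs. substrate concentration S: high- and low-affinity
    Michaelis-Menten components working in parallel with a diffusional term."""
    return Vmax1 * S / (Km1 + S) + Vmax2 * S / (Km2 + S) + kd * S

# Hypothetical data: S in mM, v in pmol/mg protein/s, sd = standard error of each rate.
S  = np.array([0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0, 20.0])
v  = np.array([0.30, 0.55, 0.80, 1.05, 1.50, 2.0, 2.9, 5.0, 7.8, 12.0, 18.0])
sd = 0.05 * v + 0.02

p0 = [1.0, 0.06, 10.0, 7.0, 0.3]   # starting guesses: Vmax1, Km1 (mM), Vmax2, Km2 (mM), kd
popt, pcov = curve_fit(two_carriers_plus_diffusion, S, v, p0=p0,
                       sigma=sd, absolute_sigma=True, maxfev=10000)
perr = np.sqrt(np.diag(pcov))      # approximate standard errors of the fitted parameters

# Eadie-Hofstee transform of the carrier-mediated component: plot v_c vs v_c/S,
# where v_c = v - kd*S; curvature indicates more than one transport system.
v_carrier = v - popt[4] * S
eadie_x, eadie_y = v_carrier / S, v_carrier
```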
Only the best model fit to the data is reported in the figures, together with an Eadie-Hofstee transformation of the carrier-mediated process for visual appraisal of the goodness of fit (21). … Corp. Unlabeled D-aspartic and L-glutamic acids were purchased from … . Protein was measured with the BCA (bicinchoninic acid) assay kit from Pierce Chemical Co., using bovine serum albumin as a standard. (The abbreviations used are: Hepes, 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid; Mes, 4-morpholineethanesulfonic acid; Me-AIB, methylaminoisobutyric acid.)

RESULTS

Kinetics of L-Glutamic and D-Aspartic Acids at pH 6.0—The initial rates of tracer L-glutamic acid and D-aspartic acid uptake were determined, as for all other experiments reported in this paper, under conditions of isotonicity and isoosmolarity and in the presence of an inwardly directed Na+ gradient, with the membrane potential clamped to 0 mV using equal concentrations of the highly permeant anion iodide on both sides of the membrane (22). A direct plot of the initial rates of [3H]L-glutamic acid uptake at pH 6.0 as a function of varying concentrations of unlabeled L-glutamic acid in the uptake media is shown in Fig. 1. The best model fit to these data was that of two transport systems, with high (Km = 61 μM) and low (Km = 7.0 mM) affinities for the substrate, working in parallel with diffusion. Fig. 1 (inset) shows that the curved nature of the Eadie-Hofstee transformation of the transport-mediated component is indeed compatible with the presence of more than one transport system and clearly demonstrates the goodness of fit. Under the same assay conditions, Fig. 2 shows that the direct plot of the [3H]D-aspartic acid data has a completely different shape when compared with Fig. 1. In fact, the initial rates of tracer uptake remained constant at D-aspartic acid concentrations greater than 1 mM and actually matched the values of unspecific transport of L-glutamic acid. The best model fit to the data in that case was that of a single high affinity system (Km = 80 μM) plus a diffusional component, with no evidence to support more complex models of substrate uptake, as shown by the Eadie-Hofstee plot in Fig. 2.

Effect of D-Aspartic Acid on the Kinetics of L-Glutamic Acid at pH 6.0—The good agreement between the Km for D-aspartic acid transport and that for L-glutamic acid transport through the high affinity pathway seems to indicate that these two amino acids might share a common transport system. Accordingly, it should be possible to block [3H]L-glutamic acid flux through the high affinity system by incorporating saturating concentrations of D-aspartic acid in the uptake media. The results of such an experiment are presented in Fig. 3, where the assay conditions were those of Fig. 1 but for the addition of 5 mM D-aspartic acid in the uptake media. Under these conditions, tracer uptake rates of L-glutamic acid remained constant at unlabeled substrate concentrations of less than 1 mM, and the best model fit to the data was that of a single low affinity system (Km = 9.4 mM) plus a diffusional component, as is also obvious from the Eadie-Hofstee plot in Fig. 3 (inset). The kinetic parameters of this system are comparable with those of the low affinity system for L-glutamic acid uptake obtained in the absence of D-aspartic acid.

Inhibition of L-Glutamic Acid Uptake at pH 6.0 by D-Aspartic Acid and L-Phenylalanine—Using the kinetic parameters given in Fig.
1 and a substrate concentration of 50 μM, it can be calculated that about 20% of the total flux of L-glutamic acid should occur through the high affinity system, whereas the remainder should occur through the low affinity pathway and by diffusion across the membrane. As demonstrated in Fig. 4, increasing concentrations of D-aspartic acid in the uptake media caused an inhibition of L-glutamic acid uptake that reached a maximum of 25% of the rate obtained in the absence of inhibitor. Moreover, the estimated Ki for competitive inhibition was 75 μM, in close agreement with the Km for D-aspartic acid transport at pH 6.0 (Fig. 2). (Figure legend fragment: conditions as in Fig. 1, except that 50 mM L-phenylalanine was incorporated into each uptake medium in place of 52 mM acetate-Tris, pH 6.0.) On the other hand, increasing concentrations of L-glutamic acid progressively reduced the initial rates of D-aspartic acid uptake down to the level of diffusion, and the estimated Ki for competitive inhibition was 129 μM (results not shown), in good agreement also with the Km for L-glutamic acid transport through the isolated high affinity pathway at pH 6.0 (Fig. 5). These results thus demonstrate quite clearly that D-aspartic and L-glutamic acids share a common high affinity transport route in the rabbit small intestine. The nature of the low affinity pathway for L-glutamic acid transport cannot be inferred from these studies, however. Assuming that such a low affinity pathway could actually be represented by a neutral, ASC-like system in the brush border membrane, as has been found in other cell types (14-16), we thus tried to inhibit L-glutamic acid uptake with phenylalanine, a neutral amino acid with broad specificity for neutral amino acid carriers in the rabbit intestinal brush border membrane (23, 24). As shown in Fig. 4, addition of L-phenylalanine to the uptake media on top of saturating concentrations of D-aspartic acid caused a further inhibition of L-glutamic acid uptake rates down to the level of diffusion. The estimated Ki for competitive inhibition by L-phenylalanine was 0.60 mM in that experiment. If both L-glutamic acid and L-phenylalanine share the same transport system, with low affinity for the former substrate, it should be possible to block [3H]L-glutamic acid flux through the low affinity pathway by incorporating saturating concentrations of L-phenylalanine in the uptake media. The results of such an experiment are presented in Fig. 5, where the assay conditions were those of Fig. 1 but for the addition of 50 mM L-phenylalanine in the uptake media. Under these conditions, tracer uptake rates of L-glutamic acid remained constant at unlabeled substrate concentrations in excess of 1 mM, and the best model fit to the data was that of a single high affinity system (Km = 87 μM) plus a diffusional component, as was also obvious from the Eadie-Hofstee plot in Fig. 5 (inset). The Km of this system is identical to that of the high affinity pathway for L-glutamic acid transport obtained in the absence of L-phenylalanine (Fig. 1). However, the Vmax in the presence of L-phenylalanine was 8.85 ± 1.35 pmol/mg of protein/s, a value that represents an apparent 4.2-fold activation of the maximal velocity of the high affinity transport system.

Effect of Acidic Amino Acids on L-Phenylalanine Uptake at pH 6.0 and 8.0—Complete inhibition of the low affinity system for L-glutamic acid transport at pH 6.0 by L-phenylalanine implies a shared route for these amino acids under these conditions.
If this were the case, it should be possible to inhibit L-phenylalanine uptake by incorporating excess concentrations of L-glutamic acid in the uptake media. Fig. 6 shows the effects of 100 mM L-glutamic or D-aspartic acid on the initial rates of L-phenylalanine uptake at pH 6.0 and 8.0. L-Glutamic acid caused a 70% inhibition of uptake at pH 6.0 and had a small but nonsignificant effect at pH 8.0. In contrast, D-aspartic acid had no effect on the initial rates of uptake at either pH.

Effect of Amino Acids and Analogs on Uptake Rates of L-Glutamic and D-Aspartic Acids at pH 6.0—In order to get a better appraisal of the nature of the transport systems involved in the high and low affinity routes for L-acidic amino acid transport, we studied the effect of different classes of amino acids and analogs on the initial uptake rates of both D-aspartic and L-glutamic acids. Tables I and II show the results of such an experiment using 50 mM concentrations of these agents in the uptake media, in comparison to a control run in the presence of mannitol. Using D-aspartic acid (Table I), it appears that the acidic compounds, with the possible exception of D-glutamic acid, were all potent inhibitors, reducing uptake rates down to the level of diffusion obtained with 50 mM D-aspartic acid in the uptake media. None of the other compounds tested, however, demonstrated any capacity to inhibit D-aspartic acid uptake. On the contrary, and quite interestingly in fact, it would appear that all of the neutral amino acids but glycine stimulated the uptake rates in excess of controls, thus suggesting a possible activation of the high affinity system by neutral amino acids. Both threonine and isoleucine proved the most efficient in this respect, with mean activations of 60% above controls. With L-glutamic acid (Table II), the situation is more complex, but it is quite clear that the addition of D-aspartic, L-cysteinesulfinic, and L-cysteic acids to the uptake media caused about 30% inhibition of 50 μM L-glutamic acid uptake, in accordance with the flux expected through the high affinity system at pH 6.0 and this substrate concentration (Fig. 1). L-Aspartic acid, however, caused a 65% inhibition of uptake, thus suggesting an inhibition of both the high and low affinity pathways. (Table II legend fragment: Effect of amino acids and analogs on L-glutamic acid uptake rates at pH 6.0. Conditions for vesicle resuspension and uptake assays were the same as described in the legend to Table I … of the diffusion-corrected initial rates of transport. The D-aspartic acid-sensitive component of total transport is the difference between total and D-aspartic acid-insensitive transport rates.) All of the neutral amino acids tested were found to inhibit uptake. Since these same compounds did not inhibit D-aspartic acid uptake, it would appear that the effect is best explained by an inhibition of the low affinity pathway for L-glutamic acid transport. However, since these same amino acids also stimulated D-aspartic acid transport, their effects may not have been maximal. The remaining imino acids, basic amino acids, and Me-AIB had no obvious effects on L-glutamic acid uptake rates.

pH-dependent Heterogeneity of L-Glutamic Acid Transport—In Fig. 7 are shown the initial rates of 50 μM L-glutamic acid transport in the presence and absence of 5 mM D-aspartic acid at varying pH from 6.0 to 8.0.
Total flux at any given pH represents L-glutamic acid flux through both the high and low affinity transport systems, whereas the D-aspartic acid-insensitive component of total flux represents the isolated flux of L-glutamic acid through the low affinity system. By subtracting the mean uptake rates obtained in the presence of D-aspartic acid from the total flux, one can thus estimate the fraction of L-glutamic acid transport occurring through the high affinity system. It is quite clear from Fig. 7 that the low and high affinity routes for L-glutamic acid transport have quite different pH dependences. Whereas the former is most active at the acidic pH of 6.0 and declines steadily with increasing pH, such that it is almost inactive at pH 7.5 and 8.0, the latter showed an optimum pH around 7.0, with somewhat lower rates on moving to the basic or acidic side of neutrality. This apparent "bell-shaped" pH dependence for acidic amino acid transport through the high affinity route was also obtained when D-aspartic acid was used as a specific substrate (Fig. 8), thus indicating that the D-isomer of aspartic acid behaves as a comparable substrate to L-glutamic acid, at least with regard to determining the pH dependence of the high affinity system. In a first attempt to determine the effect of pH on the kinetics of the high affinity transport pathway for acidic amino acids, we measured the initial rates of [3H]D-aspartic acid uptake at pH 8.0 as a function of varying concentrations of unlabeled D-aspartic acid. Fig. 9 shows the results of this experiment; the best-fit model was that of a single high affinity system (Km = 79 μM) plus diffusion, with no evidence for a more complex model, as evidenced by the Eadie-Hofstee plot in the inset. Comparing the kinetic parameters for D-aspartic acid uptake at pH 6.0 (Fig. 2) and 8.0 (Fig. 9), and considering the pH dependence of transport (Fig. 8), it would appear that the pH effect might be entirely explained through an effect on Vmax, with no effect on Km.

DISCUSSION

Using stabilized and fully equilibrated preparations of rabbit jejunal brush border membrane vesicles (17), and following a protocol of nonlinear regression analysis of nontransformed true initial rates of tracer uptake (20, 21), as determined with the fast sampling, rapid filtration apparatus (18), we found evidence for both high and low affinity Na+-dependent transport systems for L-glutamic acid at pH 6.0 (Fig. 1). Under the same conditions, however, the uptake of D-aspartic acid was restricted to a single high affinity system (Fig. 2) that was shown to be identical with the high affinity route of L-glutamic acid transport (Figs. 3 and 4). (Figure legend fragment: conditions as in Figs. 1 and 2, except for the buffer in the vesicle resuspension media and the uptake media, which consisted of 50 mM Hepes-Tris buffer, pH 8.0; all other solutions were also adjusted to pH 8.0 when required.) Moreover, this high affinity pathway was completely inhibited by L-aspartic, L-glutamic, L-cysteinesulfinic, and L-cysteic acids (Tables I and II) but was not inhibited by any other amino acids or analogs when using its specific substrate, D-aspartic acid (Table I). These basic properties are the same as those originally described for system X−AG in the hepatocyte plasma membrane (6, 13-15). Therefore, it is reasonable to identify the high affinity acidic amino acid transporter in the rabbit intestinal brush border membrane as a member of the system X−AG family of transporters.
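Returning to the decomposition underlying Fig. 7 (high affinity flux estimated as total flux minus the D-aspartic acid-insensitive flux), the arithmetic and the associated error propagation can be written out as below; the rates are placeholders, not the measured values.

```python
import numpy as np

# Hypothetical mean initial rates (pmol/mg protein/s) and their SEM at each pH.
pH          = np.array([6.0, 6.5, 7.0, 7.5, 8.0])
total       = np.array([2.4, 2.2, 2.1, 1.6, 1.4])    # 50 uM L-Glu, no inhibitor
total_sem   = np.array([0.10, 0.09, 0.08, 0.07, 0.07])
insensitive = np.array([1.5, 1.1, 0.7, 0.3, 0.2])    # + 5 mM D-Asp (low affinity + diffusion)
insens_sem  = np.array([0.08, 0.07, 0.06, 0.05, 0.05])

# D-aspartate-sensitive (high affinity) component and its propagated uncertainty.
high_affinity     = total - insensitive
high_affinity_sem = np.sqrt(total_sem**2 + insens_sem**2)

# Fraction of the total flux carried by the high affinity system at each pH.
fraction_high = high_affinity / total
```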
The transport of 50 p~ L-glutamic or D-aspartic acids by system Xic in the vesicles showed a pH optimum of 7.0 with declining rates of transport observed in shifting the pH toward 6.0 or 8.0 (Figs. 7 and 8). Bell-shaped pH-dependent profiles for L-glutamic acid have already been reported in brush border membrane vesicles from rabbit (25) and human (26) jejunum and from rat kidney (7), and in the hamster kidney cell line BHK21-Cl3 (27). Since the major substrate species at pH 7.0 is the anionic form of these amino acids (-99%) and since its concentration is increased by less than 1% when the pH is increased to 8.0 and drops by 7-9% when the pH is decreased to 6.0, it is quite clear from Fig. 8 that the pH-dependent variations in the initial rates of transport (22-28% from pH 7.0 to 8.0 and 38-44% from pH 7.0 to 6.0) cannot be ascribed solely to changes in the ionization state of the substrates. It must then be concluded that the anionic form of the substrates is the transported species and that the pH dependence of transport reflects changes in the ionization state of the carrier molecule itself. Accordingly, Berteloot and Maenz (2) have recently speculated that system XiG in the intestine and kidney possesses two H+-titratable regulatory sites with separate pK values and that optimal transporter function occurs when one site is protonated while the other site is in the deprotonated state. Moreover, it would appear that the pH effect may be entirely due to a V,,, effect (compare Figs. 2 and 9). Kinetic parameter determinations on a wider range of p H values may, however, be required to ascertain this conclusion. The nature of the low affinity transport system for acidic amino acids was investigated using L-phenylalanine as a potential inhibitor since this amino acid is known to be a substrate for the PHE (phenylalanine) and NBB (neutral brush border) systems present in the intestinal brush border membrane (23, 24). In our vesicle preparation, at pH 6.0, Lphenylalanine did block the flux of L-glutamic acid through the low affinity pathway (Fig. 5) with kinetics compatible with those expected from a competitive inhibition (Fig. 4), and excess concentrations of L-glutamic acid caused a marked inhibition of L-phenylalanine uptake (Fig. 6). In addition, all of the neutral amino acids tested caused a partial inhibition of L-glutamic acid uptake at pH 6.0, whereas proline, arginine, methylaminoisobutyric acid, and lysine had no inhibitory effects (Table 11). These results rule out the possibility that systems IMINO, y+, A, or L, as defined by Stevens et al. (23) in the small intestine, represent the low affinity route for Lglutamic acid transport. Instead, these results provide good evidence that the low affinity pathway may actually represent L-glutamic acid flux through the NBB system. Transport of 50 p~ L-glutamic acid through the low affinity system was marginal at the basic pH values of 8.0 and 7.5 but increased steadily thereafter as the pH decreases to 6.0 (Fig. 7). In agreement with these results, excess L-glutamic acid caused little or no inhibition of L-phenylalanine uptake at pH 8.0 (Fig. 6). This pH dependence is similar to that described for L-acidic amino acid flux through the neutral amino acid transport system ASC in the hepatocyte plasma membrane (13-15) for which it was hypothesized that a H+-titrable site regulates the functional capacity to transport acidic amino acids (13). 
These results thus indicate that the NBB system of the intestinal brush border membrane may belong to the ASC family of transport systems, despite a substrate specificity that was found not to conform to the classical ASC pathway (24). An unexpected finding during these studies was that most of the neutral amino acids tested, with the exception of glycine, produced an activation of the high affinity transport of acidic amino acids (Tables I and II). At least at pH 6.0, and in the presence of 50 mM L-phenylalanine in the uptake media with L-glutamic acid as substrate, it would appear that this heterotropic activation occurs through a Vmax effect only (Fig. 5). Such an activation of system X−AG by neutral amino acids has never been reported before in any cell type, and this property may well be specific to intestinal (and maybe renal) cells. It should be noted, however, that leucine has already been proposed as an allosteric modulator of the lysine transporter in the rat intestinal basolateral membrane (28). We are currently characterizing the effects of neutral amino acids on system X−AG in our vesicle preparation using D-aspartic acid as a specific substrate. In conclusion, this study demonstrates that L-glutamic acid is transported by both high and low affinity Na+-dependent transport systems at acidic pH, whereas D-aspartic acid serves as a specific substrate for the high affinity system. These findings call into question many of the results obtained on the effects of varying pH and imposing H+ gradients across membranes using L-acidic amino acids as substrates, and they establish the necessity of using a specific substrate such as D-aspartic acid in future research on characterizing system X−AG in the intestinal epithelium.
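Several of the apparent Ki values quoted in the Results come from dose-response experiments in which uptake of a fixed tracer concentration is measured against increasing inhibitor concentrations. A minimal way to extract an apparent Ki from such data is sketched below; the one-site-plus-resistant-component model and the numbers are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def inhibition_curve(I, v_sensitive, Ki_app, v_resistant):
    """Initial rate at a fixed tracer concentration vs. inhibitor concentration I:
    one saturable component inhibited with apparent constant Ki_app, plus a
    residual (inhibitor-resistant plus diffusional) component."""
    return v_sensitive / (1.0 + I / Ki_app) + v_resistant

# Hypothetical inhibition of 50 uM L-glutamic acid uptake by D-aspartic acid (I in uM).
I = np.array([0.0, 10.0, 30.0, 100.0, 300.0, 1000.0, 5000.0])
v = np.array([2.40, 2.32, 2.20, 2.05, 1.92, 1.84, 1.80])

popt, pcov = curve_fit(inhibition_curve, I, v, p0=[0.6, 80.0, 1.8])
v_sens, Ki_app, v_res = popt
print(f"apparent Ki = {Ki_app:.0f} uM, inhibitable fraction = {v_sens/(v_sens+v_res):.2f}")
```

Note that for a truly competitive mechanism the apparent constant fitted this way still contains the (1 + S/Km) factor set by the tracer concentration.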
Planet formation models: the interplay with the planetesimal disc According to the sequential accretion model, giant planet formation is based first on the formation of a solid core which, when massive enough, can gravitationally bind gas from the nebula to form the envelope. In order to trigger the accretion of gas, the core has to grow up to several Earth masses before the gas component of the protoplanetary disc dissipates. We compute the formation of planets, considering the oligarchic regime for the growth of the solid core. Embryos growing in the disc stir their neighbour planetesimals, exciting their relative velocities, which makes accretion more difficult. We compute the excitation state of planetesimals, as a result of stirring by forming planets, and gas-solid interactions. We find that the formation of giant planets is favoured by the accretion of small planetesimals, as their random velocities are more easily damped by the gas drag of the nebula. Moreover, the capture radius of a protoplanet with a (tiny) envelope is also larger for small planetesimals. However, planets migrate as a result of disc-planet angular momentum exchange, with important consequences for their survival: due to the slow growth of a protoplanet in the oligarchic regime, rapid inward type I migration has important implications on intermediate mass planets that have not started yet their runaway accretion phase of gas. Most of these planets are lost in the central star. Surviving planets have either masses below 10 ME or above several Jupiter masses. To form giant planets before the dissipation of the disc, small planetesimals (~ 0.1 km) have to be the major contributors of the solid accretion process. However, the combination of oligarchic growth and fast inward migration leads to the absence of intermediate mass planets. Other processes must therefore be at work in order to explain the population of extrasolar planets presently known. Introduction Since the discovery of the first extrasolar planet around a solartype star (Mayor & Queloz, 1995) about 800 extrasolar planets have been identified. Observations indicate that planets are abundant in the universe. Planets orbiting stars show a great variety of semi-major axis (from less than 0.01 AU to more than hundreds of AU) and masses (from less than an Earth mass to several Jupiter masses), as can be found in The Extrasolar Planets Encyclopaedia (http://exoplanet.eu/). Planet formation models should be able to explain this observed diversity. The sequential accretion model or also called core nucleated accretion model is currently the most accepted scenario for planetary formation (e.g. Mizuno 1980, Pollack et al. 1996, Alibert et al. 2005 a, b, among others), as it can account naturally for the formation of planets in all mass ranges 1 . It proposes that planetary growth occurs mainly in two stages. In the first stage, the formation of planets is dominated by the accretion of solids. If the protoplanet is able to grow massive enough (โˆผ 10 M โŠ• ) while the gas component of the protoplanetary disc is still present, it can bind gravitationally some of the surrounding gas, giving birth to Since it was first proposed (Mizuno 1980), the sequential accretion model has been extensively studied and improved, trying to include the many fundamental processes that occur simultaneously with the growth of the planet, and that impact directly on it. Constructing a complete model that accounts for all these processes in a reasonable way is a hard task. 
Among its main ingredients, it has to include a realistic model for an evolving protoplanetary disc, a model for the accretion of solids and gas to form the planets (which itself requires a knowledge of the internal structure of the planet), a model for the interactions between the planets and the disc, and a model for the interactions between the forming planets. Each of these topics is itself an independent, ongoing area of research. Alibert et al. (2005) (A05 from now on) included some of these processes in a single planet formation model. Given the complexity of the problem and the unknowns related to some of these processes, many simplifications have to be assumed in order to keep the problem tractable from the physical and computational point of view. This is especially important, as we aim to compute thousands of simulations in order to account for the wide range of possible initial conditions (see Mordasini et al. 2009, M09 from now on, for more details). Therefore, our models represent a compromise between accuracy and simplicity in their physical description. The first stage of planetary formation corresponds to the growth of the solid embryo, which is dominated by the accretion of planetesimals. The growth of an embryo itself proceeds in two different regimes (Ida & Makino 1993, Ormel et al. 2010). At the beginning, big planetesimals, which have larger cross-sections, are favoured to grow even bigger by accreting the planetesimals they encounter on their way. Being more massive, in turn, enlarges the gravitational focusing, which leads to accretion in a runaway fashion. However, at some point these runaway embryos become massive enough to stir the planetesimals around them. This results in an increase of the relative velocities and the corresponding reduction of the gravitational focusing. Growth among small planetesimals is stalled and only big embryos have the possibility to continue accreting, although at a slower pace. This second regime in the growth of solid embryos is known as oligarchic growth, as only the larger planetesimals or embryos (the oligarchs) are able to keep on growing. One important aspect is that the transition between runaway and oligarchic growth occurs for very small embryos. As shown in Ormel et al. (2010), the actual mass for this transition depends upon many factors: the size of the accreted planetesimals, the location in the disc, and the surface density of solids, among others. In most cases, an embryo of ∼0.01 M⊕, or even smaller, is already growing in the oligarchic regime. Our model builds on the models of A05 and M09. Our primary aim is to study the formation of planets of different sizes, and in particular the cases for which the accretion of gas is important. Therefore, in the computations presented here, we focus on the first phase of planetary formation, when the gas component of the disc is still present (in the case of small rocky planets, collisions between embryos after the dissipation of the disc should be included in order to calculate their final masses). For the formation of giant planets, the growth of the solid core is dominated by oligarchic growth. One of the weak points of the majority of previous giant planet formation models (e.g. Pollack et al. 1996, Hubickyj et al. 2005, A05, M09, Lissauer 2009, Mordasini et al.
2012) is the description of the solid disc, in particular of the interactions between forming planets and planetesimals. Such simplified models lead to an overestimation of the solid accretion rate which, in turn, results in an underestimation of the formation time of the whole planet. Indeed, in those works the model for the accretion rate of solids is oversimplified: the whole formation of the giant planets' cores is assumed to proceed very fast, underestimating the excitation that planetesimals suffer due to the presence of the embryos. When oligarchic growth is adopted as the dominant growth mode, giant planet formation turns out to be more difficult. Formation times become much longer than the typical lifetime of the protoplanetary disc. Fortier et al. (2007, 2009) and Benvenuto et al. (2009) studied the formation of giant planets adopting the oligarchic growth for the core. Assuming in situ formation for the planets and a simple, non-evolving protoplanetary disc, they showed that the formation of giant planets is unlikely if the planetesimals populating the disc are big (more than a few kilometres in size). However, formation could be accelerated if most of the accreted mass is in small planetesimals (less than 0.1 km). Guilera et al. (2010, 2011), also considering in situ models, studied the simultaneous formation of several planets where planetesimal drifting is included. They consider different density profiles for the disc and they find that the formation of the giant planets of the Solar System is possible only in the case of massive discs, and only if planetesimal radii are smaller than 1 km. These models, however, do not take into account that planets would likely migrate during their formation. In this work we include in our planet formation model a more realistic description of the accretion of solids. In Sect. 2, we review the basics of the A05 formation model, presenting some improvements in the computation of the disc structure, the internal structure and migration. In Sect. 3 we describe the new treatment of the accretion of planetesimals. In Sect. 4 we present the results obtained for the formation of isolated planets (the formation of planetary systems is described in Alibert et al. 2012 and Carron et al. 2012). In Sect. 5 we discuss our results and put them in context. Finally, in Sect. 6 we summarise our results and underline the main conclusions.

Formation model

The model and the numerical code used to calculate the formation of planets are in essence the same as in A05. In what follows, we summarise the most relevant aspects of the model and the improvements that have been introduced since that work. In the next section, we focus on the accretion rate of solids and describe in detail the adopted model for the protoplanet-planetesimal interactions.

Protoplanetary disc: gas phase

The structure and evolution of the protoplanetary disc are computed by first determining the vertical structure of the disc, for each distance to the central star, and second, computing the radial evolution due to viscosity, photoevaporation, and mass accretion by the forming planets.

Vertical structure

The vertical disc structure is computed by solving three equations reflecting the hydrostatic equilibrium, the energy conservation, and the diffusion of the radiative flux, the last of which reads F = −(16πσ_SB T³ / 3κρ_gas) ∂T/∂z. (3)
In these equations, z is the vertical coordinate, ฯ gas the gas density, P the pressure, T the temperature, ฮฝ the macroscopic viscosity, F the radiative flux, ฮบ is the opacity (Bell & Lin 1994), and ฯƒ SB is the Stefan-Boltzmann constant. The Keplerian frequency, ฮฉ, is given by G being the gravitational constant, M the mass of the central star and a the distance to the star 2 . The equations (1)-(3) are solved with four boundary conditions. The first three are the temperature, the pressure and the energy flux at the surface. The surface of the disc is defined as the place where the vertical optical depth (between the surface and infinity) is equal to 0.01. The fourth boundary condition is that the energy flux equals 0 in the midplane (see A05 for details). The three differential equations, together with the four boundary conditions, have a solution only for one value of the disc thickness H which gives the location of the disc surface. The macroscopic viscosity ฮฝ is calculated using the standard Shakura & Sunyaev (1973) ฮฑโˆ’parametrization, ฮฝ = ฮฑc 2 s /ฮฉ. The speed of sound c s is determined from the equation of state (Saumon et al. 1995). The temperature at the surface T surf is computed as in A05. In the models presented in this paper, ฮฑ is set to 7ร—10 โˆ’3 . This value of the alpha parameter has to be taken as an example. In the calculations of this work we neglect irradiation and the possible presence of a dead zone. These effects will be included in future works. Evolution The evolution of the gas disc surface density (ฮฃ = H โˆ’H ฯ gas dz) is computed by solving the diffusion equation: whereฮฝ is the effective viscosity,ฮฝ โ‰ก 1 ฮฃ H โˆ’H ฮฝฯ gas dz . Photoevaporation is included using the model of Veras & Armitage (2004): where R g = 5 AU. The total mass loss due to photoevaporation is a free parameter. The sink termQ planet is equal to the gas mass accreted by the forming planets. For every forming planet, mass is removed from the protoplanetary disc in an annulus centred on the planet, with a width equal to the planet's Hill radius where M is the total mass of the planet and a M is the location of the planet. Eq. 5 is solved on a grid which extends from the innermost radius of the disc to 1000 AU. At these two points, the surface density is constantly equal to 0. The innermost radius of the disc is of the order of 0.1 AU. Fig. 1 presents a typical evolution of a disc, whose parameters correspond to the first row of table 1, where the curves are plotted every 10 5 years. In this model, the photoevaporation term is adjusted in order to obtain a disc lifetime equal to 3 Myr. The characteristics of the protoplanetary disc are chosen to match as close as possible the observations. The initial disc density profiles we consider are given by: 2 Note that we assume that the disc is thin, and the distance to the central star does not vary on a vertical slide of the disc. where a 0 is equal to 5.2 AU. The mass of the disc (M disc ), the characteristic scaling radius (a C ) and the power index (ฮณ) are derived from the observations of Andrews et al. (2010). Adopting this kind of initial density profile is a difference from previous works (A05 and M09). For numerical reasons, the innermost disc radius, a inner , is always greater than or equal to 0.1 AU, and differs in some cases from the one cited in Andrews et al. (2010). The afore-mentioned parameters used to generate the initial disc's profile are listed in table 1. Note that Andrews et al. 
(2010) derive also a value for the viscosity parameter ฮฑ. On the contrary, and for simplicity, we assume here that the viscosity parameter is the same for all the protoplanetary discs we consider (ฮฑ = 7 ร— 10 โˆ’3 ). Using a different ฮฑ parameter will be the subject of future work. Note also that, in the observations of Andrews et al. (2010), the mass of the central star ranges from 0.3 to 1.3 M . However, we assume here that these disc profiles are all suitable for protoplanetary discs around solar mass stars. Future disc observations will help improving this part of our models. As in A05, the planetesimal-to-gas ratio is assumed to scale with the metallicity of the central star. For every protoplanetary disc we consider, we select at random the metallicity of a star from a list of โˆผ 1000 CORALIE targets (Santos, private communication). Finally, following Mamajek (2009), we assume that the cumulative distribution of disc lifetimes decays exponentially with a characteristic time of 2.5 Myr. When a lifetime T disc is selected, we adjust the photoevaporation rate in order that the protoplanetary disc mass reaches 10 โˆ’5 M at the time t = T disc , when we stop the calculation. Protoplanetary disc: solid phase We consider that the planetesimal disc is composed of rocky and icy planetesimals. Here we assume a mean density of 3.2 g/cm 3 for the rocky ones, and 1 g/cm 3 for the icy. The rocky planetesimals are located between the innermost point of the disc (given by the fourth column of table 1), and the initial location of the ice line, whereas the disc of icy planetesimals extends from the ice line to the outermost point in the simulation disc. The location of the ice line is computed from the initial gas disc model, using the central temperature and pressure. The ice sublimation temperature we use depends upon the pressure. Note that in our model, the location of the ice line does not evolve with time. In particular, no condensation of moist gas, or sublimation of icy planetesimals is taken into account. Moreover, the location of the ice line is based on the central pressure and temperature, meaning that the ice line is taken to be independent of the height in the disc. Note that in reality the ice line is likely to be an "ice surface" whose location depends upon the height inside the disc (see Min et al. 2011). For the models presented here, we assume that all planetesimals have the same radius. Planetesimals' mass is calculated assuming that they are spherical and have constant density (which depends on their location in the disc), and it does not evolve with time. The extension of our calculations towards non-uniform and time evolving planetesimal mass function is something we are working on and will be included in the next paper. The surface density of planetesimals, ฮฃ m , is assumed to be proportional to the initial surface density of gas ฮฃ 0 . This means that where f D/G is the dust-to-gas ratio of the disc and it scales with the metallicity of the central star (the lowest value being 0.003 and the largest one 0.125), and f R/I takes into account the degree of condensation of heavy elements. As in M09, we consider f R/I = 1/4 inside the ice line, and f R/I = 1 beyond it, for rocky and icy planetesimals respectively. The surface density of planetesimals evolves as a result of accretion and ejection by the forming planets. The same procedure as in A05 is adopted. 
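The planetesimal disc set-up described above amounts to scaling the initial gas profile by the dust-to-gas ratio and the rock/ice condensation factor. The helper below encodes that reading, with the proportionality Σ_m = f_R/I × f_D/G × Σ_0 taken as an assumption consistent with the text; the gas profile passed in is arbitrary.

```python
import numpy as np

def planetesimal_surface_density(a, sigma_gas_0, a_ice, f_dg):
    """Initial planetesimal surface density Sigma_m(a).

    a           : orbital distance(s) [AU]
    sigma_gas_0 : initial gas surface density at a [g/cm^2]
    a_ice       : location of the ice line [AU] (fixed in time in this model)
    f_dg        : dust-to-gas ratio (scales with stellar metallicity, 0.003-0.125)
    """
    a = np.asarray(a, dtype=float)
    # Rock/ice condensation factor: 1/4 inside the ice line, 1 beyond it.
    f_ri = np.where(a < a_ice, 0.25, 1.0)
    return f_ri * f_dg * sigma_gas_0

def planetesimal_mass(r_m_km, a, a_ice):
    """Mass of a single planetesimal of radius r_m, assumed spherical with a
    constant bulk density of 3.2 g/cm^3 (rocky, inside the ice line) or
    1.0 g/cm^3 (icy, outside)."""
    rho = 3.2 if a < a_ice else 1.0          # g/cm^3
    r_cm = r_m_km * 1.0e5
    return 4.0 / 3.0 * np.pi * rho * r_cm**3  # grams
```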
Planetesimals that can be accreted by a growing planet are those within the planet's feeding zone, here assumed to be an annulus of 5 R H at each side of the orbit. The solids surface density inside the feeding zone is considered to be constant, i.e. the protoplanet instantaneously homogenises it as a result of the scattering it produces among planetesimals. Ejected planetesimals are considered to be lost. We stress that this work is the first one of a series of papers. Here, effects as planetesimal drifting due to gas drag, fragmentation and planetesimal size distribution are neglected, but will be included in future works. Equations Planetary growth proceeds through solids and gas accretion. Gas accretion is a result of a planet's contraction, and is computed by solving the standard internal structure equations: where r, P, T are respectively the radius, the pressure and the temperature inside the envelope. These three quantities depend upon the gas mass, M r , included in a sphere of radius r. The temperature gradient is given by the adiabatic (โˆ‡ ad ) or by the radiative gradient (โˆ‡ rad ), depending upon the stability of the zone against convection, which we check using the Schwarzschild criterion. These equations are solved using the equation of state (EOS) of Saumon et al. (1995). The opacity, which enters in the radiative gradient โˆ‡ rad is computed from Bell and Lin (1994). In this work we assume that the grain opacity is the full interstellar opacity. However, Podolak (2003) and Movshovitz & Podolak (2008) showed that the grain opacity in the envelope of a forming planets should be much lower than the interstellar one. Reducing the grain opacity accelerates the formation of giant planets (Pollack at al. 1996, Hubickyj et al. 2005 because it allows the runaway of gas to start at smaller core masses. Since the objective of the present paper is to explore the consequences of the planet-planetesimal interactions, we only consider a single opacity, corresponding to the full interstellar opacity. Note that we have omitted the energy equation that gives the luminosity, which itself enters in the radiative gradient (in the parts of the planet that are stable against convection), and in the determination of the planet stability to convective motions. Including the energy equation in its standard form dL dM r = m โˆ’ T dS dt (the first term resulting from the accretion of planetesimals, the second one to the contraction of the planet) brings usually numerical difficulties. Here we follow Mordasini et al. (2012b) to calculate the luminosity in an easier way. Note, however, that we improved their approach to take into account analytically the energy of the core. The total luminosity is given by L = L cont + L m , L cont being the contraction luminosity and L m the accretion luminosity. We assume L cont to be constant in the whole planetary envelope. L cont is computed as the result of the change in total energy of the planet between two time steps t and t + dt: where E tot is the total planetary energy and E gas,acc = dtแน€ gas u int is the energy advected during gas accretion (u int being the internal specific energy). E gas,acc is negligible compared to the other terms. The luminosity due to accretion of planetesimals is However, the energy at the time t + dt is not known before computing the internal structure at this given time. To circumvent this problem, we use the following approach: the energy is split in two parts, one related to the core, one to the envelope. 
The core energy is given by E core = โˆ’(3/5) GM 2 core /R core , the core density being assumed to be uniform. The envelope energy is assumed to follow a similar functional form: E env = โˆ’k env M env g, where g is a mean gravity, taken to be G(M core /R core + M tot /R planet ). This last formula defines k env , in which all our ignorance of the internal structure is hidden. In order to calculate the envelope energy at time t + dt, the value of k env is first taken to be the value resulting from the structure at time t. Then, iteration on k env is performed until convergence is reached. In general, only a first order correction is enough to reach a satisfactory solution. Boundary conditions The internal structure equations are solved for M r varying between the core mass M core , and the total planetary mass. Four boundary conditions are given, namely the core radius, the total planetary radius, and T surf,planet and P surf,planet , the temperature and pressure at this point. Given the boundary conditions, the differential equations have only one solution for a given total planetary mass. The core radius is given as a function of the core luminosity and the pressure at the core surface by the following formula, which constitute a fit to the results of Valencia et al. (2010): where P core is the pressure a the core-envelope interface, expressed in GPa. The radius of the planet, R M , is given by (Lissauer et al. 2009): where c 2 s is the square of the sound velocity in the disc midplane at the planet's location, k 1 = 1 and k 2 = 1/4. At the planet's surface, the temperature and pressure are given by: and P surf,planet = P disc , with ฯ„ = ฮบ(T disc , ฯ disc )ฯ disc R M , L is the planet luminosity, and T disc , ฯ disc , P disc are the temperature, density and pressure in the disc midplane at the location of the planet. Gas accretion: detached phase By solving the differential equations (10) to (12) with the boundary conditions mentioned above, one can derive the planetary envelope mass as a function of time, and therefore the gas accretion rateแน€ gas . However, the rate of gas accretion that can be sustained by the protoplanetary disc is not arbitrary, and is in particular limited by the viscosity. When the gas accretion rate required by the forming planet is larger than the one that can be delivered by the disc,แน€ gas,max , the planet goes into the detached phase. In the detached phase, the planetary growth rate by gas accretion does not depend upon its internal structure, but is rather given by the structure and evolution of the disc. During this phase, the internal structure is given by solving the same equations (10) to (12), this time for a mass M r ranging from M core to M planet (which is known). The boundary conditions are the same, except two of them: the pressure, that includes the dynamical pressure due to gas free falling from the disc to the planet, In this equation, v ff is the free falling velocity from the Hill radius to the planetary radius, v ff = โˆ’ โˆš 2GM ร— (1/R M โˆ’ 1/R H ). Note that the planetary radius is not known a priori, but computed as a result of integrating Eqs. (10) to (12). the maximum accretion rate,แน€ gas,max , which is equal tศฏ (20) where F = 3ฯ€ฮฝฮฃ + 6ฯ€r โˆ‚ฮฝฮฃ โˆ‚a is the mass flux in the disc. Geometrically, the maximum accretion rate that can be provided by the disc is equal to the mass flux entering the planet's gas feeding zone. 
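The disc-limited accretion rate invoked above can be evaluated directly from the quoted mass-flux expression F = 3πνΣ + 6πr ∂(νΣ)/∂a. The sketch below does this on a radial grid; evaluating |F| at the planet's location as a proxy for the flux entering the feeding zone is a simplification assumed here, not the exact implementation.

```python
import numpy as np

def disc_mass_flux(a, Sigma, nu):
    """Mass flux in the disc, F = 3*pi*nu*Sigma + 6*pi*a*d(nu*Sigma)/da,
    evaluated on a radial grid a (the derivative is taken numerically)."""
    nu_sigma = nu * Sigma
    dnu_sigma_da = np.gradient(nu_sigma, a)
    return 3.0 * np.pi * nu_sigma + 6.0 * np.pi * a * dnu_sigma_da

def max_gas_accretion(a_planet, a, Sigma, nu):
    """Maximum gas accretion rate the disc can supply to a planet at a_planet:
    geometrically, the mass flux entering the planet's gas feeding zone, here
    approximated by the local |F| interpolated at the planet's location."""
    F = disc_mass_flux(a, Sigma, nu)
    return np.abs(np.interp(a_planet, a, F))
```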
The gas can enter either from the outer parts of the disc (which is the general case), or from the inner part of the disc (which can be the case in the outer regions of the disc). Therefore, during a time step, a planet has access to the mass delivered at its location by the disc (Ṁ_gas,max × dt), and to a mass reservoir made of the gas already present in the planet's gas feeding zone (see also Ida and Lin 2004). This reservoir of gas is assumed to be empty when the planet is massive enough to open a gap (which coincides with the transition to type II migration, see next section). However, the feeding zone continues receiving gas due to viscosity at the local accretion rate (Eq. 20).

Orbital evolution: disc-planet interaction

Disc-planet interaction leads to planet migration, which can occur in different regimes. For low mass planets, not massive enough to open a gap in the protoplanetary disc, migration occurs in type I (Ward 1997, Tanaka et al. 2002, Paardekooper et al. 2010, 2011). For higher mass planets, migration is again subdivided in two modes: disc-dominated type II migration, when the local disc mass is larger than the planetary mass (the migration rate is then simply given by the viscous evolution of the protoplanetary disc), and planet-dominated type II migration in the opposite case (see M09). The transition between type I and type II migration occurs when $\frac{3}{4}\frac{H_{\rm disc}}{R_H} + \frac{50}{q\,Re} \lesssim 1$ (Crida et al. 2006), where H_disc is the disc scale-height at the location of the planet, q is the planet-to-star mass ratio, and $Re = a_M^2\,\Omega/\nu$ is the macroscopic Reynolds number at the location of the planet (ν is the same as the one used for the disc evolution). First models of type I migration (Ward 1997, Tanaka et al. 2002) predicted such rapid migration rates that it was necessary to reduce the migration rate arbitrarily by a constant factor, named f_I in A05 and M09, in order to reproduce observations. Since these first calculations, type I migration has been studied in great detail, and new formulations for type I migration rates are now available (Paardekooper et al. 2010, 2011). We use in our model an analytic description of type I migration, which reproduces the results of Paardekooper et al. (2011). A detailed description of this model is presented in Dittkrist et al. (in prep.), and preliminary results have been presented in Mordasini et al. (2010).

The accretion rate of solids

The growth of the solid component of a protoplanet, M_core, is assumed to be due to the accretion of planetesimals. Adopting the particle-in-a-box approximation, its growth rate can be calculated following Chambers (2006), in terms of Σ_m, the solids surface density at the location of the protoplanet, its orbital period P_orbital, and the collision rate P_coll (a sketch of this rate is given below). The collision rate, P_coll, is the probability that a planetesimal is accreted by the protoplanet. This probability depends upon the relative velocity between planetesimals and the protoplanet which, in turn, depends upon the planetesimals' eccentricities and inclinations. We denote by e (i) the root mean square eccentricity (inclination) of planetesimals. Planetesimals are found to be in different velocity regimes depending on their random velocities. These regimes are known as the high-, medium- and low-velocity regimes. Each regime is characterised by a range of values of the planetesimals' reduced eccentricities (ẽ = a e/R_H) and inclinations (ĩ = a i/R_H): the high-velocity regime is defined by ẽ, ĩ ≳ 2, the medium-velocity regime by 0.2 ≲ ẽ, ĩ ≲ 2, and the low-velocity regime by ẽ, ĩ ≲ 0.2.
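As a concrete illustration of the particle-in-a-box growth rate, and of the Inaba et al. (2001) combination of the collision rates discussed in the next paragraph, here is a minimal Python sketch. The 2π prefactor of the growth rate and the min/quadrature form of the combined rate are our reading of displayed equations that are not reproduced in the text, so they should be treated as assumptions.

```python
import math

def core_growth_rate(Sigma_m, R_H, P_orbital, P_coll):
    """Particle-in-a-box growth rate of the solid core (Chambers 2006 style):
    dM_core/dt ~ 2*pi * Sigma_m * R_H**2 * P_coll / P_orbital.
    The prefactor is an assumption; the paper's displayed equation is not shown above."""
    return 2.0 * math.pi * Sigma_m * R_H**2 * P_coll / P_orbital

def mean_collision_rate(P_high, P_medium, P_low):
    """Assumed Inaba et al. (2001) interpolation across the velocity regimes:
    P_coll = min(P_medium, (P_high**-2 + P_low**-2)**-0.5)."""
    return min(P_medium, (P_high**-2 + P_low**-2)**-0.5)
```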
This leads to different collision rates (see Inaba et al. 2001 and references therein), in which R is the radius of the protoplanet (in the case of a solid body without a gaseous envelope, R is its geometrical radius), r_m is the radius of the planetesimals, β = ĩ/ẽ, and the functions I_F and I_G are well approximated by analytical expressions valid for 0 < β ≤ 1, which is the range of interest for this work (Chambers 2006). According to Inaba et al. (2001), the mean collision rate can be approximated by a combination of the rates of the three regimes. When an embryo is able to gravitationally bind gas from its surroundings, it becomes more difficult to define its radius, which is no longer just the core radius. For the purpose of the collision rate, the capture radius of the protoplanet should depend upon the mass of the protoplanet, upon the planetesimals' velocity with respect to the protoplanet, upon the density profile of the envelope, ρ(r), and upon the size of the accreted planetesimals (smaller planetesimals are more affected by the gas drag of the envelope and are therefore easier to capture). As in Guilera et al. (2010), here we adopt the prescription of Inaba & Ikoma (2003), in which the capture radius R is obtained by solving an implicit equation involving ρ_m, the planetesimals' bulk density, the gravitational constant G, and the relative velocity v_rel, which depends on e, i and the Keplerian velocity v_k = Ωa. This simple prescription for the capture radius approximates more complex models well (such as the one described in A05), with the advantage that it reduces the computational time. It is clear from the above that the accretion rate of solids depends upon the eccentricities and inclinations of planetesimals, which define their relative velocities with respect to the embryo: the higher the relative velocity, the less likely planetesimals are to be captured by the embryo. The eccentricities and inclinations of planetesimals are affected by the damping produced by the nebular gas drag, by the gravitational stirring of the protoplanet (protoplanet-planetesimal interactions) and, to a lesser extent, by their mutual gravitational interactions (planetesimal-planetesimal interactions): $\frac{de^2}{dt} = \left.\frac{de^2}{dt}\right|_{\rm drag} + \left.\frac{de^2}{dt}\right|_{{\rm VS},M} + \left.\frac{de^2}{dt}\right|_{{\rm VS},m}$, and similarly for i². The first term represents the effect of the nebular gas drag, the second term the viscous stirring produced by an embryo of mass M and the third term the planetesimal-planetesimal viscous stirring. The drag force experienced by a spherical body depends upon its relative velocity with respect to the gas. If we consider that the protoplanetary nebula is mainly composed of H₂ molecules, the mean free path of a gas molecule is λ = 1/(n_H₂ σ_H₂), where n_H₂ is the number density of H₂ molecules and σ_H₂ is the collision cross-section of an H₂ molecule. Depending upon the ratio between the planetesimal's radius and the mean free path of the molecules, three drag regimes can be defined (Rafikov 2004 and references therein). The first two drag regimes are for planetesimals whose radii are larger than the mean free path, r_m ≫ λ. These are the quadratic and the Stokes regimes. To distinguish between them we adopt the criterion proposed by Rafikov (2004) in terms of the molecular Reynolds number Re_mol ≡ v_rel r_m/ν_mol, where ν_mol is the molecular viscosity, ν_mol = λ c_s/3. If Re_mol ≳ 20 we assume that the gas drag is in the quadratic regime, and the differential equations for the evolution of the eccentricity and inclination are those of Adachi et al. (1976), as corrected by Inaba et al. (2001), with ξ ≃ 1.211.
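The selection of the drag regime described above can be summarised in a few lines of Python; the numerical threshold simply restates the criterion quoted in the text (the ≳ 20 cut is implemented here as ≥ 20).

```python
def molecular_viscosity(lam, c_s):
    """nu_mol = lambda * c_s / 3, with lambda the mean free path of the gas molecules."""
    return lam * c_s / 3.0

def drag_regime(r_m, lam, v_rel, c_s):
    """Gas-drag regime of a planetesimal of radius r_m (Rafikov 2004 criteria as quoted
    in the text): Epstein when r_m << lambda; otherwise quadratic if
    Re_mol = v_rel * r_m / nu_mol >= 20, else Stokes."""
    if r_m < lam:
        return "Epstein"
    Re_mol = v_rel * r_m / molecular_viscosity(lam, c_s)
    return "quadratic" if Re_mol >= 20.0 else "Stokes"
```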
The value of η depends upon the distance to the star, the gas density and the pressure gradient, dP/da, where ρ_gas and dP/da are derived from the disc model. The gas drag timescale involves C_D, the drag coefficient, which is of the order of unity. The Stokes regime occurs for r_m ≫ λ and Re_mol < 20, and the corresponding equations for the eccentricities and inclinations of planetesimals are given in Adachi et al. (1976) and Rafikov (2004). Note that in this paper, as for example in Stepinski & Valageas (1996), we define two different Reynolds numbers, Re and Re_mol, and two different viscosities, ν and ν_mol. The macroscopic quantities (Re and ν) are a measure of the fluid dynamics of the disc on a global scale (they are used to compute the evolution of the disc), while the microscopic quantities (Re_mol and ν_mol) characterise the local state of the gas and are used to calculate the eccentricities and inclinations of planetesimals. When r_m ≪ λ, the third regime, the Epstein regime, takes place, and the evolution of the eccentricities and inclinations follows the corresponding equations of Adachi et al. (1976) and Rafikov (2004). In this paper we consider that the population of planetesimals is represented by spherical bodies of a single size. It is worth mentioning that, although we allow for the three drag regimes according to the above-mentioned criteria, for the range of planetesimal sizes considered in this work (0.1-100 km) and for the kind of interaction we are mostly interested in here (protoplanet-planetesimal interactions), planetesimals are found to be mainly in the quadratic regime. Therefore, in most cases, for determining the solids accretion rate the effect of the gas drag is governed by Eqs. (34)-(35). Planetesimals' eccentricities and inclinations are excited by the presence of a protoplanet. Ohtsuki et al. (2002) studied the evolution of the mean square orbital eccentricities and inclinations and introduced semi-analytical formulae to describe the stirring produced by the protoplanet, in which b is the full width of the feeding zone of the protoplanet in terms of Hill radii (here we adopt b ∼ 10), and P_VS and Q_VS are functions of Λ = ĩ(ẽ² + ĩ²)/12. The functions I_PVS(β) and I_QVS(β) can be approximated, for 0 < β ≤ 1, by the expressions given in Chambers (2006). The excitation that the protoplanet produces on the planetesimals weakens as the distance between the protoplanet and the planetesimals increases, i.e. planetesimals further away are less excited. Here we follow the approach of Guilera et al. (2010) and consider that the effective stirring is obtained by modulating the stirring rates with a function f(∆) that ensures that the perturbation of the protoplanet is confined to its neighbourhood, with ∆ = |a_M − a_m|, where a_M is the semi-major axis of the protoplanet and a_m that of the planetesimal. Although the functional form is arbitrary, the scale on which the stirring acts is similar to the one found in N-body calculations (excluding the effects of resonances). For this work we have chosen n = 5, a parameter of f(∆), to limit the perturbation of the planet to its feeding zone. In the future, with the aid of N-body calculations, we plan to derive a better semi-analytical function to characterise the extent of the planetary perturbation. We also consider that planetesimals' eccentricities and inclinations are stirred by their mutual interactions. For a population of planetesimals of equal mass m, the evolution of their eccentricities and inclinations is well described by the formulae of Ohtsuki et al. (2002).
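Putting the pieces of this subsection together, the explicit (out-of-equilibrium) treatment amounts to integrating the mean-square eccentricity and inclination with the three contributions above. The sketch below is schematic: `terms` stands in for the actual Adachi/Ohtsuki expressions and for the f(∆) modulation, which are not reproduced in the text.

```python
def evolve_excitation(e2, i2, terms, dt, n_steps):
    """Explicit integration of the planetesimals' mean-square eccentricity and
    inclination (Eqs. 31-32, schematically). `terms(e2, i2)` must return the pair
    (de2_dt, di2_dt), i.e. the sum of nebular gas-drag damping, protoplanet viscous
    stirring (confined to the planet's neighbourhood through f(Delta)) and
    planetesimal-planetesimal viscous stirring."""
    for _ in range(n_steps):
        de2_dt, di2_dt = terms(e2, i2)
        e2 = max(e2 + de2_dt * dt, 0.0)
        i2 = max(i2 + di2_dt * dt, 0.0)
    return e2, i2
```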
In this case P_VS and Q_VS are evaluated with the reduced eccentricity and inclination relative to the planetesimal mass (i.e. ẽ = 2e/h_m, ĩ = 2i/h_m). Note that there is no dynamical friction term in Eqs. (51)-(52), as it vanishes when a single-mass population of planetesimals is considered (Ohtsuki et al. 2002). Although, strictly speaking, we have two populations of planetesimals (rocky and icy bodies, depending on whether they are inside or beyond the ice line) that have the same size but not the same mass (because of the difference in their density), the region where the two types of planetesimals are present at the same time is very narrow. In this work we neglect changes in the mass of planetesimals due to fragmentation and changes in the surface density owing to planetesimal drift.

Comparison with the previous solids accretion rate

In previous works (A05, M09), the solids accretion rate was treated in a very simple way, leading to an underestimation of the formation timescale of planets. In those works, the prescription for planetesimals' eccentricities and inclinations was the same as in Pollack et al. (1996), where it was assumed that planetesimals' inclinations depend only on planetesimal-planetesimal interactions. Under this assumption, the reduced value of the planetesimals' inclination, ĩ, was prescribed as a fixed value (Eq. 54) set by v_E, the escape velocity from the surface of a planetesimal. This means that the planetesimals' inclination i = ĩ R_H/a is constant, independently of the mass of the planet. On the other hand, eccentricities are assumed to be controlled by both planetesimal and protoplanet stirring, their value being given by Eq. (55). Therefore, if ẽ = 2ĩ the protoplanet is growing according to the runaway regime, because e and i would be independent of the mass of the protoplanet. If ẽ = 2, the eccentricity of planetesimals would be affected by the presence of the protoplanet, so to a certain extent the stirring of the embryo is taken into account. However, this condition corresponds to protoplanet-planetesimal scattering in the shear-dominated regime. Ida & Makino (1993) showed that the shear-dominated regime lasts for only a few thousand years, after which planetesimals are strongly stirred by the protoplanet. During the shear-dominated period, eccentricities and inclinations of planetesimals in the vicinity of the protoplanet remain low. This leads to an accretion scenario which is much faster than that corresponding to the oligarchic regime (the oligarchic regime usually occurs in the dispersion-dominated regime). To show clearly the difference between this, let us call it "quasi-runaway", accretion of solids and the oligarchic regime, we performed two simulations that are identical in all parameters except for the prescription of e and i. The planet is assumed to form in situ at 6 AU. Accreted planetesimals are 100 km in radius. No disc evolution is considered, so simulations are stopped when the planet reaches one Jupiter mass. For the quasi-runaway regime we use Eqs. (55) and (54) to calculate e and i, while for the oligarchic growth we solve the differential equations presented in the previous section. Fig. 2 (left panel) shows the ratio of the oligarchic to quasi-runaway eccentricities and inclinations as a function of the mass of the planet. In the case of the eccentricity, the one corresponding to the oligarchic regime is ∼ 4 times larger than in the quasi-runaway regime. The oligarchic inclination is several tens of times larger than the quasi-runaway one.
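For comparison, the old (A05/M09, Pollack et al. 1996) prescription can be written down in a couple of lines. The explicit expression for ĩ in terms of v_E, and the max(2ĩ, 2) form of ẽ, are our reading of Eqs. (54)-(55), which are not reproduced in the text, so both should be treated as assumptions.

```python
import math

def i_tilde_old(v_E, v_K, a, R_H):
    """Old prescription: inclination set by planetesimal-planetesimal interactions only,
    i ~ v_E / (sqrt(3) v_K) (assumed form), hence independent of the protoplanet mass;
    in reduced units, i_tilde = a * i / R_H."""
    return a * v_E / (math.sqrt(3.0) * v_K) / R_H

def e_tilde_old(i_tilde):
    """Old prescription for the reduced eccentricity (assumed form of Eq. 55):
    e_tilde = max(2 * i_tilde, 2), i.e. either tied to the inclination
    (quasi-runaway growth) or set by shear-dominated protoplanet stirring."""
    return max(2.0 * i_tilde, 2.0)
```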
As a consequence, the accretion rate of solids is much smaller in the oligarchic than in the quasi-runaway regime, because in the oligarchic regime planetesimals are more excited and are therefore more difficult to accrete. A comparison between the two accretion rates of solids is shown in Fig. 2 (right panel). Due to the smaller accretion rate in the oligarchic regime, while in the quasi-runaway regime it takes less than 1 Myr to form the planet, in the oligarchic regime formation takes much longer, 3.25 × 10^7 years. [Fig. 2 caption: Left panel: ratio between the oligarchic and quasi-runaway eccentricities (red) and inclinations (blue) as a function of the planet's mass; the planetesimal excitation is several times higher in the oligarchic growth than in the quasi-runaway growth. Right panel: impact of the planetesimal excitation on the accretion rate of solids; in the quasi-runaway growth (yellow) the accretion rate is much larger than in the oligarchic growth (green).]

Results

In the previous section we introduced the main characteristics of the planet formation model, with special focus on the differences in the physical and numerical model with respect to A05. In the following sections we will concentrate on the impact that the accretion rate of solids has on the formation of giant planets. Here we will consider the formation of isolated planets, i.e. only one planet per disc. The computations of planetary system formation will be presented in other papers (Alibert et al. 2012, Carron et al. 2012). As we described in Sect. 3, the treatment of the evolution of eccentricities and inclinations of planetesimals intends to minimise the assumptions on their values (keeping in mind that it is not an N-body calculation, but that the adopted formulae reproduce N-body results for planetesimal accretion rates and excitation). In order to consider a realistic accretion rate that is not too computationally expensive, Thommes et al. (2003) considered that planetesimals' eccentricities and inclinations can be estimated assuming that the stirring produced by the protoplanet is instantaneously balanced by the gas drag. The approximation to the equilibrium value of e (e_eq) can be derived by equating the stirring timescale and the damping timescale, resulting in $e_{\rm eq} = 1.7\, m^{1/15} M^{1/3} \rho_m^{2/15} / (b^{1/5} C_D^{1/5} \rho_{\rm gas}^{1/5} M_\star^{1/3} a^{1/5})$ (Eq. 56). The equilibrium value of i (i_eq) is assumed to be half the value of e_eq, as this relationship has been shown to be a good approximation in the high-velocity cases (Ohtsuki et al. 2002): i_eq = e_eq/2 (Eq. 57). However, it is not clear whether planetesimals are always in equilibrium, especially if we consider that, depending on their mass, planetesimals are differently affected by gas drag, and that during its formation a planet migrates and the protoplanetary disc evolves. On the other hand, if equilibrium is attained, it is interesting to compare the equilibrium values obtained by solving explicitly Eqs. (31)-(32) with the approximations given by Eqs. (56)-(57) based on timescale considerations. Fortier et al. (2007) and Benvenuto et al. (2009) assume, for simplicity, the equilibrium approximation of Thommes et al. (2003) in their in situ giant planet formation models. However, Chambers (2006) finds that important deviations from equilibrium occur at the very beginning of the growth of the embryo but that eventually equilibrium is attained. In the cases he shows, these deviations do not seem to have a noticeable effect on the final mass of the planet as long as no time restriction on the lifetime of the disc is assumed. On the other hand, Guilera et al. (2010, 2011) do not use any approximation and calculate e and i explicitly by solving the corresponding time-evolution differential equations in their giant planet formation model.
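In code, the equilibrium approximation of Eqs. (56)-(57) reads as follows (Python, cgs units assumed). The placement of the stellar mass M_star in the denominator follows our reconstruction of Eq. (56) above, so it should be treated as an assumption.

```python
def e_equilibrium(m, M, rho_m, b, C_D, rho_gas, M_star, a):
    """Eq. (56): equilibrium rms eccentricity from balancing protoplanet stirring
    against nebular gas drag (Thommes et al. 2003),
    e_eq = 1.7 m^(1/15) M^(1/3) rho_m^(2/15)
           / (b^(1/5) C_D^(1/5) rho_gas^(1/5) M_star^(1/3) a^(1/5))."""
    num = 1.7 * m**(1.0/15) * M**(1.0/3) * rho_m**(2.0/15)
    den = b**(1.0/5) * C_D**(1.0/5) * rho_gas**(1.0/5) * M_star**(1.0/3) * a**(1.0/5)
    return num / den

def i_equilibrium(e_eq):
    """Eq. (57): the equilibrium inclination is taken to be half the eccentricity."""
    return 0.5 * e_eq
```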
No study comparing the equilibrium approximation and the explicit calculation of e and i has been made using a self-consistent giant planet formation model. Moreover, out-of-equilibrium effects can be important, and not only at the beginning of the formation of a planet. When planets migrate they can enter regions where planetesimals are in principle cold (low values of e and i), or already excited if there is another planet growing in the neighbourhood. Depending on the ratio between the stirring and the migration timescales, one can expect cases where the equilibrium approximation may not be accurate. We study the formation of giant planets considering a self-consistent model for the interplay between the disc evolution, accretion by the growing planets and gas-driven migration. In this paper, we focus our study on the formation of single planets, looking in detail at the dependence of planetary growth upon the planetesimal size and at the differences in the final results between the equilibrium approximation and the explicit calculation of e and i, all together with planetary migration. We do this analysis in four steps. First, we consider the formation of a 1 M⊕ planet, neglecting the presence of an envelope, planetary migration and disc evolution (Sect. 4.1). Second, we compute the full formation of a planet (except migration), which is assumed to be over when the gaseous component of the disc disappears; we keep the in situ formation hypothesis to ease the analysis (Sect. 4.2). Third, we allow for gas-driven migration during the formation of the planet (Sect. 4.3). Fourth, we generalise the examples presented in the third step by considering a wide range of plausible protoplanetary discs and initial locations for the embryo, to obtain an overview of all possible outcomes (Sect. 4.4).

Formation of a 1 M⊕ planet

We first analyse the formation of a 1 M⊕ planet (we stop the calculation when this mass is reached), neglecting the presence of an envelope, planetary migration and disc evolution. The point of this case is to focus on the initial stages of the accretion and to analyse the importance of the size of the accreted planetesimals and the implications for the growth of the embryo when considering the equilibrium approximation. It is because we want to show clearly the consequences of these assumptions that we neglect other physical processes that act simultaneously. This means, for example, that for a fixed mass of the embryo and independently of the elapsed time, the state of the protoplanetary disc is the same in terms of surface density (solids and gas). The same applies to the capture radius of the planet. In this example, the embryo is assumed to be located at 6 AU, where the initial solids surface density is Σ_m = 10 g cm^−2 and the density of the nebular gas is ρ_gas = 2.4 × 10^−9 g cm^−3. For this disc the snow line is at 3.5 AU. The initial surface density profile of the disc is given by Eq. (8) with γ = 0.9 and a_C = 127 AU. The initial mass of the embryo is 0.01 M⊕. For the equilibrium approximation we adopt Eqs. (56)-(57) to calculate the values of e and i.
For the explicit calculation of the eccentricity and inclination of planetesimals we solve Eqs. (31)-(32), for which initial conditions for e and i must be given. We consider two possibilities for the initial conditions, which we think bracket the parameter space. On the one hand, we consider that the planetesimal disc is initially cold, and that planetesimals' eccentricities and inclinations are given by the equilibrium value between their mutual stirring and the gas drag. These values can be derived by equating the stirring timescale and the damping timescale, which results in $e^{m-m}_{\rm eq} = 2.31\, m^{4/15} \Sigma^{1/5} a^{1/5} \rho_m^{2/15} / (C_D^{1/5} \rho_{\rm gas}^{1/5} M_\star^{2/5})$ (Eq. 58). This means that we are assuming that the embryo instantaneously appears in an unperturbed planetesimal disc. The other extreme situation is to assume a hot disc, where planetesimals are already excited by the embryo and their initial eccentricities and inclinations are those corresponding to the equilibrium between the stirring of the embryo (0.01 M⊕) and the gas drag, approximated by Eqs. (56)-(57). The initial values of e for these cases are given in Table 2, where e^{m-m}_{eq,0} corresponds to Eq. (58) and e_{eq,0} to Eq. (56). [Table 2: two sets of initial values of the eccentricity for the explicit calculation of its evolution.] The initial values of i are assumed to be e/2 (i_0 = e_0/2). We perform calculations for the equilibrium approximation and for the two sets of initial conditions of the explicit calculation, for four radii of the accreted planetesimals: 100, 10, 1 and 0.1 km. Fig. 3 shows the results of these simulations. The top-left panel depicts the mass growth of the planets as a function of time. The first evident feature of this plot is the difference in the timescale to form a 1 M⊕ planet depending on the size of the accreted planetesimals: while for accreted planetesimals of 100 km it takes ∼ 10^7 yr to grow from 0.01 M⊕ to 1 M⊕, it takes ∼ 10^5 yr in the case of 0.1 km planetesimals. This difference in the growth timescale is entirely due to the fact that large planetesimals are less damped by gas drag than smaller ones. Hence, while they are stirred up by the massive embryos to about the same values, they keep larger e and i (bottom-left panel), making the accretion process much slower. Keep in mind that the disc does not evolve and planets do not migrate, so the differences depend only on the planetesimal size. Note that for a fixed mass of the embryo the accretion rate of solids differs by two orders of magnitude between the two extreme cases (bottom-right panel). If we now turn our attention to the differences in growth rate for a fixed planetesimal size, but for different approaches in the calculation of the eccentricities and inclinations, we see that adopting the equilibrium approximation may lead to significant differences in the mass of the planet, more evident when the accreted planetesimals are small (top-left panel, compare the solid line with the dashed or dotted lines of the same colour). For 100 km and 10 km planetesimals we do not see deviations from equilibrium when the initial conditions for e and i are given by Eqs. (56)-(57) (red and blue dotted lines, top-right panel). If the initial values for e and i are given by Eqs. (58)-(59), the equilibrium values given by Eqs. (56)-(57) are reached in an almost negligible fraction of the formation timescale.
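The "cold disc" initial condition of Eq. (58) can be evaluated analogously (Python, cgs). As with Eq. (56), reading the stellar mass M_star into the denominator is our reconstruction and should be treated as an assumption; Eq. (59) is simply i_0 = e_0/2.

```python
def e_mm_equilibrium(m, Sigma, a, rho_m, C_D, rho_gas, M_star):
    """Eq. (58): equilibrium rms eccentricity set by mutual planetesimal stirring
    balanced by nebular gas drag,
    e_eq(m-m) = 2.31 m^(4/15) Sigma^(1/5) a^(1/5) rho_m^(2/15)
                / (C_D^(1/5) rho_gas^(1/5) M_star^(2/5))."""
    num = 2.31 * m**(4.0/15) * Sigma**(1.0/5) * a**(1.0/5) * rho_m**(2.0/15)
    den = C_D**(1.0/5) * rho_gas**(1.0/5) * M_star**(2.0/5)
    return num / den

def initial_inclination(e0):
    """Eq. (59): initial inclinations are half the initial eccentricities."""
    return 0.5 * e0
```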
We conclude that the equilibrium approximation for e and i gives results that agree nicely with more complex calculations, as long as the planetesimal size is relatively large or the evolution time is sufficiently long. However, as shown in the same figure, the growth of a small planet is very slow when considering such massive planetesimals, and the formation of a gas giant under these conditions is highly compromised. The situation becomes more critical when we consider smaller planetesimals. Deviations from equilibrium are evident regardless of the initial values adopted for e and i, especially for r_m = 0.1 km. Nevertheless, equilibrium is always attained, although the equilibrium values of e and i are lower than those given by Eqs. (56)-(57), especially in the case of the inclination. As we can see from Fig. 4, the approximation β_eq = i/e = 1/2 is not very good for smaller planetesimals. Note that the real equilibrium value of β depends upon the planetesimal size. While β_eq = 1/2 is a good approximation for planetesimals larger than ∼ 1 km, this is not the case for smaller planetesimals, which tend to have inclinations lower than e/2. We find that for small planetesimals the velocity regime is at the limit between the high- and medium-velocity regimes (ẽ, ĩ ≃ 2), and therefore eccentricities are more effectively excited than inclinations (see Ohtsuki et al. 2002). The low values of e, i and β increase the accretion rate, as can be seen in the bottom-right panel of Fig. 3, speeding up the formation of the embryo (relative to the equilibrium approximation case).

Planet formation without migration

In this section we analyse the complete formation of a planet for the same cases as before. Now the planet accretes mass (solids and gas) as long as the disc is still present. This means that we allow for disc evolution (Sect. 2.1), and planet formation is therefore considered to be over when the disc disperses. The presence of the gaseous envelope is taken into account for the calculation of the capture radius. However, we still neglect the migration of the embryo to ease the analysis of the dependence on the planetesimals' e and i. As has already been shown in other works, the presence of the envelope increases the capture radius, speeding up the formation of the planet. The enhancement of the capture cross-section makes the accretion of solids more effective. This effect is more noticeable for small planetesimals. Compared to the cases of Sect. 4.1, the formation timescale of a 1 M⊕ planet can be reduced by up to ∼ 35% (for the smallest planetesimals, r_m = 0.1 km), due only to the enhancement of the capture cross-section. The presence of an atmosphere, even if its mass is negligible compared to the total mass of the planet, has to be considered for embryos as small as 0.1 M⊕, as it plays an important role in the accretion of solids. Fig. 5 is the analogue of Fig. 3 for the complete formation of the planets. As the disc disperses after 6 Myr, in the cases where the accretion of planetesimals is slow (r_m = 100, 10 km) the final masses of the planets are lower than 1 M⊕. The differences in growth observed in the previous section as a function of the planetesimal size have dramatic consequences for the final mass of the planet, which can be ∼ 0.1 M⊕ if the accreted planetesimals have a radius of 100 km, ∼ 0.8 M⊕ for planetesimals of 10 km, ∼ 1200 M⊕ (3.7 Jupiter masses, M_J) for 1 km planetesimals, and 7100 M⊕ (22 M_J) for 0.1 km planetesimals.
These numbers show how differences in the accretion rate of solids (here regulated by the size of the accreted planetesimals, bottom-right panel) impact the final mass of a planet, and they illustrate the non-linear nature of planet formation in the core accretion model: once the critical mass is attained, the very rapid accretion of gas quickly leads to massive planets. The fact that bigger planets form when the accreted planetesimals are small is a consequence of the gas drag, which operates in two ways that combine positively: nebular gas drag is more effective in damping planetesimals' eccentricities and inclinations when planetesimals are small (bottom-left panel), and atmospheric gas drag is able to deflect the trajectories of more distant planetesimals, thereby enlarging the capture radius of the planet. In fact, for the cases of 1 km and 0.1 km accreted planetesimals, embryos grow to become big giants. When a massive solid embryo is formed it triggers the accretion of gas, leading to the formation of a gas giant planet (green and orange lines). Indeed, planets can end up very massive if they enter the runaway phase of gas accretion. As explained before, during the attached phase the accretion of gas is a result of solving the equations presented in Sect. 2.3. The planet remains attached to the disc until its accretion rate exceeds the maximum amount of gas that the disc can deliver. When this condition is no longer satisfied, it goes into the detached phase, during which the maximum accretion rate is given by Eq. (20); we apply no further limitation to the accretion of gas during the detached phase. Planet-disc interactions can lead to eccentric instabilities, which means that the planet can enter regions that are outside its gap and have full access to the gas present in the disc, with no limitation on accretion except for the ability of the disc to supply it. Although it is not clear whether all planets can suffer from an eccentric instability, for the sake of simplicity we assume this is the general situation in our simulations. On the other hand, other works (e.g. Lissauer et al. 2009) include a limitation on the accretion of gas when the planet opens a gap in the disc, as they consider that planets are on circular orbits. Comparatively, this assumption leads to less massive planets. The equilibrium approximation and the explicit calculation of e and i also lead to differences in the final mass of the planet. While for large planetesimals (100 and 10 km) the equilibrium approximation works well, as shown in the previous section, for smaller planetesimals this might not be the case. For r_m = 1 km, the final mass of the planet in the equilibrium approximation is 537 M⊕, whereas with the explicit calculation it is 950 M⊕ for an initially hot disc and 1200 M⊕ for an initially cold disc. For r_m = 0.1 km, the final mass of the planet in the equilibrium approximation is 4745 M⊕, whereas with the explicit calculation it is 6745 M⊕ for an initially hot disc and 7100 M⊕ for an initially cold disc. The differences in mass are the consequence of a timing effect. When the planet grows faster, the damping produced by the nebular gas is more effective, because the gas density in a younger disc is larger than in an older one. Thus, the faster an embryo grows, the more it profits from the nebular gas drag.
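The switch between the attached and detached phases described above boils down to a comparison of two rates, as in this small sketch; Mdot_structure stands for the rate required by the envelope structure (Eqs. 10-12) and Mdot_disc_max for the disc-limited rate of Eq. (20).

```python
def gas_accretion(Mdot_structure, Mdot_disc_max):
    """Effective gas accretion rate: the planet stays attached while the structure
    equations demand less gas than the disc can deliver; otherwise it detaches and
    accretes at the disc-limited rate (no further limitation is applied here)."""
    if Mdot_structure < Mdot_disc_max:
        return Mdot_structure, "attached"
    return Mdot_disc_max, "detached"
```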
Moreover, the cross-over mass (the mass of the core for which the mass of the envelope equals the mass of the core), which in this case is ∼ 30 M⊕, is reached earlier when the accretion rate of solids is larger. When a protoplanet reaches the cross-over mass its growth is dominated by the accretion of gas, which is already in the runaway regime. If the protoplanet starts the accretion of gas when the disc is younger and more massive, the disc provides a larger reservoir of gas, which translates, in the end, into a bigger planet. It is important to remark that Eqs. (56)-(57) are obtained assuming that the stirring timescale is equal to the damping timescale (Thommes et al. 2003), together with the equilibrium approximation (i = 0.5 e). On the other hand, Eqs. (31)-(32) explicitly solve the coupled evolution of e and i given a set of initial conditions. This means that out-of-equilibrium states are allowed (e.g. as seen at the beginning of the calculations), and equilibrium states are reached naturally and without a fixed ratio between e and i. It is clear that the explicit solution of the differential equations is a better physical approach; we have included the calculations with the equilibrium approximation here just for comparison. The results show that the equilibrium approximation works fine for larger planetesimals, but overestimates the excitation of smaller ones, making their accretion less effective. This delays the whole process of planet formation. As the planet is embedded in a disc with a finite (and short) lifetime, this delay impacts the final mass of the planet. When solving explicitly the equations for e and i, initial conditions for these quantities are needed. This is a problem, because we cannot be certain about the state of the disc at the beginning of our calculations, so assumptions on the initial values cannot be avoided. The "initially cold disc" favours the growth of a planet, allowing for high accretion rates in the first thousands of years (Fig. 5, bottom-right panel). In the "initially hot disc" this effect is not present, as departures from equilibrium are smaller and equilibrium is therefore attained faster (keep in mind that the equilibrium attained when solving the differential equations can be different from the equilibrium approximation in the case of small planetesimals). However, the final results do not strongly depend upon the initial choices: the differences in the final mass are 20% and 5% for r_m = 1 km and r_m = 0.1 km, respectively. Given all the uncertainties of the model, these differences are acceptable.

Planet formation with migration

We now complete our analysis by including the migration of the protoplanets. The migration model is the one presented in Dittkrist et al. (in prep.) and Mordasini et al. (2010). Fig. 6 shows the total mass of the planet and its semi-major axis as a function of time for the same cases as in the previous sections. Clearly, the situation is very different from the in situ hypothesis. To start with, in all cases the final location of the planet is far from its initial location. Actually, the three calculations for 1 km planetesimals result in lost planets (planets cross the inner edge of the disc and are considered to fall into the central star). The same is true for the equilibrium approximation with 0.1 km planetesimals.
For the cases where planets are not lost into the star, we find that the final masses for accreted planetesimals of r_m = 100 km and 10 km are independent of the way e and i were computed (∼ 0.2 M⊕ and 0.7 M⊕ respectively). The final locations are also similar for the different treatments of e and i (∼ 1.7 AU for both r_m = 100 km and 10 km). For a fixed planetesimal size, the migration paths for the different assumptions on the eccentricities and inclinations are somewhat shifted in time, but are similar if we consider them as a function of mass (Fig. 7). In the case of r_m = 1 km, protoplanets are lost into the central star when their mass is ∼ 20 M⊕, most of which is in the solid core. At ∼ 8 M⊕ the protoplanet undergoes inward migration in the adiabatic saturated regime (see Paardekooper et al. 2010, 2011, Mordasini et al. 2010). The migration timescale turns out to be shorter than the accretion timescale. The protoplanet covers ∆a ≃ 5 AU in ∼ 7 × 10^5 yr. In this time it doubles its mass which, however, is not large enough for the planet to enter type II migration. Planet migration in general slows down when the protoplanet is able to open a gap in the disc (type II migration), which, as a rule of thumb, happens when the planet mass is ∼ 100 M⊕. This situation is not reached in this case, where accretion is too slow compared to migration, resulting in the loss of the forming planet. For r_m = 0.1 km the differences between the equilibrium approximation and the explicit calculation of e and i are more dramatic: adopting the equilibrium approximation leads to the loss of the planet into the central star (for the same reason as in the previous case, the growth rate is very slow), whereas with the explicit calculation of e and i, although the planet ends up in an orbit very close to the central star (∼ 0.2 AU), the earlier growth of the embryo is fast enough to enable a large accretion of gas. In this case the planet reaches a mass that enables it to switch to type II migration. The planet then decelerates its migration until it stops. The final masses are 13 M_J for an initially hot disc and 15 M_J for an initially cold disc. The fate of an embryo (becoming a big or a low-mass planet, surviving or being lost into the central star), as shown here, depends upon the size of the accreted planetesimals and on the assumptions we adopt to describe their dynamics, as these strongly impact the accretion rate of solids and therefore the whole formation process through the regulation of the growth timescale. Here we have shown the interplay between the evolution of the protoplanetary disc, the growth of the protoplanet and the operation of migration. Note that in all the cases we are considering the same disc, which globally evolves with time in the same way. Differences in the local evolution of the disc arise due to accretion (solids and gas accreted by the planet are removed from the disc) and to the ejection of planetesimals (when the planet is massive enough). The fact that in Fig. 7, for the same mass of a protoplanet, the location in the disc can be different is a consequence of this interplay: protoplanets reach a certain location earlier or later in the evolution of the disc, depending on their growth rate. The state of the disc and the mass of the protoplanet at that moment determine its migration rate. So, independently of what regulates the growth of the planets, these examples show that the planet's growth and migration rates are tightly coupled.
Fig. 8 shows a comparison between the migration timescale (τ_mig = a/|ȧ|) and the protoplanet's growth timescale (τ_growth = M/|Ṁ|) for the four sizes of accreted planetesimals we have considered, solving explicitly the equations for e and i and adopting an initially cold planetesimal disc. For accreted planetesimals of 100 km and 10 km, the jump in the growth rate of the planets at M ≃ 0.1 M⊕ corresponds to the crossing of the ice line. Planetesimals in the inner region (inside the ice line) are denser and the gas drag is less effective on them (see Eqs. (34)-(35) and (37)), therefore their random velocities are higher and the accretion rates are lower. Also, the solids surface density is lower, which contributes to decreasing the accretion rate. The peaks in the migration timescale correspond to changes in the sense of migration (depending on the regime, isothermal versus adiabatic and saturated versus unsaturated, migration can be inward or outward; see Paardekooper et al. 2010, 2011, Mordasini et al. 2010). When the accreted planetesimals have a radius of 1 km, the migration timescale becomes shorter than the growth timescale when the mass of the protoplanet is ∼ 10 M⊕. Planetesimal relative velocities are very high in the neighbourhood of the protoplanet, and accretion becomes more difficult as the protoplanet increases its mass. The protoplanet grows slowly and migrates fast. This situation is never reversed and the protoplanet is lost into the central star. Since the migration rate is high and the accretion rate is not sufficient to counteract it, the protoplanet migrates without limit, almost at constant mass, until it reaches the central star. [Fig. 7 caption: mass versus semi-major axis for the cases shown in Fig. 6 (panels for r_m = 100, 10, 1 and 0.1 km). Note that the accretion rate of solids plays a major role not only in the growth of the planet but also in its migration path.] In the case of accreted planetesimals of 0.1 km, the protoplanet's migration timescale is shorter than its accretion timescale (just as in the former case) in the "critical" mass interval of a few tens to around one hundred Earth masses. However, in this case accretion proceeds fast enough for the protoplanet to start runaway accretion of gas before reaching the inner edge of the disc. Therefore the planet is able to accrete mass very fast (gas accretion dominates the growth of the planet), which means that it becomes massive enough to enter type II migration. The migration rate (in type II, with gap opening) decreases as the protoplanet grows in mass. Therefore, being in the runaway phase is what saves the planet from falling into the star. The situation can be summarised as follows: when the growth of the planet is dominated by the accretion of solids in the oligarchic regime (before gas runaway accretion), the growth timescale is proportional to M^{1/3} (Ida & Makino 1993). However, for planets massive enough to trigger rapid gas accretion, the growth timescale is much shorter. On the other hand, the typical migration timescale scales as M^{−1} in type I, and is independent of the planet's mass in type II, where it is generally much longer than in type I. As a consequence, if a planet migrates in type I and its growth is dominated by the oligarchic regime, it is very likely to be lost into the star. It is only if the planet succeeds in becoming massive enough to start runaway gas accretion, and quasi-simultaneously enters type II migration, that it can brake before reaching the central star.
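The competition just described can be captured with a trivial helper that compares the two timescales plotted in Fig. 8:

```python
def compare_timescales(M, Mdot, a, adot):
    """tau_growth = M/|Mdot| versus tau_mig = a/|adot| (as defined in the text).
    A planet whose migration timescale is the shorter of the two while it is still
    in type I migration is a candidate for being lost into the star."""
    tau_growth = M / abs(Mdot)
    tau_mig = a / abs(adot)
    regime = "migration-dominated" if tau_mig < tau_growth else "growth-dominated"
    return tau_growth, tau_mig, regime
```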
Therefore, there is a critical mass range between ∼ 10 and 100 M⊕: a planet in this mass range is likely to be undergoing inward type I migration together with a decreasing growth rate, which at this stage is dominated by the accretion of solids. Indeed, being massive, the protoplanet excites the random velocities of planetesimals, making their accretion difficult. Therefore the growth of the planet is very slow at this stage, as planetesimals cannot be effectively accreted. Although this is the pathway to the runaway accretion of gas, the transition between being dominated by the accretion of solids and being dominated by the (even fast) accretion of gas is not immediate. On the other hand, inward type I migration in this mass range is still very fast. Therefore, if planets enter the runaway gas phase they will probably grow massive, switch to type II migration and avoid a fatal end in the central star. If they remain very small, type I migration is not very harmful. Finally, if they grow up to around Neptune mass while the disc is still young but do not manage to keep growing fast enough, their migration accelerates and they end up in the central star. [Fig. 8 caption (comparison of the migration and growth timescales for the cases shown in Fig. 6): the peaks in the migration timescale correspond to the changes in the sense of migration of the protoplanet. From the two bottom panels it can be seen that when the protoplanet's mass is ∼ 10 M⊕ both timescales become comparable, and eventually the migration timescale becomes shorter than the accretion timescale, leading to a fast migration of the protoplanet with very little accretion. In the case of 1 km planetesimals the protoplanet's growth is too slow for it to gain enough mass to change the type of migration before it is lost into the star (when its mass is ∼ 30 M⊕). In the case of 0.1 km planetesimals, the protoplanet can grow more massive because its accretion rate is larger than in the previous case; it is then able to open a gap in the disc and migration switches to type II, preventing it from falling into the star.]

Exploring the parameter space of initial conditions

One may wonder whether the examples presented in the previous section are representative of the general case. In order to answer this question we apply our planet formation model to a variety of protoplanetary discs (including different lifetimes, masses, metallicities, etc.) and initial locations for the embryo. In this framework, we performed sets of more than 10 000 simulations. Each set considers a different size for the accreted planetesimals (r_m = 100, 10, 1 and 0.1 km). For these simulations we adopt the disc models described in Sect. 2.1.2. The initial mass of the embryo is, in all cases, 0.01 M⊕. The initial location of the embryo is varied between 0.2 and 30 AU. For an embryo to start at a certain location, we check that the initial mass of solids at its location is greater than the mass of the embryo. We subtract the mass of the embryo from the initial mass of solids of its feeding zone, which means that we assume instantaneous, in situ formation for it. Planets grow by accreting solids and gas, and can migrate in the disc, according to the model presented in the previous sections. The evolution of the eccentricity and inclination of planetesimals is calculated by solving the differential equations (Eqs. (31)-(32)) throughout the disc at every time step. As initial conditions for e and i we assume that they are given by Eqs. (58)-(59), what we have previously called "an initially cold disc".
The parameter space explored for the initial mass and lifetime of protoplanetary discs is schematically shown in Fig. 9. The rectangle surrounded by a black box shows the whole set of initial conditions studied. Note, however, that the distribution of these parameters is likely not uniform, and not all combinations have the same probability of occurrence: long-lived discs, as well as very low-mass and very massive discs, are unlikely. We found that giant planets (by which we mean protoplanets with masses larger than the cross-over mass that survive disc-planet angular momentum exchange and migration) do not form for the initial conditions corresponding to the grey region in Fig. 9. In fact, for a population of 100 km and 10 km planetesimals, giant planets do not form under any of the initial conditions considered in this work. In the case of the 100 km planetesimal population this is due to the fact that the accretion time needed to form a massive solid core is longer than the discs' lifetimes. For 10 km planetesimals, some giant planets would form if migration were not at work (see Sect. 5). However, according to disc-planet interaction models, planets do migrate, and the migration timescale turns out to be much shorter than the accretion timescale. Planets are lost into the central star before they are able to accrete enough solids to trigger the runaway accretion of gas (as seen in the previous section, this would allow them to grow faster and switch from type I to type II migration, thereby preventing their loss into the central star). However, the situation reverses when the accreted planetesimals are smaller. Fig. 9 shows that massive discs favour the formation of giant planets. The coloured regions depict the characteristics of the discs where, for a particular initial position, the embryo succeeded in growing into a giant planet. This does not mean that a giant planet will form at any location of the disc, but that formation is possible at certain locations. The yellow region shows the parameter space of giant planets that succeeded in surviving in the disc while accreting 1 km planetesimals. Clearly, there is a dependence on the disc lifetime: less massive but longer-lived discs favour the formation of giant planets. In red is plotted the situation for the case of r_m = 0.1 km accreted planetesimals. As smaller planetesimals are accreted faster and more efficiently, the disc parameter space that leads to the formation of giant planets is larger. Interestingly, in this case there is no dependence on the lifetime of the disc. When the amount of solids present is large enough, the accretion needed to form a core with the critical mass proceeds so fast that it is always shorter than the lifetime of the discs considered here. If the planetesimal disc is massive, accretion can be fast enough to enable protoplanets to reach the cross-over mass. Massive protoplanets switch from fast type I migration to a much slower type II migration, decelerating and eventually braking before reaching the central star, thus preventing the loss of these planets. Very small mass planets are the most abundant outcome in all simulations (regardless of the size of the accreted planetesimals), and their final location can be anywhere in the disc. Planets more massive than 10 M⊕ generally start their formation beyond the ice line (where solids are more abundant and feeding zones are larger) and, due to migration, they reach the inner regions of the disc. The final location of these planets extends between the inner edge of the disc and 10 AU. Planets in the range of 10^2 to 10^3 M⊕ are the least abundant ones, and their final locations are very different from their initial emplacements.
These planets undergo a lot of migration, and those that remain are the ones that were able to accrete gas fast enough to enter the runaway accretion of gas, which prevented their loss into the central star. Planets in the mass range 10-10^2 M⊕ are those that undergo the largest net displacement from their original location. Most of the surviving planets in this mass range have masses lower than 20 M⊕. These planets, which were not able to reach their cross-over masses while migrating towards the central star, are preserved because the disc dissipated before they could fall into the star. In the case of r_m = 0.1 km, ∼ 18% of the simulations lead to lost planets, 85% of which were not able to reach their cross-over mass. This confirms our analysis of the previous section: planets that are massive enough to undergo rapid inward type I migration (≳ 10 M⊕) but whose growth rate is still dominated by the accretion of solids are likely to be lost into the star. In the previous sections we have noted that the results of giant planet formation depend upon the dynamical model adopted to describe the planetesimals' dynamics. The equilibrium approach has the advantage of being numerically inexpensive. However, it can lead to very different results when compared to the case of explicitly solving the differential equations for e and i. When solving the equations, we also have the problem of setting the initial conditions, which are unknown. To test the importance of these assumptions, we performed 10 000 simulations under three different conditions. Each calculation differs from the others only in the treatment of the planetesimals' e and i. This means that for a given set of initial conditions, we calculate the formation of the planet three times: once assuming the full equilibrium situation for planetesimals, once solving the explicit differential equations using as initial conditions for e and i the equilibrium value obtained when the stirring timescale of the embryo equals the nebular gas drag timescale (we call this scenario the "hot disc"), and finally also solving the differential equations but using as initial conditions the equilibrium value between mutual planetesimal stirring and gas drag damping (we call this scenario the "cold disc"). Fig. 10 shows the fraction of surviving planets with respect to the total number of planets formed (surviving planets plus planets lost into the central star). Accreted planetesimals are 0.1 km in size. [Fig. 10 caption: three different approaches to the treatment of the evolution of the eccentricities and inclinations of planetesimals are considered: equilibrium, and the explicit calculation of the differential equations for "hot" and "cold" initial conditions.] To make this plot we classified the planets according to their final mass into five mass bins. Clearly, in all cases, the most abundant planets are the low-mass ones (< 1 M⊕). The equilibrium scenario, being the one corresponding to the slowest accretion rate, is the one with the most planets in the lowest mass bin (< 1 M⊕). At the other extreme, in the case of the cold initial disc, the accretion of solids at the beginning is more efficient, and embryos grow bigger in a shorter time which, in turn, gives them more chances to continue accreting.
That is why in the case of an initially cold disc planets are more massive. This is evident in the histogram: for a cold initial planetesimal disc, the fraction of planets in each mass bin is the highest (except, of course, in the lowest mass bin). The number of planets in the interval of 10 to 10^2 M⊕ represents around 2% of the surviving planets. Most of them are in the mass range of 10 to 20 M⊕, and just a few are in the range of 20 to 50 M⊕. There are no planets in the range of 50 to 10^2 M⊕, evidencing the dramatic effect of type I migration on these intermediate-mass planets. In the mass interval 10^2-10^3 M⊕ a pronounced decrease in the number of planets can be noticed, independently of the approach used for the planetesimal dynamics. This can be understood as follows: protoplanets with masses greater than 10^2 M⊕ are usually in the runaway phase of gas accretion. This means that hundreds of Earth masses can be accreted in a very short time. Therefore planets in the runaway phase easily grow more massive than Jupiter. Only if the disc dissipates during this accretion process can the final planetary masses lie between the mass of Saturn and a few Jupiter masses. To a lesser extent, planet migration also has some consequences in depleting this bin: although the mass bin in which planet migration is most effective in eliminating planets is the 10-10^2 M⊕ range, it is still very important in this mass regime, eliminating around 25% of these planets (especially those closer to the 10^2 M⊕ edge of the mass interval).

Discussion

The calculations presented in this paper focus on the formation of planets and on how this process is strongly affected by the accretion rate of solids. The accretion rate of solids introduced here is intended to be realistic while remaining computationally tractable. Some other aspects of the model are therefore rather simplified; for example, we consider the formation of a single planet. The application of these calculations to the formation of planetary systems will be presented elsewhere (Alibert et al. 2012). We introduced a semi-analytical description for the eccentricities and inclinations of planetesimals. In this work we calculated explicitly the planetesimals' eccentricities and inclinations, taking into account the stirring by the growing planets, the gas drag from the nebula and the mutual stirring of the planetesimals themselves. The stirring produced by the growing planet excites planetesimals and makes their accretion more difficult as the planet grows. On the other hand, gas drag counteracts the stirring, an effect that is more important for smaller planetesimals. We have considered three approaches to determine the rms eccentricity and inclination of planetesimals:
- an "analytical equilibrium" calculation, as described by Eqs. (56)-(57);
- an out-of-equilibrium calculation, solving the time evolution of the rms eccentricity and inclination of planetesimals (Eqs. (31)-(32)) starting from a "cold" planetesimal disc (their excitation state being the result of planetesimal-planetesimal interactions and gas drag only);
- an out-of-equilibrium calculation starting from a "hot" disc, where planetesimals are already excited by the planetary embryo.
We have shown that these three approaches lead to different accretion rates, the difference depending on the planetesimal size and being more important as a result of the migration and disc evolution feedbacks.
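Schematically, the three treatments differ only in how the planetesimal excitation is obtained at each step, e.g.:

```python
def planetesimal_excitation(mode, e_eq_embryo, e_mm_eq, integrate_explicitly):
    """Dispatch between the three treatments listed above:
    'equilibrium' -> always use the analytic value of Eqs. (56)-(57);
    'hot'         -> integrate Eqs. (31)-(32) starting from the embryo-stirring
                     equilibrium (Eq. 56);
    'cold'        -> integrate Eqs. (31)-(32) starting from the planetesimal-
                     planetesimal equilibrium (Eq. 58).
    Initial inclinations are e/2 in all cases. `integrate_explicitly(e0, i0)` is a
    placeholder for the explicit time integration."""
    if mode == "equilibrium":
        return e_eq_embryo, 0.5 * e_eq_embryo
    e0 = e_eq_embryo if mode == "hot" else e_mm_eq
    return integrate_explicitly(e0, 0.5 * e0)
```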
For large planetesimals, the three approaches lead to similar results, but the accretion rate of solids under this hypothesis is very small, preventing the formation of massive planets. On the other hand, for low-mass planetesimals, the excitation state of planetesimals as derived from the "analytical equilibrium" calculation and the excitation state of planetesimals following the second or third approach, even after a long time (when the equilibrium solution of Eqs. (31)-(32) is reached), are different, resulting in different accretion rates of solids. In particular, the ratio of the inclination to the eccentricity can deviate substantially from the value of 1/2 assumed in the first approach. As a consequence of the size dependence of gas drag, small planetesimals are easier to accrete, leading to a faster formation of planets. We have shown that, in the framework of the model hypotheses outlined in Sect. 2, the formation of planets ranging from a fraction of an Earth mass to several Jupiter masses can only be accomplished under the assumption of a population of small planetesimals or, more generally, if planetesimals are, by any process, maintained in a "cold" dynamical state. Similar conclusions were already reached in other works (e.g. Fortier et al. 2007, 2009), where it was shown that, in order to achieve the formation of the giant planets of the solar system in less than 10 Myr, most of the accreted planetesimals have to be small. Those models, however, assumed in situ formation, and did not consider a consistent calculation of the planet's final mass (the computations were stopped when the masses of the giant planets of the solar system were reached). Although the approach to the calculation of planet formation (regarding the accretion of gas and solids) is similar, our work accounts for the evolution of the protoplanetary disc, the migration of the embryo and a consistent determination of the planet's final mass. In order to compare our results with the afore-mentioned works, we performed simulations where migration is switched off. We calculated the in situ formation of 10 000 planets that accrete planetesimals of 100, 10, 1 and 0.1 km in radius. Indeed, as can be seen from Fig. 11, small planets are the most abundant ones. Larger planets form when there are small planetesimals to accrete (smaller than r_m = 10 km). Planets in the mass range 10-10^2 M⊕ (whose growth is not yet dominated by the accretion of gas) are produced in about the same fraction as giant planets in the mass range > 10^3 M⊕, whose growth is dominated by the runaway accretion of gas. The runaway gas accretion leads to the difficulty of forming planets in the mass range of 10^2-10^3 M⊕: only a fine-tuned timing effect, the gas disc disappearing while the planet is in this mass range, can lead to planets in the Saturn-Jupiter domain. If we focus on the mass bin 10 to 10^2 M⊕, one difference between the migration and in situ scenarios is that, while in the former there are no planets in the mass range between 50 and 100 M⊕ (because they are all engulfed by the central star), in the in situ case planets are distributed over the whole mass spectrum of the bin (although most of them are in the lowest mass range, 10 to 20 M⊕). It is also interesting to compare, for these two scenarios, the mass percentage of solids for planets between 5 and 10^2 M⊕. As can be seen in Table 3, planets that migrate have a more massive core than planets formed in situ. In general, when planets migrate they have access to regions of the disc that have not been depleted of planetesimals.
Therefore, the surface density of solids is higher and so is the accretion rate, which results in the formation of a more massive core. In this paper we have considered that the opacity of the envelope corresponds to the full interstellar medium opacity. However, several works (e.g. Pollack et al. 1996, Podolak 2003, Hubickyj et al. 2005, Movshovitz & Podolak 2008) suggest that the opacity in the planet's envelope should be much smaller, leading to a faster formation. As a complementary result, we computed one in situ population and one population including migration (10 000 planets each), reducing the opacity to 2% of that of the interstellar medium. We found that with a reduced opacity the loss of planets decreases, increasing in every mass bin the number of planets that survive in the protoplanetary disc. Nevertheless, the shape of the mass distribution is very similar to that shown in Fig. 10: the fraction of surviving planets decreases with the mass of the planet, with a minimum in the interval 10²–10³ M⊕ (although with the reduced opacity the fraction of planets in this bin is four times larger than with the full opacity), and rises again in the last mass interval.
Table 3. For planets whose total mass is between 5 and 10² M⊕, this table shows the percentage of the total mass contained in the solid core. A comparison between a migration and a non-migration scenario is also made. In the migration case there are no planets with masses greater than 50 M⊕.
In Table 4 we show the solids mass fraction for planets with masses between 5 and 100 M⊕, considering both the in situ and the migration scenario. In both cases, the fraction of solids is smaller than in the corresponding case calculated with the full opacity. Using the full model (including migration and disc evolution), giant planet formation by accretion of 100 km planetesimals is quite unlikely, if not impossible. In our simulations, to actually form giant planets we had to reduce the planetesimal size to the 0.1–1 km radius range. Such a conclusion raises the question of the most likely planetesimal size during the epoch of planet formation. Recent studies of planetesimal formation, however, reach different conclusions. On the one hand, models that explain the formation of planetesimals by direct collapse in vortices in turbulent regions (e.g. Johansen et al. 2007) predict a fast formation of very big planetesimals (r_m > 100 km). We note, however, that this formation process may not be totally efficient, and only a small fraction of the solids initially present in protoplanetary discs is likely to end up in such big planetesimals. The conclusions of Johansen et al. (2007) are consistent with the results of Morbidelli et al. (2009) on the initial mass function of planetesimals. On the other hand, a recent study by Windmark et al. (2012) shows that direct growth of planetesimals via dust collisions can lead to the formation of 0.1 km planetesimals. In addition, initially small planetesimals provide better matches to the observed size distribution of objects in the asteroid belt and among the TNOs: Weidenschilling (2011) shows that the size distribution currently observed in the asteroid belt in the range of 10 to 100 km can be better explained by an initial population of 0.1 km planetesimals. Kenyon and Bromley (2012) conclude, by combining observations of the hot and cold populations of TNOs with time constraints on their formation process, that TNOs form from a massive disc mainly composed of 1 km planetesimals.
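The preference for small planetesimals that runs through this discussion ultimately reflects the size dependence of gas drag, which can be made explicit with a rough estimate (an illustrative scaling, not a formula taken from the model of this paper). For a planetesimal of radius r_m and bulk density ρ_p moving at a velocity v_rel relative to gas of density ρ_gas, quadratic drag gives
\[
F_{\rm drag} \simeq \tfrac{1}{2}\,C_{\rm D}\,\pi r_m^2\,\rho_{\rm gas}\,v_{\rm rel}^2
\qquad\Longrightarrow\qquad
\tau_{\rm damp} \sim \frac{m\,v_{\rm rel}}{F_{\rm drag}} = \frac{8}{3}\,\frac{\rho_p\,r_m}{C_{\rm D}\,\rho_{\rm gas}\,v_{\rm rel}} \;\propto\; r_m .
\]
All else being equal, going from 100 km to 0.1 km planetesimals therefore shortens the damping timescale by roughly three orders of magnitude, which is why only small planetesimals (or fragments) can be kept in the dynamically cold state favoured by the results discussed above.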
More investigations on the formation of planetesimals, and planetary embryos, are definitely required in order to test the viability of planetary core formation by accretion of low mass planetesimals. We note finally that what is important in the work we have presented now is not the initial mass function of planetesimals, but their mass function at the time of planet formation. The two quantities are likely to differ due to planetesimal-planetesimal collisions, and the resulting mass growth and/or fragmentation. Conclusions In this paper we presented calculations of planetary formation considering the formation of a single planet at each time, and starting with embryos of 0.01 M โŠ• . Our simulations consist in the calculation of the formation of a planet, including its growth in mass by accretion of solids and gas, its migration in the disc, and the evolution of the disc until the gas component of the disc is dispersed. When the nebular gas is gone, simulations are stopped. Therefore, the subsequent growth of the planets, by accretion of residual planetesimals or collisions among embryos if we were considering planetary systems, is not considered. During their formation, the growth of the planets is calculated self-consistently coupling the accretion of solids and gas. The accretion of solids is computed assuming the particle-in-a-box approximation, and computing the excitation state of planetesimals, which in turn regulates to a great extent the accretion rate of the planet. The accretion of gas is computed solving the differential equations that govern the evolution of the structure of the planet. Finally, protoplanets grow in an evolving protoplanetary disc, which density, temperature and pressure is calculated at every time step. The combination of oligarchic growth (for the solid component of the planet) with the migration of the planet has severe consequences for protoplanets that are able to grow up to a few tens Earth masses: these planets tend to collide with the central star (or at least to migrate to the innermost location of the protoplanetary disc). Indeed, planets that are between 10 and 100 Earth masses are usually undergoing very rapid inward type I migration, but are not massive enough to switch to a slower, type II migration. In our simulations, the only surviving planets in the range of 10 to 20 Earth masses correspond to cases where the gas component of the disc dissipates during their growth, preventing them to fall into the star. On the opposite, if the solid core grows fast enough, it enables the accretion of large amounts of gas, when the critical mass is reached. At this point, the runaway of gas ensures an extremely quick growth in the mass of the planet, and the planet migration rate decreases. In the model we have presented in this paper, we have assumed a uniform population of small planetesimals which size remains unchanged during the whole formation of the planet. Most probably the initial population of planetesimals in protoplanetary discs is not uniform in size, but follows a size distribution. We have shown, however, that without small planetesimals giant planet formation is difficult to explain, at least in the way we understand it now. 
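The way the excitation state of the planetesimals enters the growth of the core can be summarized by the schematic particle-in-a-box rate below, written up to order-unity factors; the detailed expressions used in the model include further corrections:
\[
\dot{M}_{\rm core} \;\simeq\; \Sigma_p\,\Omega\,\pi R_{\rm cap}^2 \left(1+\frac{v_{\rm esc}^2}{v_{\rm rel}^2}\right),
\qquad
v_{\rm rel} \simeq \sqrt{e^2+i^2}\;v_{\rm K},
\]
where Σ_p is the surface density of planetesimals, Ω the orbital frequency, R_cap the capture radius and v_esc the escape velocity from it. Small eccentricities and inclinations lower v_rel, enhance the gravitational focusing factor and hence shorten the growth timescale of the core, which is the quantitative reason why dynamically cold (equivalently, strongly damped, i.e. small) planetesimals allow the core to reach the critical mass before type I migration removes the planet or the gas disc disperses.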
However, even with an initial population of small planetesimals, collisions among the planetesimals themselves are likely to be disruptive as soon as their random velocities start to be excited by a planetary embryo. Therefore, it is also unlikely that an initial population of only small planetesimals can, by itself, explain the formation of giant planets. Moreover, even assuming this to be the case, only a small number of planets in the range of several tens of Earth masses to a few Jupiter masses can be formed. In addition, small planetesimals are subject to large radial drift as a result of gas drag. Planetesimal drift can have positive or negative consequences for the formation of planetary systems, as has been shown by Guilera et al. (2010, 2011). Similarly, fragmentation and coagulation can hasten or delay planet formation as a whole: fragments of smaller mass are easier to accrete, but they can also leave a planet's feeding zone very quickly as a result of gas drag. Finally, in a planetary system, fragments that are not accreted by the embryo that generated them can be accreted by another embryo located further in. It is not clear what the possible outcomes of putting together planetesimal drift, fragmentation and many embryos forming in an evolving disc could be. This, however, represents a very important step in the understanding of the first stage of planet formation. Because most of the accretion of solids should, at some point, be dominated by small planetesimals or fragments, our calculations can be understood as a description of that stage. An interesting scenario to analyse, in particular if planetesimals are born massive (Johansen et al. 2007, Morbidelli et al. 2009), would be the following. An initial population dominated by ∼100 km planetesimals would prevent a fast growth of the embryos at the beginning, a time during which the planetary embryos would undergo only a little migration and the protoplanetary disc would evolve, reducing its gas surface density. Fragments (smaller planetesimals), resulting from collisions between big planetesimals, would start to be generated later (the timescale for the fragmentation of 100 km planetesimals affected by the stirring of Moon to Mars mass embryos is of the order of 10⁶ years), thereby accelerating the formation of the embryo in a later stage of the disc evolution. The collisional cascade would probably still produce small fragments fast enough to help the growth of an embryo, even if they leave the feeding zone very fast. Therefore, protoplanets could grow by the accretion of fragments, not necessarily generated by themselves but by another, more distant embryo. Putting together these different processes will give us a better insight into the formation process and would help us to constrain, from planetary formation models, possible initial size distributions for planetesimals. It is also important to mention that we have not considered the possibility of planetesimal-driven migration. Although in general planetesimal-driven migration acts on a longer timescale than type I migration, Ormel et al. (2012) recently found that planetesimal-driven migration can have a mild effect on mid-sized planets in massive planetesimal discs, competing with type I migration. The main conclusion of our work is that the formation of giant planets in the framework of the sequential accretion model requires the presence of unexcited planetesimals.
One obvious way to de-excite planetesimals is through gas drag, but this requires the drag to be efficient, which in turn translates into low mass planetesimals. These planetesimals can be primordial or fragments of originally bigger ones. But, at some point, small boulders are needed to build massive protoplanetary cores before the dissipation of the disc. The combination of migration and oligarchic growth, on the other hand, prevents the formation of intermediate mass planets. However, this result can change when considering the formation of planets in planetary systems, where their gravitational interactions are taken into account. Capture in resonances can prevent planets from colliding with the central star, preserving them in planetary systems. Exploring these effects will be the subject of future work.
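As a closing order-of-magnitude remark (a back-of-the-envelope scaling, not a result extracted from the simulations presented above), the vulnerability of intermediate mass planets can be read off from the standard migration timescales: the linear (type I) estimate scales as
\[
\tau_{\rm I} \;\propto\; \frac{M_*^{2}}{M_p\,\Sigma_{\rm gas}\,a^{2}\,\Omega}\left(\frac{H}{a}\right)^{2} \;\propto\; \frac{1}{M_p},
\qquad\text{while}\qquad
\tau_{\rm II} \;\sim\; \frac{a^{2}}{\nu} \;=\; \frac{1}{\alpha\,\Omega}\left(\frac{a}{H}\right)^{2},
\]
so a growing core migrates ever faster as long as it is unable to open a gap, whereas the gap-opening (type II) regime is governed by the disc viscosity and is essentially independent of the planet mass. Only a planet that crosses the critical mass interval quickly, thanks to runaway gas accretion, or whose disc disperses first, avoids being delivered to the innermost regions of the disc, which is precisely the behaviour summarized in the conclusions above.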
Automorphisms of Torelli groups In this paper, we prove that each automorphism of the Torelli group of a surface is induced by a diffeomorphism of the surface, provided that the surface is a closed, connected, orientable surface of genus at least 3. This result was previously announced by Benson Farb for genus at least 4 and has recently been announced by Benson Farb and Nikolai Ivanov for subgroups of finite index in the Torelli group for surfaces of genus at least 5. This result is also directly analogous to previous results established for classical braid groups, mapping class groups, and surface braid groups. Introduction Let S be a closed, connected, oriented surface of genus g. The extended mapping class group M* of S is defined to be the group of isotopy classes of homeomorphisms S → S. The mapping class group M of S is defined to be the subgroup of index 2 in M* consisting of the isotopy classes of orientation preserving homeomorphisms S → S. The Torelli group T of S is the subgroup of M consisting of the isotopy classes of those orientation preserving homeomorphisms S → S that induce the identity automorphism of the first homology group of S. In this paper, we show that any given automorphism Ψ : T → T is induced by a homeomorphism S → S provided that g > 2. This result is trivially true for g < 2, as T is trivial for g < 2. On the other hand, by a result of Mess, this result is false for g = 2. Indeed, Mess showed that T is a nonabelian free group of infinite rank for g = 2 [Me]. It follows that the automorphism group of T for g = 2 is uncountable, as it contains a copy of the permutation group of an infinite set (namely, of any free basis for T). An automorphism of T induced by a homeomorphism h : S → S depends only upon the isotopy class of h in M*. Since M* is finitely generated [B] and, hence, countable, it follows that only countably many elements of the automorphism group of T are induced by homeomorphisms S → S. In several formal and informal announcements made between October 2001 and March 2002, Benson Farb announced that he had proved that any given automorphism Ψ : T → T is induced by a homeomorphism S → S provided that g > 3 [F]. In his thesis [V1], the second author laid the foundation for proving that this is true for g > 2. This involves three basic steps, two of which were finished in [V1]. In the first step, completed in [V1], Vautaw characterized algebraically certain elements of the Torelli group, namely powers of Dehn twists about separating curves and powers of bounding pair maps. The second step, left unfinished in [V1], is to show that Ψ induces an automorphism Ψ* : C → C, where C is the complex of curves of S. In the last step, completed in [V1], Vautaw used a theorem of Ivanov which states that Ψ* is induced by a homeomorphism H : S → S of the surface S [Iv1], and concluded, under the assumption that it is possible to complete the second step, that the automorphism of the Torelli group induced by H agrees with the given automorphism Ψ (i.e. Ψ([F]) = [H • F • H⁻¹] for every mapping class [F] in T). The purpose of this paper is to complete the second step and, hence, establish the main theorem, Conjecture 1 of [V1]: Theorem 1. Let S be a closed, connected, orientable surface of genus g ≠ 2, and let Ψ : T → T be an automorphism of the Torelli group T of S. Then Ψ is induced by a homeomorphism h : S → S. That is, there exists a homeomorphism h : S → S such that for any mapping class [F] in T, we have Ψ([F]) = [h • F • h⁻¹].
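Since the arguments below repeatedly use the way Dehn twists act on the first homology of S, it may help to recall the standard algebraic picture; these are classical facts recorded here for convenience, not results of the present paper. With the usual sign conventions,
\[
1 \longrightarrow T \longrightarrow M \longrightarrow \mathrm{Sp}(2g,\mathbb{Z}) \longrightarrow 1,
\qquad
(D_c)_*(x) \;=\; x + \langle x,[c]\rangle\,[c] \quad \text{for } x \in H_1(S;\mathbb{Z}),
\]
where the homomorphism M → Sp(2g, Z) records the action on H_1(S; Z) with its algebraic intersection pairing ⟨ , ⟩, T is by definition its kernel, and D_c denotes the Dehn twist about a simple closed curve c. In particular, D_c lies in T exactly when [c] = 0, that is, when c is separating, and D_a • D_b⁻¹ lies in T whenever a and b are disjoint, nonisotopic, homologous nonseparating circles (a bounding pair), since the two transvections then agree on homology. These separating twists and bounding pair maps are the elementary elements of the Torelli group used throughout the paper.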
This result is analogous to results previously established for classical braid groups [DG], mapping class groups [I], [Iv2], [IvM], [K], [M] and surface braid groups [IIvM]. As noted above, it fails for g = 2. Recently, Farb and Ivanov have established the analogue of this result for subgroups of finite index in the Torelli group of surfaces of genus at least 5 [FIv]. They obtain this result as a consequence of an analogue of Ivanov's result on automorphisms of the complex of curves [Iv1] (see also [I] and [K]). This result of Farb and Ivanov subsumes Farb's previous result except for surfaces of genus 4. In this work, they use a characterization of powers of Dehn twists about separating curves and powers of bounding pair maps (Proposition 8 of [FIv]) equivalent to the second author's characterization (Theorem 3.5 of [V1]). Preliminaries We have the following results from the second author's thesis [V1]. Proposition 2.1. Let V_sep denote the set of isotopy classes of essential separating circles on S. Let a be an element of V_sep. Then there exists a unique element c of V_sep such that either Ψ(D_a) = D_c or Ψ(D_a) = D_c⁻¹. Proposition 2.2. Let V_nonsep denote the set of isotopy classes of nonseparating circles on S. Let BP be the set of ordered pairs (a, b) of elements of V_nonsep such that {a, b} is a bounding pair on S. Let (a, b) be an element of BP. Then there exists a unique element (e, f) of BP such that Ψ(D_a • D_b⁻¹) = D_e • D_f⁻¹. As explained in [V1], these results are derived from the characterizations of elementary twists in the second author's thesis, Theorems 3.5, 3.7, and 3.8 of [V1]. These characterizations are analogous to corresponding characterizations used in previous works on automorphisms of subgroups of the mapping class group (e.g. [I], [Iv2], [IvM], [K], [M]). As is true for these previous characterizations, these characterizations are established by using the theory of abelian subgroups of mapping class groups developed in the work of Birman-Lubotzky-McCarthy [BLM] from Thurston's theory of surface diffeomorphisms [FLP]. Two-holed tori We shall use the following generalization of the Centralizer Lemma of [V2]. Proposition 3.1. Let S be a connected, closed, orientable surface of genus g > 2. Let R be an embedded two-holed torus in S such that both components, a and b, of the boundary ∂R of R are essential separating curves on S. Let Ψ : T → T be an automorphism of the Torelli group T of S. Let Γ_R denote the subgroup of T consisting of mapping classes of homeomorphisms S → S which are supported on R. Then the restriction of Ψ to Γ_R is induced by a homeomorphism H : S → S. That is, there exists a homeomorphism H : S → S such that for any mapping class [F] in Γ_R, we have Ψ([F]) = [H • F • H⁻¹]. The proof of this proposition follows the lines of the proof of the Centralizer Lemma of [V2]. From Propositions 2.1 and 2.2, it follows that we may assume without loss of generality that Ψ : T → T restricts to an automorphism Ψ : Γ_R → Γ_R. The result is established by relating the restriction Ψ : Γ_R → Γ_R to an automorphism χ : π_1(T, x) → π_1(T, x) of the fundamental group π_1(T, x) of a one-holed torus T. The one-holed torus T corresponds to the quotient of R obtained by collapsing one of the boundary components, a, of R to a basepoint x for T. The relevant relationship between Γ_R and π_1(T, x) is obtained from the long-exact homotopy sequence of an appropriate fibration.
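The fibration alluded to here is of the same kind as the one underlying the classical Birman exact sequence; the mechanism can be sketched as follows, with the caveat that this is only a sketch and that the precise set-up used in [V2] may differ in its boundary conditions. Evaluation of diffeomorphisms at the basepoint x gives a fibration
\[
\mathrm{Diff}(T) \xrightarrow{\ \mathrm{ev}_x\ } T, \qquad \text{with fibre } \mathrm{Diff}(T, x),
\]
and its long exact homotopy sequence, combined with the contractibility of the identity components of these diffeomorphism groups (the result of [EE] invoked just below), yields the point-pushing sequence 1 → π_1(T, x) → M(T, x) → M(T) → 1, where M(T, x) and M(T) denote the mapping class groups of the one-holed torus with and without the marked point. Mapping classes supported on R and acting trivially on homology are compared in this way with elements of π_1(T, x), which is how an automorphism χ of the fundamental group is extracted from the restriction of Ψ to Γ_R.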
One of the key ingredients in the argument establishing this relationship is the well-known contractibility of the identity component of the group of diffeomorphisms of surfaces of negative euler characteristic [EE]. It is a well-known classical fact that an automorphism of the fundamental group ฯ€ 1 (T, x) of a one-holed torus T is induced by a diffeomorphism (T, x) if and only if it preserves the peripheral structure of ฯ€ 1 (T, x) [ZVoC]. As in the proof of the Centralizer Lemma of [V2], we show that this peripheral structure corresponds to Dehn twists about separating curves of genus 1 in the two-holed torus R. Together with Propositions 2.1 and 2.2, this allows us to appeal to the aforementioned classical fact to obtain a homeomorphism H : (S, R, a, b) โ†’ (S, R, a, b) whose restriction H : (R, a, b) โ†’ (R, a, b) descends to a homeomorphism (T, x) โ†’ (T, x) inducing the automorphism ฯ‡ : ฯ€ 1 (T, x) โ†’ ฯ€ 1 (T, x). We then show, as in the proof of the Centralizer Lemma of [V2], using Propositions 2.1 and 2.2, that H satisfies the conclusion on Proposition 3.1. We note that a similar argument relating certain subgroups of mapping class groups to fundamental groups of punctured surfaces appears in the work of Irmak-Ivanov-McCarthy on automorphisms of surface braid groups, in which they prove the analogue of Theorem for certain surface braid groups [IIvM]. Orientation type In this section, we show how to ascribe an orientation type, วซ, to any given automorphism ฮจ : T โ†’ T . This is achieved by studying the action of ฮจ on the separating twists in T . We show that either (i) ฮจ sends right separating twists to right separating twists or (ii) ฮจ sends right separating twists to left separating twists. If (i) holds, we say that ฮจ is orientation-preserving and let วซ = 1. If (ii) holds, we say that ฮจ is orientation-reversing and set วซ = โˆ’1. The proof of this result uses the following connectivity result. Proposition 4.1. Let C sep be the subcomplex of C whose simplices are those simplices of C all of whose vertices are isotopy classes of essential separating circles on S. Then C sep is connected. Proof. Let a and b be elements of C sep . Choose circles A and B in the isotopy classes a and b so that A and B intersect transversely and the geometric intersection i(a, b) of a and b is equal to the number of points of intersection of A and B. Since A is a separating circle on S and the genus of S is at least 3, A separates S into two one-holed surfaces with boundary A, at least one of which has genus at least 2. Let R be one of these two-holed surfaces with boundary A and genus at least 2. Our goal is to find a path in C s ep from a to b. Since A and B are separating circles, i(a, b) is even. We shall prove the result by induction on i(a, b). We may assume that i(a, b) > 0. Simple arguments show that an appropriate path exists if i(a, b) is equal to 2 or 4. Hence, it suffices to find a separating circle C such that i(a, c) โ‰ค 4 and i(c, b) < i(a, b). Roughly speaking, such a circle C can be constructed as follows. Surger A into two essential simple closed curves A 1 and A 2 in the interior of R by using one of the component arcs J of B โˆฉ R, so that A โˆช A 1 โˆช A 2 is the boundary of a tubular neighborhood N of A โˆช J in R. If either A 1 or A 2 is separating, then we choose C as either A 1 or A 2 , whichever is separating. 
Otherwise, by tubing A_1 to A_2 along an appropriately chosen arc K joining A_1 to A_2 and intersecting A twice, we may construct an essential separating circle C such that i(a, c) ≤ 4 and i(c, b) < i(a, b). Remark 4.2. Farb and Ivanov announced this result in their recent paper (see Theorem 4 of [FIv]). They point out that they deduce this result as a consequence of a stronger result regarding separating circles of genus 1. We note that this stronger result follows from Proposition 4.1. Indeed, suppose that a_1, ..., a_{n+1} is a sequence of isotopy classes of essential separating circles, a_1 has genus 1, i(a_j, a_{j+1}) = 0 for 1 ≤ j ≤ n, and a_{n+1} has genus 1. Suppose that a_k has genus > 1 for some integer k with 1 < k < n + 1. Represent a_{k−1}, a_k and a_{k+1} by essential separating circles A_{k−1}, A_k and A_{k+1} on S with A_{k−1} and A_{k+1} disjoint from A_k. Note that A_k separates S into two one-holed surfaces, P and Q, each of genus > 1. We may assume, without loss of generality, that A_{k−1} is in the interior of P. Either (i) A_{k+1} is in the interior of P or (ii) A_{k+1} is in the interior of Q. In case (i), we may replace a_k by the isotopy class of some separating circle of genus 1 in Q. In case (ii), we may delete a_k. In either case, we obtain a path in C_sep from a_1 to a_{n+1} with fewer vertices of genus > 1 than the chosen path a_1, ..., a_{n+1}. Hence, the stronger result follows by induction on the number of vertices of genus > 1 in the chosen path. Using Proposition 4.1, we may now establish the main result of this section. Proposition 4.3. There exists a constant ε ∈ {1, −1} such that, for every essential separating circle a on S, Ψ(D_a) = D_c^ε for some essential separating circle c on S. Proof. Let a be an essential separating circle on S. By Proposition 2.1, either Ψ(D_a) = D_c or Ψ(D_a) = D_c⁻¹, for some essential separating circle c on S. Suppose, for instance, that Ψ(D_a) = D_c. Then Ψ(D_a) is a right twist. Let b be an essential separating circle on S. By Proposition 4.1, there exists a sequence a_i, 1 ≤ i ≤ n, of essential separating circles on S such that (i) a_1 = a, (ii) a_i is disjoint from a_{i+1} for 1 ≤ i < n, and (iii) a_n is isotopic to b. We may assume, furthermore, that (iv) a_i is not isotopic to a_{i+1} for 1 ≤ i < n. Let i be an integer such that 1 ≤ i < n. Then, by the above assumptions, a_i ∪ a_{i+1} is the boundary of a compact connected subsurface R_i of S of genus g_i ≥ 1. Clearly, by appropriately "enlarging" the sequence a_i, 1 ≤ i ≤ n, we may assume, furthermore, that (v) g_i = 1 for 1 ≤ i < n. Let D_i denote the twist about a_i, so that D_1 = D_a and D_n = D_b. Suppose that (i)-(v) hold and let i be an integer such that 1 ≤ i < n. Then we may apply Proposition 3.1 to the embedded torus with two holes R_i in S, since both boundary components, a_i and a_{i+1}, of this torus are essential separating circles on S. It follows, therefore, that the restriction of Ψ to Γ(R_i) is induced by a homeomorphism h : S → S. Note that D_i and D_{i+1} are both in Γ(R_i). Hence, Ψ(D_i) = h_*(D_i) and Ψ(D_{i+1}) = h_*(D_{i+1}). If h is orientation reversing, then h_*(D_i) and h_*(D_{i+1}) are both left twists. If h is orientation preserving, then h_*(D_i) and h_*(D_{i+1}) are both right twists. Hence, Ψ(D_i) and Ψ(D_{i+1}) are either both left twists or both right twists. It follows that the images Ψ(D_j), 1 ≤ j ≤ n, are either all left twists or all right twists. Since Ψ(D_a) is a right twist, Ψ(D_1) is a right twist. Hence, Ψ(D_n) is a right twist. That is to say, Ψ(D_b) is a right twist.
In other words, Ψ(D_b) = D_d for some essential separating circle d on S. Hence, if Ψ(D_a) = D_c for some essential separating circle c on S, then the result holds with ε = 1. Likewise, if Ψ(D_a) = D_c⁻¹ for some essential separating circle c on S, then the result holds with ε = −1. Henceforth, we say that an automorphism Ψ : T → T is orientation-reversing if ε = −1 and orientation-preserving if ε = 1, where ε is the constant in Proposition 4.3. In other words, an automorphism Ψ : T → T is orientation-reversing (orientation-preserving) if and only if it sends right twists about essential separating circles on S to left (right) twists about essential separating circles on S. We shall refer to the constant ε as the orientation type of Ψ. We have shown that each automorphism of the Torelli group is either orientation-preserving or orientation-reversing. In other words, there are no "hybrid" automorphisms of the Torelli group, sending some right twists to right twists and other right twists to left twists. Induced automorphisms of the complex of separating curves In this section, we show how any given automorphism of the Torelli group induces an automorphism of the complex of separating curves on S. By Proposition 4.3, it follows that each automorphism of T induces a permutation of the set of isotopy classes of essential separating circles on S. Proposition 5.1. Let Ψ : T → T be an automorphism. Let ε be the orientation type of Ψ. Let V_sep denote the set of isotopy classes of essential separating circles on S. Then there exists a unique function Ψ* : V_sep → V_sep such that Ψ(D_a) = D_c^ε, where c = Ψ*(a), for every element a of V_sep. Since two essential separating circles have trivial geometric intersection if and only if the twists about the circles commute, it follows that the induced map Ψ* : V_sep → V_sep is simplicial. Proposition 5.2. Let Ψ : T → T be an automorphism of T. Let Ψ* : V_sep → V_sep be the unique function of Proposition 5.1. Then Ψ* : V_sep → V_sep extends to a simplicial automorphism Ψ* : C_sep → C_sep of C_sep. Nonseparating circles In this section, we show how Ψ induces a map of the set V_nonsep of isotopy classes of nonseparating circles on S. Suppose that (a, b) is an ordered bounding pair on S. That is to say, a and b are elements of V_nonsep represented by disjoint nonseparating circles, A and B, on S such that the complement of A ∪ B in S is disconnected. Note that A ∪ B is the boundary of exactly two embedded surfaces in S, L and R. Moreover, L and R are both connected and L ∩ R = A ∪ B. By Proposition 2.2, it follows that to each ordered bounding pair (a, b) there corresponds a unique ordered bounding pair (c, d) such that Ψ(D_a • D_b⁻¹) = (D_c • D_d⁻¹)^ε. Proposition 6.1. There exists a unique function Ψ* : BP → BP such that, for every ordered bounding pair (a, b), Ψ(D_a • D_b⁻¹) = (D_c • D_d⁻¹)^ε, where (c, d) = Ψ*(a, b) and ε is the orientation type of Ψ. We shall obtain the induced map Ψ* : V_nonsep → V_nonsep by showing that the first coordinate c of Ψ*(a, b) does not depend upon the second coordinate b of (a, b). To this end, let a be an element of V_nonsep and let BP_a be the set of elements b of V_nonsep such that (a, b) is an ordered bounding pair on S. Suppose that b and c are distinct elements of BP_a. We say that (a, b, c) is a k-joint based at a if the geometric intersection number of b and c is equal to k. Since (a, b) and (a, c) are ordered bounding pairs on S, we may orient any representative circles, A, B, and C, of a, b, and c so that they are homologous on S. Since the oriented representative circles B and C of b and c are homologous, i(b, c) is even. The following result will help us to establish the desired invariance of the first coordinate of Ψ*(a, b). Proposition 6.2. Let (a, b, c) be a k-joint based at a, let (e, f) = Ψ*(a, b) and let (g, h) = Ψ*(a, c). Suppose that j_1 and j_2 are distinct elements of V_nonsep such that i(j_i, a) = i(j_i, b) = i(j_i, c) = 1 for i = 1, 2 (that is, j_1 and j_2 are common duals of the joint (a, b, c)) and i(j_1, j_2) = 0. Then e = g. Proof.
Note that a, b, c, j 1 and j 2 may be represented by transverse nonisotopic nonseparating circles A, B, C, J 1 , and J 2 on S such that A is disjoint from B and C; and J 1 and J 2 intersect each of B and C in exactly one point. Let P i denote a regular neighborhood of the graph AโˆชJ i โˆชB, i = 1, 2. Let Q i denote a regular neighborhood of the graph A โˆช J i โˆช C, i = 1, 2. Note that P 1 and Q 1 are embedded two-holed tori on S. Since (a, b) and (a, c) are bounding pairs, it follows that each of the boundary components of P i and Q i are essential separating circles on S. Hence, by Proposition 3.1, there exists a homeomorphism G i : i ] for every homeomorphism F : S โ†’ S which is supported on P i and acts trivially on the first homology of S. Likewise, by Proposition 3.1, there exists a homeomorphism H i : i ] for every homeomorphism F : S โ†’ S which is supported on Q i and acts trivially on the first homology of S. Let R i be a regular neighborhood on S of the graph A โˆช J i such that R i is contained in the interiors of both P i and Q i . Note that R i is a one-holed torus. Let D i be the boundary of R i . Since the genus of S is not equal to one, D i is an essential separating circle on S. Let U = D 1 and u be the isotopy class of D 1 on S. By Propositions 2.1 and 4.3 , Note that D u is represented by a twist map D U supported on P 1 . This implies that x = v and ฮฑ = วซ. Hence, v is represented by the circle X and the homeomorphism G 1 has the same orientation type as the automorphism ฮจ. Likewise, if Y = H 1 (U), then v is represented by the circle Y and the homeomorphism H 1 has the same orientation type as the automorphism ฮจ. Since v is represented by both X and Y , X is isotopic to Y . Hence, we may assume that X = Y . Let R โ€ฒ 1 = G 1 (R 1 ). Note that R โ€ฒ 1 is an embedded one-holed torus with boundary X. Since the genus of S is at least 3, R โ€ฒ 1 is the unique embedded one-holed torus in S with boundary where D A is a twist map supported on P 1 and D B is a twist map supported on P 1 . Hence, by the preceding observations, It follows that e = p and f = q. Since e = p, it follows that e is represented by the circle G 1 (A). Note that G 1 (A) is contained in the interior of the embedded torus R โ€ฒ 1 . We conclude that e is represented by the circle G 1 (A) contained in the interior of the embedded torus R โ€ฒ 1 . Likewise, g is represented by the circle H 1 (A) and H 1 (A) is contained in the interior of the embedded torus R 1 ". Since R 1 " = R โ€ฒ 1 , H 1 (A) is contained in the interior of the embedded torus R โ€ฒ 1 . It follows that e and g are both represented by circles contained in the interior of the unique one-holed torus R โ€ฒ 1 bounded by G 1 (D 1 ). Likewise, e and g are both represented by circles contained in the interiors of the unique one-hole torus R โ€ฒ 2 bounded by G 2 (D 2 ). Suppose that i(e 1 , e 2 ) = 0. By Proposition 5.2, it follows that i(d 1 , d 2 ) = 0. Hence, we may assume, by an isotopy of D 2 , that D 1 and D 2 are disjoint. Since the one-holed tori, R 1 and R 2 , both contain A, it follows that D 1 is isotopic to D 2 . Hence, by a further isotopy of D 2 , we may assume that D 1 = D 2 and, hence, R 1 = R 2 . Since J 1 and J 2 are disjoint nonseparating circles, i(j 1 , j 2 ) = 0. Hence, J 1 and J 2 are isotopic to disjoint nonseparating circles in the one-holed torus R 1 . Since any two disjoint nonseparating circles in a one-holed torus are isotopic, it follows that j 1 = j 2 . This contradicts our assumptions. 
Hence, i(e 1 , e 2 ) is not equal to 0. By an isotopy of G 2 (D 2 ) we may assume that G 1 (D 1 ) and G 2 (D 2 ) are transverse essential separating circles with minimal intersection. By the preceding observations, i(e, e i ) = i(g, e i ) = 0, i = 1, 2. Hence, we may represent e and g by nonseparating circles E and G on S such that E and G are each disjoint from both G 1 (D 1 ) and G 2 (D 2 ). It follows that E and G are both contained in the interior of the unique one-holed torus, R โ€ฒ 1 , with boundary G 1 (D 1 ), and both miss the intersection of G 2 (D 2 ) with R โ€ฒ 1 . SInce i(e 1 , e 2 ) is not equal to 0, the intersection of G 2 (D 2 ) with R โ€ฒ 1 consists of at least one essential properly embedded arc, K. Since any two nonseparating circles in a given one-holed torus missing a given essential properly embedded arc in the given one-holed torus are isotopic in the given one-holed torus, it follows that E is isotopic to G on S. Proof. Note that a, b, and c may be represented by disjoint nonisotopic nonseparating circles A, B, and C on S. Since (a, c) is a bounding pair on S, A โˆช C is the boundary of exactly two embedded surfaces on S, L and R. Note that L and R are both connected surfaces and L โˆฉ R = A โˆช C. Since B is disjoint from A and C, B is contained in one of the two components of the complement of A โˆช C in S. That is to say, either B is contained in the interior of L or B is contained in the interior of R. We may assume that B is contained in the interior of L. Since (a, b) is a bounding pair on S, it follows that L is the union of two embedded surfaces, P and Q, on S such that AโˆชB is the boundary of P , B โˆช C is the boundary of Q, and P โˆฉ Q = B. Note that the embedded surfaces, P , Q, and R, have disjoint interiors, P โˆฉ Q = B, Q โˆฉ R = C, R โˆฉ P = A, P โˆฉ Q โˆฉ R is empty, and P โˆชQโˆชR = S. Moreover, P , Q, and R each have exactly two boundary components; โˆ‚P = A โˆช B, โˆ‚Q = B โˆช C, and โˆ‚R = C โˆช A. Since a, b, and c are distinct isotopy classes, P , Q, and R each have positive genus. Since B is not isotopic to C, the genus of Q is positive. Hence, there exists a pair of disjoint, nonseparating circles D 1 and D 2 on S such that Q is the union of two embedded connected surfaces, Q L and Q R , on S, where the interiors of Q L and Q R are disjoint; the boundary of Q L is equal to B โˆช J 1 โˆช J 2 ; and the boundary of Q R is equal to J 1 โˆช J 2 โˆช C. Choose distinct points, p 1 and p 2 , on A; distinct points, q 1 and q 2 on B; a point r 1 on D 1 ; a point r 2 on D 2 ; and distinct points, s 1 and s 2 on C. Choose disjoint properly embedded arcs, P 1 and P 2 , on L such that P 1 joins p 1 to q 1 , and P 2 joins p 2 to q 2 . Choose disjoint properly embedded arcs, M 1 and M 2 , on Q L such that M 1 joins q 1 to r 1 ; and M 2 joins q 2 to r 2 . Choose disjoint properly embedded arcs, N 1 and N 2 , on Q R such that N 1 joins r 1 to s 1 ; and N 2 joins r 2 to s 2 . Choose disjoint properly embedded arcs, R 1 and R 2 , on R such that R 1 joins s 1 to p 1 and R 2 joins s 2 to p 2 . Let Note that J 1 and J 2 are disjoint circles on S; each of these two circles is transverse to A, B, D 1 , D 2 , and C; and each of these two circles intersects each of the circles, A, B, and C, in exactly one point. It follows that each of these two circles are nonseparating circles on S. Let j i denote the isotopy class of J i on S, i = 1, 2. Note that each of the circles, J i , is transverse to D 1 . 
J 1 intersects D 1 in exactly one point, whereas J 2 is disjoint from D 1 . It follows that the geometric intersection of J 1 with D 1 is equal to 1, whereas the geometric intersection of J 2 with D 1 is equal to 0. It follows that j 1 and j 2 are distinct elements of V nonsep with i(j i , a) = i(j i , b) = i(j i , c) = 1, i = 1, 2, and i(j 1 , j 2 ) = 0. Hence, by Proposition 6.2, e = g. Remark 6.4. If the genus of S is at least 5, it can be shown that BP a is the vertex set of a connected subcomplex of C. In other words, any two elements b and c of BP a are connected by a sequence Hence, Proposition 6.3 is sufficient for the purpose of obtaining the desired induced map ฮจ * : V nonsep โ†’ V nonsep when g > 4. One of the main reasons why this result is so much more difficult to achieve when g is equal to 3, is that this connectivity result is no longer true when g = 3. (We do not know whether it is true when g = 4.) Indeed, when g = 3, there are no edges in C with both vertices in BP a . One of the key ideas for obtaining the main result of this paper for g > 2 is to include with the edges of C which join elements of BP a more general connections between elements of BP a which leave invariant the first coordinate of ฮจ * (a, b) and provide paths between any two elements of BP a . The first such connection is provided by our next result. Proof. We may represent a, b, and c by circles, A, B, and C, on S such that A is disjoint from B and C; B is transverse to C; and B and C intersect in exactly two points, x and y. Since (a, b) and (a, c) are bounding pairs, we may orient A, B, and C so that the the oriented circles, A, B, and C are homologous on S. Since B and C are homologous, the algebraic (i.e. homological) intersection of B and C is equal to 0. It follows that the signs of intersection of B with C at x and y are opposite. We assume that the sign of intersection of B with C at x is positive and the sign of intersection of B with C at y is negative. The above considerations imply that a regular neighborhood P on S of the graph B โˆช C is a four-holed sphere. We may assume that A is contained in the complement of P . Since B and C are transverse circles on S with geometric intersection 2, it follows that each of the four boundary components of P is an essential circle on S. Indeed, suppose that one of the four boundary components of P bounds a disc D on P . Then P โˆช D is an embedded pair of pants with B and C in the interior of P โˆช D. Note that any two circles in a pair of pants have geometric intersection 0. Indeed, any circle in a pair of pants on S is isotopic to one of the three boundary components on the pair of pants [FLP]. Hence, B and C have geometric intersection 0. This is a contradiction. Hence, each of the four boundary components of P is an essential circle on S. The two points, x and y, divide the oriented circle B into two oriented arcs, B 1 and B 2 , where B 1 begins at x and ends at y, B 2 begins at y and ends at x, B 1 โˆช B 2 = B, and B 1 โˆฉ B 2 = {x, y}. Likewise, the two points, x and y divide the oriented circle C into two oriented arcs, C 1 and C 2 , where C 1 begins at x and ends at y, C 2 begins at y and ends at x, C 1 โˆช C 2 = C, and C 1 โˆฉ C 2 = {x, y}. Let i and j be integers with 1 โ‰ค i, j โ‰ค 2. Note that B i โˆช C j is a circle in the interior of P . There is a unique embedded annulus A (i,j) in P such that the boundary of A (i,j) consists of B i โˆช C j and a component D (i,j) of the boundary of P . 
P is equal to the union of the four annuli, A (i,j) ; these annuli have disjoint interiors; and these annuli meet exactly along those edges of the graph B โˆช C which they have in common. Orient the boundary of P so that P is on the right of its boundary. With this orientation, the cycle D (1,1) is homologous to B 1 โˆ’ C 1 ; the cycle D (1,2) is homologous to โˆ’C 2 โˆ’ B 1 ; the cycle D (2,2) is homologous to โˆ’B 2 + C 2 ; the cycle D (2,1) is homologous to C 1 + B 2 ; the cycle B is homologous to B 1 + B 2 ; and the cycle C is homologous to C 1 + C 2 . By our assumptions, B is homologous to C. This implies that B 1 + B 2 is homologous to C 1 + C 2 ; and B 1 โˆ’ C 1 is homologous to โˆ’B 2 + C 2 . Hence, the cycles D (1,1) and D (2,2) are homologous. Note that the cycle D (1,2) + D (2,1) is homologous to an oriented circle E contained in the interior of P . E is obtained by tubing the oriented circle D (1,2) to the oriented circle D (2,1) along a properly embedded arc J in P joining a point on the boundary component D (1,2) of P to a point on the boundary component D (2,1) of P . Suppose that D (1,1) is not homologous to 0. Then, since the homology of an oriented surface is torsion free, 2D (1,1) is not homologous to 0. Hence, E is not homologous to 0. This implies that E is a nonseparating circle on S. It follows that the homology class of E is not divisible by 2. Indeed, E is not divisible by any integer greater than 1. On the other hand, E is homologous to 2D (1,1) . This is a contradiction. Hence, D (1,1) is homologous to 0. Note that B is homologous to D (1,1) + D (2,1) . Since D (1,1) is homologous to 0, it follows that B is homologous to D (2,1) . Since A is homologous to B, it follows that A is homologous to D (2,1) . Since A is nonseparating, the homology class of A is not 0. Hence, the homology class of D (2,1) is not zero. This implies that D (2,1) is an essential circle on S. Let d denote the isotopy class of the essential circle D (2,1) on S. Suppose that d is not equal to a. Since P is disjoint from A and D (2,1) is a boundary component of P , D (2,1) is disjoint from A. Since D (2,1) is disjoint from A and homologous to A, it follows that (a, d) is a bounding pair on S. Note that D (2,1) is also disjoint from B and C, since P is a regular neighborhood of B โˆช C. Since i(c, d) = 0 and i(c, b) = 2, d is not equal to b . Hence, (a, b, d) is a 0-joint based at a. Hence, by Proposition 6.3, the first coordinate of ฮจ * (a, b) is equal to the first coordinate of ฮจ * (a, d). Similarly, (a, d, c) is a 0-joint based at a and, hence, the first coordinate of ฮจ * (a, d) is equal to the first coordinate of ฮจ * (a, c). Hence, the first coordinate of ฮจ * (a, b) is equal to the first coordinate of ฮจ * (a, c). In other words, e = g, as desired. Hence, we may assume that d is equal to a. It follows that A is isotopic to D (2,1) . Since A and D (2,1) are disjoint essential circles on S and the genus of S is not equal to 1, it follows that A โˆช D (2,1) is the boundary of exactly one embedded annulus L on S. Since D (2,1) is a boundary component of P and A is in the complement of P , it follows that L โˆฉ P = D (2,1) . Since D (1,2) + D (2,1) is homologous to E and E is homologous to 0, it follows that D (1,2) is homologous to โˆ’D (2,1) . Since D (2,1) is homologous to A, it follows that D (1,2) is homologous to โˆ’A. As before, for A and D (2,1) , it follows that A โˆช D (1,2) is the boundary of exactly one embedded annulus R on S. 
Since A is a nonseparating circle on S, it follows that A โˆช D 1,2) is the boundary of exactly two embedded surfaces on S. The annulus R is one of these two surfaces. Let Q be the other of these two surfaces. Note that R and Q are both connected. Hence, either P โˆช L is contained in R or P โˆช L is contained in Q. Suppose that P โˆช L is contained in R. Then the essential circles B and C on S are both contained in the interior of the annulus R. Since i(b, c) = 2, this is a contradiction. Indeed, any two circles in an annulus have zero geometric intersection number. It follows that P โˆช L is contained in Q. This implies that the four-holed sphere P โˆชL and the annulus R have disjoint interiors and (P โˆช L) โˆฉ R = A โˆช D (2,1) . It follows that P โˆช L โˆช R is an embedded two-holed torus, Z, on S with boundary components D (1,1) and D (2,2) . Since D (1,1) and D (2,2) are boundary components of P , they are essential circles on S. Hence, we may apply Proposition 3.1 to the embedded two-holed torus Z on S. It follows that there is a homeomorphism H : S โ†’ S such that ฮจ( [F ] for every homeomorphism F : S โ†’ S such that F is supported on Z and F acts trivially on the first homology of S. Let U be a circle in the interior of Z such that U is isotopic to the boundary component D (1,1) of Z. Let u denote the isotopy class of the essential separating circle U on S. Note that u is an element of V sep . Let v = ฮจ * (u). Then ฮจ(D u ) = D วซ v . Note that the element D u of T is represented by a twist map D U supported on Z. Hence, ฮจ (D u w where w is the isotopy class of the essential circle H(U) and ฮฑ is the orientation type of the homeomorphism H : S โ†’ S. Then D ฮฑ w = D วซ v . This implies that ฮฑ = วซ. Hence, the homeomorphism H has the same orientation type as the automorphism ฮจ. Note that A, B, and C are contained in the interior of Z. Hence, the twists, D a , D b , and D c , are represented by twist maps, D A , D B , and D C , supported on Z. It follows that ฮจ(D a โ€ข D โˆ’1 b ) = (D p โ€ข D q ) ฮฑ where p is the isotopy class of H(A) and q is the isotopy class of H (B). Since ฮฑ = วซ, it follows that e = p, f = q, g = p, and h = r. Hence, e = g. Remark 6.6. If the genus of S is at least 4, it can be shown that if (a, b, c) is a 2-joint based at a, then b and c are connected by a sequence b 1 = b, ..., b n = c of elements of BP โŠฃ such that (a, b i , b i+1 ) is a 0 โˆ’ joint based at a. Part of the proof of this fact is implicit in the proof of Proposition 6.5. Following this proof, it is clear that it remains only to consider the situation when the representative circles A, B and C are contained in a two-holed torus Z as in the last part of this proof. In addition to the connections provided by 0-joints and 2-joints based at a, we shall use one more type of connection. This third type of connection is provided by our next result. Proof. We may represent the isotopy classes a, b, and c in V nonsep be nonseparating circles, A, B, and C, on S such that A is disjoint from B and C; B and C are transverse; and B and C intersect in exactly 4 points. Since (a, b) is a bounding pair on S, AโˆชB is the boundary of exactly two embedded surfaces L and R on S. Note that the interiors of L and R are disjoint and L โˆฉ R = A โˆช B. Orient the circle A and B so that the boundary of the oriented surface L is equal to the difference A โˆ’ B of the cycles A and B. Note that, with the chosen orientations, L is on the left of A and B, and R is on the right of A and B. 
Note that the cycles A and B are homologous on S. Since (a, c) is a bounding pair on S, we may orient the circle C so that C is homologous on S to A. Since A โˆช B is transverse to C, and AโˆชB intersects C in exactly four points, and all four of these points lie on B, it follows that C โˆฉR is a disjoint union of two properly embedded arcs, C 1 and C 3 , in R; and C โˆฉ L is equal to a disjoint union of two properly embedded arcs, C 2 and C 4 , in L. We may label these arcs so that the oriented arc C 1 joins a point w on B to a point x on B; the oriented arc C 2 joins the point x on B to a point y on B; the oriented arc C 3 joins the point y on B to a point z on B; and the oriented arc C 4 joins the point z on B to the point w on B. Note that it follows, from the above considerations, that, as we travel along the oriented circle C, the signs of intersection of B with C alternate. Likewise, by interchanging the roles of B and C in the previous argument, it follows that the signs of inersection of B with C alternate as we travel along B. Hence, we may choose the notation C i , 1 โ‰ค i โ‰ค 4, so that the sign of intersection of B with C at w is positive. It follows that the sign of intersection of B with C at x is negative; the sign of intersection of B with C at y is positive; and the sign of intersection of B with C at z is negative. Since the signs of intersection of B with C alternate as we travel along the oriented circle B, it follows that the points of intersection occur in one of the following two orders as we travel along B, beginning at w: (i) (w, x, y, z) or (ii) (w, z, y, x). Suppose that (i) holds. Note that the points of intersection of B with C divide the oriented circle B into oriented arcs, B i , 1 โ‰ค i โ‰ค 4. We may choose the notation B i so that B 1 joints w to x; B 2 joins x to y; B 3 joints y to z; and B 4 joins z to w. It follows that a regular neighborhood on S of the graph B โˆช C is a six-holed sphere P . We may assume that A is contained in the complement of P . P is the union of six embedded annuli, A i , 1 โ‰ค i โ‰ค 6, on S. These annuli have disjoint interiors on S. There is a unique boundary component of P , D i , such that D i is one of the two boundary components of A i . The other boundary component of A i is a cycle in the graph B โˆช C. We may choose the notation A i , 1 โ‰ค i โ‰ค 6, so that the boundary component of A 1 contained in the interior of P is the cycle C 1 โˆ’ B 1 ; the boundary component of A 2 contained in the interior of P is the cycle B 2 โˆ’ C 2 ; the boundary component of A 3 contained in the interior of P is the cycle C 3 โˆ’ B 3 ; the boundary component of A 4 contained in the interior of P is the cycle B 1 + C 2 + B 3 + C 4 ; the boundary component of A 5 contained in the interior of P is the cycle โˆ’B 4 โˆ’ C 3 โˆ’ B 2 โˆ’ C 1 ; and the boundary component of A 6 contained in the interior of P is the cycle B 4 โˆ’ C 4 . Note that the oriented circle B is homologous to the cycle B 1 + B 2 + B 3 + B 4 ; and the oriented cycle C is homologous to the cycle C 1 + C 2 + C 3 + C 4 . By our assumptions, B is homologous to C. It follows that D 1 โˆ’ D 2 + D 3 โˆ’ D 4 is homologous to 0. On the other hand, D 1 + D 2 + D 3 + D 4 + D 5 + D 6 is homologous to 0. Hence, D 5 + D 6 is homologous to โˆ’2(D 1 + D 3 ). As in the proof of Proposition 6.5, the fact that D 5 + D 6 is homologous to a circle in P , obtained by tubing D 5 to D 6 along a properly embedded arc J in P , implies that D 1 + D 3 and D 5 + D 6 are both homologous to 0. 
Again, the homology class of a nonseparating oriented circle on S is primitive (i.e. not a proper multiple of any nonzero homology class). Since D 1 + D 2 + D 3 + D 4 + D 5 + D 6 is also homologous to 0, it follows that D 2 + D 4 is homologous to 0. Note that D 2 + D 4 is homologous to the cycle B โˆ’ D 6 . Since D 2 + D 4 is homologous to 0, it follows that B is homologous to D 6 . Since B is nonseparating on S, D 6 is nonseparating on S. Since D 5 + D 6 is homologous to 0, D 5 is homologous to โˆ’D 6 . Since D 6 is a nonseparating circle on S and D 5 is a circle disjoint from D 6 , it follows that D 5 โˆช D 6 is the boundary of exactly two embedded surfaces in S. Note that these two surfaces are each connected, have disjoint interiors, and intersect exactly along D 5 โˆช D 6 . Their interiors are the two connected components of the complement of D 5 โˆช D 6 in S. Note that P is contained in one of these two surfaces. Let Q be the other of these two surfaces. Since Q is connected and the boundary of Q is equal to D 5 โˆช D 6 , there exists a properly embedded arc M on Q joining a point u on D 5 to a point v on D 6 . Note, furthermore that there exists a properly embedded arc N on P joining u to v such that N is disjoint from C, transverse to B, and intersects B in exactly one point, p. Moreover, we may choose N so that p lies in the interior of B 4 . It follows that M โˆช N is a circle on S such that the geometric intersection of M โˆชN with B is equal to 1, and the geometric intersection of M โˆช N with C is equal to 0. Note that we may orient the circle M โˆช N so that the algebraic (i.e. homological) intersection of M โˆช N with B is equal to 1. Since B is homologous to C, this implies that the algebraic intersection of M โˆช N with C is equal to 1. On the other hand, since M โˆช N is disjoint from C, the algebraic intersection of M โˆช N with C is equal to 0. This is a contradiction. Hence (ii) holds. Note that the points of intersection of B with C divide the oriented circle B into oriented arcs, B i , 1 โ‰ค i โ‰ค 4. We may choose the notation B i so that B 1 joints w to z; B 2 joins z to y; B 3 joints y to x; and B 4 joins x to w. It follows that a regular neighborhood on S of the graph B โˆช C is a six-holed sphere P . We may assume that A is contained in the complement of P . P is the union of six embedded annuli, A i , 1 โ‰ค i โ‰ค 6, on S. These annuli have disjoint interiors on S. There is a unique boundary component of P , D i , such that D i is one of the two boundary components of A i . The other boundary component of A i is a cycle in the graph B โˆช C. We may choose the notation A i , 1 โ‰ค i โ‰ค 6, so that the boundary component of A 1 contained in the interior of P is the cycle B 1 + C 4 ; the boundary component of A 2 contained in the interior of P is the cycle โˆ’B 4 โˆ’ C 1 ; the boundary component of A 3 contained in the interior of P is the cycle C 2 + B 3 ; the boundary component of A 4 contained in the interior of P is the cycle โˆ’C 4 + B 2 โˆ’ C 2 + B 4 ; the boundary component of A 5 contained in the interior of P is the cycle C 1 โˆ’ B 3 + C 3 โˆ’ B 1 ; and the boundary component of A 6 contained in the interior of P is the cycle โˆ’B 2 โˆ’ C 3 . Note that the oriented circle B is homologous to the cycle B 1 + B 2 + B 3 + B 4 ; and the oriented cycle C is homologous to the cycle C 1 +C 2 +C 3 +C 4 . By our assumptions, B is homologous to C. It follows that D 4 โˆ’ D 5 is homologous to 0. Hence, the cycle D 4 is homologous to the cycle D 5 . 
This implies that D 4 + D 5 is homologous to 2D 4 . As in the proof of Proposition 6.5, the fact that D 4 + D 5 is homologous to a circle in P , obtained by tubing D 4 to D 5 along a properly embedded arc J in P , implies that D 4 and D 4 + D 5 are both homologous to 0. Again, the homology class of a nonseparating oriented circle on S is primitive (i.e. not a proper multiple of any nonzero homology class). Since D 4 and D 4 + D 5 are both homologous to 0, it follows that D 5 is homologous to 0. Since B and C are transverse circles on S with minimal intersection, it follows that each of the boundary components, D 1 , D 3 , D 2 , and D 6 are essential circles on S [FLP]. Since D 1 +D 2 +D 3 +D 4 +D 5 +D 6 , D 4 , and D 5 are all homologous to 0, D 1 + D 3 is homologous to โˆ’(D 2 + D 6 ). It follows that 2(D 1 + D 3 ) is homologous to the cycle (B 1 Suppose that D 1 and D 2 are both separating circles on S. Then D 3 is homologous to A and D 6 is homologous to โˆ’A. Suppose that the circle D 3 is not isotopic to A. Let d be the isotopy class of D 3 . Then (a, b, d) and (a, c, d) are both 0-joints based at a. Hence, by Proposition 6.3, the first coordinates of ฮจ * (a, b) and ฮจ * (a, c) are both equal to the first coordinate of ฮจ * (a, d). Hence, e = g. Hence, we may assume that D 3 is isotopic to A. Since the genus of S is not equal to 1, it follows that A โˆช D 3 is the boundary of exactly one embedded annulus L on S. Note that P is in the complement of the interior of L. Hence, the boundary of the oriented annulus L is equal to A โˆ’ D 3 . Likewise, we may assume that D 6 is isotopic to โˆ’A and conclude that there is exactly one embedded annulus R of S with boundary A โˆช D 6 and the boundary of the oriented annulus R is equal to โˆ’(A + D 6 ). It follows that P โˆช L โˆช R is an embedded two-holed torus Q on S; A โˆช B โˆช C is contained in the interior of Q; and each of the two boundary components of Q, D 2 and D 6 , are essential separating circles on S. Hence, by Proposition refprop:twoholedtori, it follows, as in the proof of Proposition 6.5, that e = g. Hence, we may assume that D 1 and D 2 are not both separating circles. Likewise, we may assume that D 1 and D 6 are not both separating circles on S; D 3 and D 2 are not both separating circles on S; and D 3 and D 6 are not both separating circles on S. Suppose that D 1 is a separating circle on S. Then D 3 is homologous to A. Again, we may assume that D 3 is isotopic to A. Hence, D 3 โˆช A is the boundary of exactly one embedded annulus L in S, and the boundary of the oriented annulus L is equal to A โˆ’ D 3 . Since D 1 is a separating circle on S, D 2 and D 6 are both nonseparating circles on S. Since D 2 + D 6 is homologous to โˆ’A, it follows that D 2 โˆช D 6 โˆช A is the boundary of exactly two embedded surfaces on S. The interiors of these two surfaces are the two connected components of the complement of D 1 โˆช D 2 โˆช A in S. One of these two surfaces contains P . Let R be the other of these two surfaces. Note that the boundary of the oriented surface R is equal to โˆ’(A + D 2 + D 6 ). Choose distinct points, p 1 and p 2 , on A; distinct points, q 1 and q 2 on D 3 ; a point r 1 on D 2 ; and a point r 2 on D 6 . Choose disjoint properly embedded arcs, P 1 and P 2 , on L such that P 1 joins p 1 to q 1 , and P 2 joins p 2 to q 2 . 
Choose disjoint properly embedded arcs, Q 1 and Q 2 , on P such that Q 1 joins q 1 to r 1 ; Q 2 joins q 2 to r 2 ; Q 1 and Q 2 are both transverse to B and C; Q 1 intersects B โˆช C exactly at x; and Q 2 intersects B โˆช C exactly at y. Choose disjoint properly embedded arcs, R 1 and R 2 , on R such that R 1 joins r 1 to p 1 and R 2 joins r 2 to p 2 . Let J i = P i โˆช Q i โˆช R i , i = 1, 2. Note that J 1 and J 2 are disjoint circles on S; each of these two circles is transverse to A, B, and C; and each of these two circles intersects each of the circles, A, B, and C, in exactly one point. It follows that each of these two circles are nonseparating circles on S. Let j i denote the isotopy class of J i on S, i = 1, 2. Note that each of the circles, J i , is transverse to D 2 . J 1 intersects D 2 in exactly one point, whereas J 2 is disjoint from D 2 . It follows that the geometric intersection of J 1 with D 2 is equal to 1, whereas the geometric intersection of J 2 with D 2 is equal to 0. It follows that j 1 and j 2 are distinct common duals to the 4-joint (a, b, c) with i(j 1 , j 2 ) = 0. Hence, by Proposition 6.2, e = g. Hence, we may assume that D 1 is a nonseparating circle on S. Likewise, we may assume that D 3 , D 2 , and D 6 are nonseparating circles on S. Since D 1 + D 3 is homologous to A, and D 1 , D 2 , and D 3 are disjoint nonseparating curves, it follows that D 1 โˆช D 2 โˆช A is the boundary of exactly two embedded surfaces in S. These two surfaces are each connected, and have disjoint interiors. The interiors of these two surfaces are the two connected components of the complement of D 1 โˆช D 2 โˆช A in S. One of these two surfaces contains P . Let L be the other of these two surfaces. Note that the boundary of the oriented surface L is equal to A โˆ’ (D 1 + D 3 ). Likewise, D 2 โˆช D 6 โˆช A is the boundary of exactly two embedded surfaces in S. These two surfaces are each connected, and have disjoint interiors. The interiors of these two surfaces are the two connected components of the complement of D 2 โˆช D 6 โˆช A in S. One of these two surfaces contains P . Let R be the other of these two surfaces. Note that the boundary of the oriented surface R is equal to โˆ’(A+D 2 +D 6 ). As in the previous case, we may construct two disjoint circles, J i , i = 1, 2, such that J i is transverse to A, B, and C; J i intersects each of A, B and C in exactly one point; J i is transverse to D 2 ; J 1 intersects D 2 in exactly one point; and J 2 is disjoint from D 2 . Again, it follows that the isotopy classes, j i , i = 1, 2 of J i , i = 1, 2 are distinct common duals of the 4-joint (a, b, c) based at a with i(j 1 , j 2 ) = 0. Hence, as in the previous case, it follows, by Proposition refprop:commonduals, that e = g. Hence, in any case, e = g. Remark 6.8. If the genus of S is at least 5, it can be shown that if (a, b, c) is a 4-joint based at a, then b and c are connected by a sequence b 1 = b, ..., b n = c of elements of BP โŠฃ such that (a, b i , b i+1 ) is a 0 โˆ’ joint based at a. We do not know whether this fact is true when the genus of S is 4. Our next result will allow us to show that the connections provided by Propositions 6.3, 6.5, and 6.7 are sufficient to prove the desired invariance of the first coordinate of ฮจ * (a, b). Proposition 6.9. Let a be an element of V nonsep . Let b and c be elements of BP a . Then there exists a sequence b i , 1 โ‰ค i โ‰ค n of elements of BP a such that (i Proof. 
Represent a, b, and c by circles A, B, and C on S such that A is disjoint from B and C, B and C are transverse to each other, and the number of intersection points of B and C is minimal (i.e. #(B ∩ C) = i(b, c)). Let k be the number of points of intersection of B and C (i.e. k = i(b, c)). The proof is by induction on k. The assertion of Proposition 6.9 clearly holds if k = 0. In this case, let n = 2, b 1 = b, and b 2 = c. Hence, we shall assume that k > 0. Orient the circle A. Since (a, b) is an ordered bounding pair, A ∪ B is the boundary of two embedded surfaces on S, L B and R B , with L B ∩ R B = A ∪ B. Moreover, we may orient B so that the homology classes of the oriented circles A and B are equal. Likewise, we may orient C so that the homology classes of the oriented circles A and C are equal. Note that the intersection of C with R B is a disjoint union of properly embedded arcs in R B , each joining a point of intersection of B and C to another point of intersection of B with C. Each of these properly embedded arcs on R B is either separating or nonseparating on R B . Let J be one of the components of C ∩ R B . Suppose that J separates R B . Note that exactly one of the two components of the complement of J in R B contains the boundary component A of R B . Denote the closure of this component by P . Note that the boundary of P consists of two circles on S, A and a circle D. The points x and y on D divide D into two arcs joining x to y, D B and D C , where D B lies on B and D C lies on C. Note that D C = J. Since B and C are transverse essential circles with minimal intersection, D is an essential circle on S. Let B P be a circle in the interior of P such that B P and D bound an embedded annulus in P . Since B P is isotopic on S to the essential circle D on S, B P is also essential on S. Let Q be the closure of the other component of the complement of J in R B . Note that P ∩ Q = J. The boundary of Q consists of a single circle E. The points x and y on E divide E into two arcs joining x to y, E B and E C , where E B lies on B and E C lies on C. Note that E C = J. Let B Q be a circle in the interior of Q such that B Q and E bound an annulus in Q. As for D and B P , E and B Q are both essential circles on S. Since B Q and E bound an annulus in Q, we may orient B Q and E so that the oriented circles B Q and E are homologous in Q. Since E bounds Q, E is nullhomologous on S. Hence, B Q is nullhomologous on S. Note that B ∪ B P ∪ B Q is the boundary of a pair of pants R in S. The arc J is properly embedded in R, joining the boundary component B of R to itself, and separating the other two boundary components of R, B P and B Q , from each other. In familiar terminology, B P ∪ B Q is the result of tubing B to itself along the arc J. Note that C is transverse to the submanifold B P ∪ B Q and the number of intersection points of C with B P ∪ B Q is equal to k − 2. This implies that the number of intersection points with C of each of the essential circles, B P and B Q , is less than k. Since the boundary of P consists of A and D, we may orient D so that the oriented circles A and D are homologous. Since B P is isotopic to D, we may orient B P so that the oriented circles B P and D are homologous. Hence, the oriented circles B P and A are homologous. On the other hand, B P and A are disjoint. Suppose that B P is not isotopic to A on S. Let b ′ be the isotopy class of B P . By the above considerations, {a, b ′ } is a bounding pair, i(b, b ′ ) = 0 and i(b ′ , c) < k.
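For reference, the facts just established for the circles B P and B Q obtained from the tubing construction can be collected in one display. This is only a restatement of the relations above, with [X] denoting the class of the oriented circle X in H 1 (S; Z):

```latex
% Restatement of the properties of B_P and B_Q obtained by tubing B to itself along J;
% [X] denotes the class of the oriented circle X in H_1(S;\mathbb{Z}).
[B_Q] = [E] = 0, \qquad [B_P] = [D] = [A] = [B], \qquad
\#\bigl(C \cap (B_P \cup B_Q)\bigr) = k - 2 .
```

Together with the disjointness of B P from A and B noted above, these relations yield exactly the properties i(b, b ′ ) = 0 and i(b ′ , c) < k used in the induction.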
Hence, the result follows by applying the inductive hypothesis to the elements b โ€ฒ and c of BP a . Hence, we may assume that B P is isotopic to A on S. Since B P and A are disjoint essential circles, it follows that B P and A bound an annulus on S. Note that the disjoint essential circles, B P and A bound exactly two embedded surfaces on S. One of these two surfaces is P . Since B P is contained in the interior of R B , the other of these two surfaces contains the surface of positive genus, L B . It follows that this other surface is not an annulus. Hence, P is an annulus. Consider the arc D B of D joining x to y along B. Suppose that C intersects D B at a point u in the interior of D B . Note that there is exactly one component J โ€ฒ of C โˆฉ R B with u as one of its two endpoints. Note that J โ€ฒ lies in the annulus P and both endpoints of J โ€ฒ lie in the interior of the arc D B . These two endpoints of J โ€ฒ are therefore joined by an arc F embedded in the interior of D B . It follows that J โ€ฒ separates the annulus P into two components, one of which is a disk and the other of which is an annulus. Since B and C are transverse essential circles with minimal intersection, the disc component cannot contain the arc F on its boundary. This implies that the disc component contains the arc J on its boundary. It follows that the unique component P โ€ฒ of the complement of J โ€ฒ in R B which contains A is an annulus contained in P . Moreover, the boundary component D โ€ฒ of P containing J โ€ฒ consists of J โ€ฒ and an arc D โ€ฒ B of B joining u to a point v in the interior of D B . Moreover, D โ€ฒ B is contained in the interior of D B . Indeed, D โ€ฒ B is the arc F . It follows that we may assume, by choosing an "innermost" arc J, that the interior of D B contains no points of C. Let K be the unique component of C โˆฉ L B such that y is the initial endpoint of K. Since the interior of D B contains no points of C, the terminal endpoint of K is not in the interior of D B . Suppose that this endpoint is x. Then C = J โˆช K and, hence, B โˆฉ C = {x, y}. Hence, since B and C intersect minimally, i(b, c) = 2. Clearly, in this situation, the result holds. Hence, we may assume that the terminal endpoint of K is a point z in the interior of E B . This implies that there is an arc K โ€ฒ parallel to K joining a point on B P to a point on B Q such that K โ€ฒ is contained in the complement of C, K โ€ฒ is transverse to B, and K โ€ฒ intersects B in exactly two points, u and v, where u is a point in the interior of D B near the endpoint y of D B and v is a point in the interior of E B near z. Let B โ€ฒ be a circle obtained by tubing B P to B Q along K โ€ฒ . The above considerations imply that (i) B โ€ฒ is disjoint from A, (ii) B โ€ฒ is homologous to B, (iii) B โ€ฒ and B are transverse and have minimal intersection, (iv) B โ€ฒ and B intersect in exactly 4 points, (v) B โ€ฒ and C are transverse and (vi) the number of points of intersection of B โ€ฒ and C is equal to k โˆ’ 2. Let b โ€ฒ be the isotopy class of B โ€ฒ . By the above considerations, it follows that b โ€ฒ is not equal to a, (a, b โ€ฒ ) is a bounding pair, i(b โ€ฒ , b) = 4 and i(b โ€ฒ , c) โ‰ค k โˆ’ 2. The result follows, in this situation, by applying the inductive hypothesis to b โ€ฒ and c. It remains to consider the case where there is at least one component J of C โˆฉ R B such that the complement of J in R B is connected. This case is handled in a manner similar to the previous case. 
Again, we tube B to itself along J, producing a disjoint union B 1 ∪ B 2 of disjoint oriented circles B 1 and B 2 whose homology classes add up to the homology class of B. In this situation, we assume that the notations B 1 and B 2 are chosen so that B 1 corresponds to the oriented arc N 1 of the oriented circle B which joins the initial endpoint x of the oriented arc J to the terminal endpoint y of J and B 2 corresponds to the oriented arc N 2 of B joining y to x. Again, we let K be the unique component of C ∩ L B having the terminal endpoint y of J as its initial endpoint. Again, if the terminal endpoint of K is equal to x, then C = J ∪ K, i(b, c) = 2, and the result clearly holds. Hence, we may assume that the terminal endpoint of K is a point z in the interior of N i for some integer i in {1, 2}. Suppose that z is in the interior of N 1 . Then we may tube B 2 to B 1 along an arc K ′ parallel to K, crossing N 2 at a point u in the interior of N 2 near y and N 1 at a point v in the interior of N 1 near z. This results in a curve B ′ with which the proof proceeds by induction as before. Suppose, on the other hand, that z is in the interior of N 2 . In this case, we may tube B 1 to B 2 along an arc K ′ parallel to K, crossing N 1 at a point u in the interior of N 1 near y and N 2 at a point v in the interior of N 2 near z. This results in a curve B ′ with which the proof proceeds by induction as before. We are now ready to establish the desired invariance of the first coordinate of Ψ * (a, b). Proposition 6.10. Let a be an element of V nonsep . Suppose that b and c are distinct elements of BP a . Let (e, f ) = Ψ * (a, b) and (g, h) = Ψ * (a, c). Then e = g. Proof. By Proposition 6.9, there exists a sequence b i , 1 ≤ i ≤ n, of elements of BP a such that b 1 = b, b n = c, and i(b i , b i+1 ) ≤ 4 for 1 ≤ i < n. Suppose that i is an integer with 1 ≤ i < n. Let (e i , f i ) = Ψ * (a, b i ). Note that (e 1 , f 1 ) = (e, f ) and (e n , f n ) = (g, h). By the above observations, (a, b i , b i+1 ) is a k-joint based at a for some even integer k with 0 ≤ k ≤ 4. Hence, k is an element of {0, 2, 4}. It follows, from Propositions 6.3, 6.5, and 6.7, that e i = e i+1 . Thus, by induction, e 1 = e n . That is to say, e = g. The preceding results now provide us with the desired induced map Ψ * : V nonsep → V nonsep . Proposition 6.11. Let Ψ : T → T be an automorphism of T . Let ǫ be the orientation type of Ψ. Let V nonsep denote the set of isotopy classes of nonseparating circles on S. Then: • There exists a unique function Ψ * : V nonsep → V nonsep such that, for each bounding pair (a, b) on S, Ψ * (a, b) = (Ψ * (a), Ψ * (b)). • The function Ψ * : V nonsep → V nonsep is a bijection. Proof. By Proposition 6.10, there exists a well-defined function Ψ * : V nonsep → V nonsep such that, for each element a of V nonsep and each element b of BP a , Ψ * (a) is the first coordinate of Ψ * (a, b). This proves the existence of a function Ψ * as described in the first clause of Proposition 6.11. Suppose that Ψ # : V nonsep → V nonsep is a function such that, for each bounding pair (a, b) on S, Ψ * (a, b) = (Ψ # (a), Ψ # (b)). Let a be an element of V nonsep . Let b be an element of BP a . Then (a, b) is a bounding pair on S. Hence, (Ψ # (a), Ψ # (b)) = Ψ * (a, b) = (Ψ * (a), Ψ * (b)). Writing (c, d) = (Ψ # (a), Ψ # (b)) and (e, f ) = (Ψ * (a), Ψ * (b)), this implies that (c, d) = (e, f ). Hence, c = e. That is to say, Ψ * (a) = Ψ # (a). Hence, the two functions Ψ * : V nonsep → V nonsep and Ψ # : V nonsep → V nonsep are equal. This proves the uniqueness of a function Ψ * as described in the first clause of Proposition 6.11. Let Θ : T → T be the inverse of the automorphism Ψ : T → T .
Clearly, by the definition of the orientation type of an automorphism in Proposition 4.3 and the definition of the induced map on V nonsep given in the first clause of Proposition 6.11, the induced maps Ψ * : V nonsep → V nonsep and Θ * : V nonsep → V nonsep are inverse maps. This proves the second clause of Proposition 6.11. Induced automorphism of the complex of curves In this section, we extend the automorphism Ψ * : C sep → C sep of Proposition 5.2 to the entire complex of circles C of S. Consider the unique function Ψ * : V → V whose restrictions to V sep and V nonsep are the functions Ψ * : V sep → V sep and Ψ * : V nonsep → V nonsep of Propositions 5.1 and 6.11. We shall show that Ψ * : V → V extends to a simplicial map. This will use the following consequence of Proposition 3.1. Proposition 7.1. Let a be an element of V nonsep and b be an element of V sep . Let c = Ψ * (a) and d = Ψ * (b). Suppose that a is represented by a circle A on S and b is represented by a circle B on S such that B is the boundary of an embedded torus P on S and A is contained in the interior of P . Then c is represented by a circle C on S and d is represented by a circle D on S such that D is the boundary of an embedded torus Q on S and C is contained in the interior of Q. We shall now show that Ψ * : V → V extends to a simplicial map Ψ * : C → C. Theorem 7.2. Let Ψ : T → T be an automorphism of T . Let Ψ * : V → V be the unique function whose restrictions to V sep and V nonsep are the functions Ψ * : V sep → V sep and Ψ * : V nonsep → V nonsep of Propositions 5.1 and 6.11. Then Ψ * : V → V extends to a simplicial automorphism Ψ * : C → C of C. Proof. Let a and b be distinct elements of V such that i(a, b) = 0. Let c = Ψ * (a) and d = Ψ * (b). It suffices to show that i(c, d) = 0. This will follow by considering several cases. Suppose that a and b are both elements of V sep . Then, by Proposition 5.2, i(c, d) = 0. Suppose, on the other hand, that a and b are both elements of V nonsep . If (a, b) is a bounding pair, then by Proposition 6.11, (c, d) is a bounding pair. In particular, i(c, d) = 0. We may assume, therefore, that (a, b) is not a bounding pair. Since i(a, b) = 0, this assumption implies that a is represented by a circle A on S and b is represented by a circle B on S such that A is disjoint from B and the complement of A ∪ B in S is connected. Hence, there exist circles E and F on S such that (i) A, B, E, and F are all disjoint, (ii) E and F bound embedded tori P E and P F on S, and (iii) A is contained in the interior of P E and B is contained in the interior of P F . Since the genus of S is at least 3, E is not isotopic to F . Let e and f denote the isotopy classes of E and F . Since E and F are separating circles, e and f are elements of V sep . Since E and F are disjoint, i(e, f ) = 0. Since E and F are not isotopic, e is not equal to f . Let g = Ψ * (e) and h = Ψ * (f ). By Proposition 5.1, since e and f are distinct elements of V sep , g and h are distinct elements of V sep . By Proposition 5.2, since i(e, f ) = 0, we have i(g, h) = 0. By Proposition 7.1, c and g are represented by circles C and G on S such that G bounds an embedded torus Q G on S and C is contained in the interior of Q G . Likewise, d and h are represented by circles D and H on S such that H bounds an embedded torus Q H on S and D is contained in the interior of Q H . Since i(g, h) = 0, we may assume that G and H are disjoint.
Since g is not equal to h, G is not isotopic to H. Hence, the interiors of the embedded tori, Q G and Q H , bounded by G and H are disjoint. This implies that C and D are disjoint. Hence, i(c, d) = 0. Thus, if a and b are both elements of V nonsep , then i(c, d) = 0. Suppose, finally, that one of a and b is an element of V nonsep and one is an element of V sep . We may assume that a is an element of V nonsep and b is an element of V sep . Since i(a, b) = 0, a is represented by a circle A on S and b is represented by a circle B on S such that A and B are disjoint. Since b is in V sep , B bounds an embedded surface P B in S such that A is contained in the interior of P B . P B is the closure of the unique component of the complement of B in S which contains A. Let k be the genus of P B . Since B is an essential circle on S, k > 0. If k = 1, then by Proposition 7.1, i(c, d) = 0. Hence, we may assume that k > 1. It follows that there exists a circle E on S such that (i) A, B, and E are disjoint, (ii) E bounds an embedded torus P E on S such that P E is contained in the interior of P B and A is contained in the interior of P E . Note that E is not isotopic to B. Let e denote the isotopy class of E on S. Let f = ฮจ * (e). Since B and E are nonisotopic disjoint essential separating circles on S, it follows, by Propositions 5.1 and 5.2, that d and f are distinct elements of V sep such that i(d, f ) = 0. Hence, d and f are represented by nonisotopic disjoint essential separating circles, D and F , on S. Hence, by Proposition 7.1, it follows that c is represented by a circle C on S such that C is contained in the interior of an embedded torus P F on S with boundary F . Since D and F are disjoint nonisotopic essential separating circles on S, it follows that D is contained in the complement of P F . Since C is contained in the interior of P F , it follows that C and D are disjoint. Hence, i(c, d) = 0.
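In each of the cases above, the property verified is exactly the disjointness condition needed for simpliciality; it can be restated in one line (notation as in Theorem 7.2):

```latex
% The disjointness-preservation property established case by case in the proof of Theorem 7.2.
a, b \in V, \; a \neq b, \; i(a, b) = 0 \;\Longrightarrow\; i\bigl(\Psi_*(a), \Psi_*(b)\bigr) = 0 ,
```

so that Ψ * carries each simplex of C to a simplex of C and hence extends to a simplicial automorphism Ψ * : C → C, as asserted in Theorem 7.2.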
The role of migration in mental healthcare: treatment satisfaction and utilization Migration rates increase globally and require an adaption of national mental health services to the needs of persons with migration background. Therefore, we aimed to identify differences between persons with and without migratory background regarding (1) treatment satisfaction, (2) needed and received mental healthcare and (3) utilization of mental healthcare. In the context of a cross-sectional multicenter study, inpatients and day hospital patients of psychiatric settings in Southern Germany with severe affective and non-affective psychoses were included. Patientsโ€™ satisfaction with and their use of mental healthcare services were assessed by VSSS-54 and CSSRI-EU; patientsโ€™ needs were measured via CAN-EU. In total, 387 participants (migratory background: nโ€‰=โ€‰72; 19%) provided sufficient responses for analyses. Migrant patients were more satisfied with the overall treatment in the past year compared to non-migrant patients. No differences between both groups were identified in met and unmet treatment needs and use of supply services (psychiatric, psychotherapeutic, and psychosocial treatment). Despite a comparable degree of met and unmet treatment needs and mental health service use among migrants and non-migrants, patients with migration background showed higher overall treatment satisfaction compared to non-migrants. The role of sociocultural and migrant-related factors may explain our findings. Supplementary Information The online version contains supplementary material available at 10.1186/s12888-022-03722-8. Background The number of persons with migration background 1 is increasing due to socio-political, economic, demographic and environmental factors [2,3]. In 2019, the population with migration background in Germany comprised 26% of the population in total [1]. Even though more than 99% of migrants living in Germany exhibit a health insurance [4], which covers on average of 84% of healthcare costs [5], there is still an underrepresentation of people with migratory background in the German healthcare system [6][7][8][9]. Despite the higher psychopathological burden of people with migration background [4], recent findings indicate an underrepresentation of 1st generation migrants in the German in-and outpatient mental healthcare system [7,8], including the psychosocial supply system [9]. Moreover, migration factor was shown to be a negative predictor for the treatment outcome of mental disorders [10]. Turkish descents, representing the major group of immigrants in Germany, were found to exhibit inferior effective treatment results after receiving psychosomatic rehabilitative treatment compared to patients without migration background [11]. Furthermore, a number of studies have found that patients, belonging to an ethnic minority, are less likely to engender an empathic response from their clinicians, to participate in shared decision making and to receive information about disorder and treatment compared to ethnic majorities [12]. One could assume that these research findings are associated with lower treatment satisfaction. Likewise, a recent review on patient satisfaction with psychiatric inpatient services indeed indicates a positive relationship of treatment satisfaction with treatment outcome, quality of the therapeutic relationship as well as information sharing [13]. 
However, results on treatment satisfaction among migrants remain widely inconsistent and are still sparse in the field of mental healthcare [13][14][15]. Numerous definitions of patient satisfaction exist [16]. Yet, the majority of definitions capture the following aspects: patient satisfaction as a correspondence between patients' needs or expectations and their actual experiences with healthcare services [14,16,17]. In the evaluation of service quality, patient satisfaction plays a key role, as it represents the unique perspective of patients and the renunciation of a clinicians' centered view. Shipley and colleagues (2000) [18] demonstrated that patient satisfaction was a more accurate indicator of quality of care than clinicians' evaluation of the treatment. Moreover, patient satisfaction was found to improve adherence to treatment, which is crucial in the context of relapse and recurrence prevention in severe mental disorders, like affective and nonaffective psychoses [19]. Patients' needs, expectations, treatment experiences and thus satisfaction are indicated to be influenced by a conglomerate of cognitive-affective (e.g., knowledge about care, prior experiences, values, cultural norms) as well as sociodemographic factors (e.g., age, socio-economic status, geographic characteristics) [13,14,20,21]. On the one hand, the complexity of the concept treatment satisfaction requires interpreting results against the background of possible moderators. On the other hand, it highlights the relevance of examining treatment satisfaction in an ethnically and culturally diverse society with a high percentage of migrants, such as Germany [16]. Investigating migration-related disparities is generally a difficult endeavor not least due to the heterogeneity that is inherent in migration (e.g., country of origin, reason for migration) [22]. However, the indicated disadvantages of people with migration background in the mental health care system [6][7][8]10] as well as the lack of representative research [8] require comparative studies to raise understanding of possible underlying factors. Thus, our aim was to investigate the quantity and quality of treatment among patients with and without migration background in the framework of a multicenter study by examining the following aspects: (1) treatment satisfaction as indicator of quality of care [18], (2) the degree of accordance between needed and actually received mental healthcare as aspect of patient satisfaction and thus quality of care [13,18], and (3) utilization of mental healthcare services as indicator of treatment quantity. Subjects and recruitment The cross-sectional study was performed from 03/2019 to 09/2019 in the context of a larger project (Implementation status of the German guideline for psychosocial interventions for patients with severe mental illness (IMPPETUS)) [23]. Inpatients and day hospital patients of psychiatric settings diagnosed with severe affective and non-affective psychoses were included. As we conducted a multi-centric study, data was collected in 10 departments for psychiatry and psychotherapy in Bavaria, Germany. These departments are characterized by providing both psychiatric (i.e. somatic treatment forms, e.g., pharmacotherapy) and psychotherapeutic forms of treatment (e.g., cognitive and behavioral therapy). 
The selected centers represent metropolitan (Munich, Augsburg), middle-urban (Kempten, Memmingen) as well as rural (Donauwรถrth, Gรผnzburg, Kaufbeuren, Taufkirchen) catchment areas (the list of the participating hospitals appears in the supplement). In the present study, the following inclusion criteria were applied: (1) 18 to 65 years old, (2) ability to give consent, (3) sufficient German language skills in order to understand the questions exclusively asked in German, (4) exhibiting a severe mental illness. In order to identify patients with severe mental illness, the subsequent criteria were used [24]: (a) Patients with schizophrenia (ICD-10: F2), bipolar disorders (F30, F31) or depression (F32, F33), (b) duration of psychiatric illness โ‰ฅ2 years, (c) considerable consequences on daily life activities and social functioning, which was assessed through the Global Assessment of Functioning, GAF, [25] (score โ‰ค 60) and Health of the Nation Outcome Scales, HoNOS, [26] (score of โ‰ฅ2 on one of the subscale items for symptomatic problems and a score of โ‰ฅ2 on each, or a score of โ‰ฅ3 on at least one of the four subscale items for social problems). The recruitment and study flow chart is displayed in Fig. 1. Trained study personnel invited each eligible patient (during their attendance at the clinic) to participate in the study in coordination with the clinical teams. Those who agreed to participate were screened by trained study staff as soon as possible after admission. To identify patients with severe mental disorder, the Global Assessment of Functioning (GAF) [25] and the Health of the Nation Outcome Scales (HoNOS) [26] were executed. The diagnosis was set by the treating board-certified psychiatrist in the beginning of the inpatient or day hospital treatment. The duration of the illness (criterion: โ‰ฅ 2 years) was taken from the medical record or from the information provided by the treating physician. Patients who met the inclusion criteria were interviewed by trained study personnel shortly (ca. 2 weeks) before their discharge. The research team was informed by the treating physician about the (approximate) date of discharge to interview participants in case of a premature, prolonged, and planned discharge. There were no restrictions regarding the time period between inclusion in the study (shortly after admission) and conducting the interview (shortly before discharge). Measures After recruitment of patients who met the inclusion criteria, the following constructs were measured shortly (ca. 2 weeks) before discharge from the clinic. Sociodemographic data To assess sociodemographic aspects (including migration background) we used the German adaptation of the Client Sociodemographic and Service Receipt Inventory (CSSRI-EU) [27]. Migration background was measured through the following item: "Do you have a migration background?". In case of agreement, participants were asked to specify their migration background: "Yes, I am a migrant myself. " (1st generation migrant) vs. "Yes, at least one of my parents is a migrant. " (2nd generation migrant). As detailed below, we focused on the analysis of migrants (1st and 2nd generation) vs. non-migrants. The rationale for this classification was both content-based and statistical. The reason to group 1st and 2nd generation migrants in one category is that they both exhibit direct or -in case of 2nd generation migrants -indirect migration experience. 
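As an illustration of how the severe-mental-illness screening rule described above (GAF score ≤ 60 plus the quoted HoNOS subscale thresholds) could be operationalized, the following minimal sketch checks those thresholds for a single patient record. The function and field names are hypothetical and chosen for illustration only; they are not part of the study protocol.

```python
# Minimal sketch of the GAF/HoNOS part of the severe-mental-illness criteria quoted above.
# Field names (gaf, honos_symptomatic, honos_social) are hypothetical placeholders.

def meets_smi_criteria(gaf: int,
                       honos_symptomatic: list[int],
                       honos_social: list[int]) -> bool:
    """Return True if the GAF/HoNOS portion of the inclusion criteria is satisfied."""
    if gaf > 60:
        return False
    # At least one symptomatic-problem item rated >= 2 ...
    symptomatic_ok = any(item >= 2 for item in honos_symptomatic)
    # ... and either every one of the four social-problem items >= 2, or at least one >= 3.
    social_ok = all(item >= 2 for item in honos_social) or any(item >= 3 for item in honos_social)
    return symptomatic_ok and social_ok


# Example: GAF 55, symptomatic items [3, 1, 0], social items [2, 2, 3, 1] -> True
print(meets_smi_criteria(55, [3, 1, 0], [2, 2, 3, 1]))
```

Diagnosis group and illness duration (≥ 2 years) would still have to be checked separately, as described in the inclusion criteria above.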
Furthermore, against the backdrop of statistical power, the undertaken categorization reduced the chance to create a type II error -alternative distributions (undertaken for sensitivity analyses) would have led to a greater inequality between the compared groups as well as to smaller subgroup sizes [28]. Moreover, we conducted two exploratory sensitivity analyses: 1st generation vs. 2nd generation vs. nonmigrants; non-migrants and 2nd generation migrants vs. 1st generation migrants. Service satisfaction and utilization Service satisfaction In order to assess patients' satisfaction with mental healthcare services in the previous year, we conducted the German adaptation of the Verona Service Satisfaction Scale (VSSS-54) [29]. Psychometric properties of the German adaptation are satisfying and comparable with international studies [29,30]. Conceptually, the 54 items of the VSSS-54 represent seven dimensions: Overall Satisfaction (three items: satisfaction with the amount of help received, the kind of treatment services, the overall treatment services), Professionals' Skills and Behavior (24 items: satisfaction with professionals' behavior, e.g. interpersonal skills), Information (three items: satisfaction with information on disorders, therapies and services), Access (two items: satisfaction with service location and costs), Efficacy (eight items: satisfaction with overall and specific aspects of efficacy of service, e.g. social skills), Relatives Involvement (six items: satisfaction with help given to relatives/ persons of trust) and Types of Intervention (17 items: satisfaction with and use of e.g. medical prescription, psychotherapy). The items of the VSSS-54 dimensions were rated on a 5-point Likert scale (level of satisfaction: 1 = terrible -5 = excellent). Further methodological information is presented in Table 2 and Supplementary Methods. Service utilization The VSSS-54 domain Types of Intervention was analyzed on a (binary) item-based level to assess the utilization of specific mental healthcare services within the past 12 months before the survey: medical prescription and psychotherapy (individual and family therapy). Moreover, the receipt of outpatient psychiatric and psychotherapeutic treatment 3 months before admission to the hospital (binary items) was assessed through the German adaptation of the CSSRI-EU [27]. The use of psychosocial interventions was surveyed via the questionnaire "Attitudes and knowledge regarding psychosocial therapies" developed by the authors and available on request. The utilization of the presented psychosocial therapy forms was analyzed through binary items on (e.g., question: "Have you ever received supported employment?"). The psychosocial supply system in general and psychosocial interventions in particular focus on improving the individual's integration and participation in society through individual psychosocial interventions (e.g., occupational therapy, exercise therapy), system-level interventions (e.g., residential care interventions), and cross-cutting interventions (e.g., peer-led interventions) [31]. We analyzed whether the participants have ever received specific forms of psychosocial interventions (system-level interventions, single psychosocial interventions and cross-cutting issues). See Supplementary Methods for further methodological information. 
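The VSSS-54 dimension scores discussed above are aggregates of 5-point Likert items (1 = terrible to 5 = excellent). The sketch below shows one simple way such an aggregation could be computed; the dimension-to-item mapping in the example is invented for illustration and is not the official VSSS-54 scoring key.

```python
# Illustrative aggregation of VSSS-54-style Likert items (1 = terrible ... 5 = excellent)
# into dimension mean scores. The dimension -> responses mapping below is hypothetical.

def dimension_scores(responses):
    """Return the mean score per dimension, skipping unanswered items (None)."""
    scores = {}
    for dimension, items in responses.items():
        answered = [r for r in items if r is not None]
        scores[dimension] = round(sum(answered) / len(answered), 2) if answered else None
    return scores


patient = {
    "Overall Satisfaction": [4, 5, 4],   # 3 items in this dimension
    "Information": [3, None, 4],         # 3 items; one left unanswered
    "Access": [4, 4],                    # 2 items
}
print(dimension_scores(patient))
# {'Overall Satisfaction': 4.33, 'Information': 3.5, 'Access': 4.0}
```

Binary utilization items (e.g., "Have you ever received supported employment?") can be summarized analogously as simple proportions of affirmative responses.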
Patient needs The German adaptation of the Camberwell Assessment of Need-EU (CAN-EU) [32] was applied to measure the extent of accordance between needed and actually received mental healthcare. The interviewer-administered instrument consists of the following five categories of need: Basic, Functioning, Health, Social and Services. Information on the individual domains is presented in Table 4 and Supplementary Methods. The participants were asked whether there was a need regarding the individual domains in the past 4 weeks. In case of an absence of need/problem, the interviewer proceeded with the next domain. In case of a need, the participant was asked whether adequate care was received. If the subject agreed, it was recorded as a met need; in case of disagreement (no or inadequate care received), it was registered as an unmet need. For our study, we computed two summary scores: the total number of met needs (one or more met needs but no unmet needs on the domains within the category) as well as the total number of unmet needs (at least one unmet need on the domains belonging to the category) [33]. Concerning internal consistency, test-retest reliability and inter-rater reliability, the German version of the CAN-EU is satisfactory and comparable with other European versions [32,34]. Statistical analysis All analyses were carried out in IBM SPSS for Windows (version 26) with a significance level of α = 0.05. Descriptive statistics are displayed with frequency and percentage distributions for binary data. Means and standard deviations are presented in case of continuous data and additionally medians for categorical data. Intergroup differences were assessed using Chi 2 tests in case of binary data and Mann-Whitney-U or Kruskal-Wallis tests (Dunn-Bonferroni tests for subgroup analyses in case of significant intergroup differences) for categorical data (e.g., in the case of patient satisfaction, assessed by 5-point Likert scales). For continuous data we used independent sample t-tests or one-way ANOVAs (Bonferroni tests for subgroup analyses in case of significant intergroup differences). As primary intergroup analyses, we compared participants with vs. without migratory background (1st and 2nd generation migrants). As exploratory secondary analyses we conducted two sensitivity analyses: First, we computed a 3-group-comparison to identify differences between 1st generation migrants, 2nd generation migrants and subjects without migratory background. Second, we analyzed differences between native Germans (subjects without migratory background and 2nd generation migrants) and 1st generation migrants. Participants' characteristics In total, 398 patients participated in the study. Retrospectively, n = 8 patients were excluded from the analyses (of which n = 7 did not provide data on the Global Assessment of Functioning, and n = 1 did not fulfill the age inclusion criterion). Moreover, only participants who provided responses to their migration status were included in the analyses (N = 387, of which n = 72 exhibited a migration background). Demographic information of the subjects is described in Table 1. (Table 1: Descriptive statistics and mean response comparisons between patients with and without migratory background. a Includes first- or second-generation migrants, as for all following tables in the manuscript. b The diagnosis assignment was based on the ICD-10.)
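A minimal sketch of the kind of two-group comparisons described in the statistical analysis above (chi-square for binary outcomes, Mann-Whitney U for Likert-type ratings, t-test for continuous measures) is shown below using SciPy rather than SPSS. The group sizes follow the study (72 migrants vs. 315 non-migrants), but all data values and counts are invented placeholders.

```python
# Sketch of the intergroup tests described above (alpha = 0.05), with invented data.
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu, ttest_ind

rng = np.random.default_rng(0)

# Binary outcome (e.g., use of a service): chi-square test on a 2x2 table (invented counts).
table = np.array([[40, 32],     # migrants: yes / no (sums to 72)
                  [180, 135]])  # non-migrants: yes / no (sums to 315)
chi2, p_binary, dof, _ = chi2_contingency(table)

# Ordinal outcome (e.g., 5-point satisfaction rating): Mann-Whitney U test.
migrants = rng.integers(1, 6, size=72)
non_migrants = rng.integers(1, 6, size=315)
u_stat, p_ordinal = mannwhitneyu(migrants, non_migrants, alternative="two-sided")

# Continuous outcome (e.g., GAF score): independent-samples t-test.
t_stat, p_continuous = ttest_ind(rng.normal(52, 9, 72), rng.normal(55, 9, 315))

print(f"chi2 p={p_binary:.3f}, MWU p={p_ordinal:.3f}, t-test p={p_continuous:.3f}")
```

The sensitivity analyses mentioned above would repeat the same tests with the alternative group definitions (three groups, or native Germans vs. 1st generation migrants).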
Results indicate that subjects with migratory background in our sample more frequently exhibited schizophrenia (ICD-10: F2x), p = 0.031, and lived more often in medium populated areas (20′001-500′000 inhabitants), p = 0.007, compared to non-migrant subjects. Moreover, participants without migratory background were more frequently located in lower populated areas (≤ 20′000 inhabitants), p = 0.001, showed a higher functioning level, p = 0.008, and a lower severity of mental disorder, p = 0.037, than migrant participants. No intergroup differences were found concerning age, gender, family status and salary (net). For all details and complete test statistics see Table 1 and Supplementary Tables 1 and 5 (secondary analyses). Service satisfaction and utilization Concerning satisfaction with mental healthcare services in the previous year (VSSS-EU), Mann-Whitney-U tests indicate a higher Overall Satisfaction with mental health care services (p = 0.006) as well as a higher satisfaction with Relatives Involvement (p = 0.001) among patients with migratory background compared to non-migrants (Table 6). Concerning the use of mental healthcare services, no differences between migrants and non-migrants were found in the examined areas of treatment: medical prescription and psychotherapy in the previous year, outpatient psychological-psychotherapeutic and medical-psychiatric treatments 3 months before admission to the hospital, and psychosocial interventions. For complete test statistics see Table 3 and Supplementary Tables 3 and 7 (secondary analyses). (Table 3: Service utilization: average confirmation rates of treatment use and response comparisons. a Participants were asked whether they had received the intervention in the last 12 months (VSSS-EU); the related time period of 12 months refers to the inpatient setting (at the time of the survey) and previous settings (out- or inpatient settings). b Participants were asked whether they had received the intervention 3 months before admission to the clinic (CSSRI-EU). c Participants were asked whether they had ever received the psychosocial intervention; the displayed score corresponds to the proportion of psychosocial interventions used out of the total number of psychosocial interventions presented, shown as a decimal number.) Patient needs Across all domains of need of the CAN-EU, we identified a higher percentage of unmet compared to met needs, see Table 4. Chi 2 tests of independence showed no intergroup differences in the met and unmet needs of the CAN-EU dimensions between patients with vs. without migration background (Table 4). Moreover, secondary analyses on patients' needs indicate no differences between groups, see Supplementary Tables 4 and 8. Discussion The most remarkable result to emerge from the data was a higher overall satisfaction with the received mental health care treatment in the past 12 months among patients with migration background compared to those without. Simultaneously, no differences were shown in the utilization of treatments as well as in the degree of accordance between needed and actually received mental healthcare between patients with and without migratory background. Our results on patient satisfaction are in agreement with a Canadian study, which provided evidence for a higher satisfaction with mental health care among 1st generation migrants compared to native Canadians [35]. In contrast to the present results, higher dissatisfaction among patients with migratory background was found by Parkman and colleagues (1997) [36] in a mental healthcare context as well as by Borde and colleagues (2002) [37] in a somatic-gynecological context. Generally speaking, so far only a limited number of comparative studies on patient satisfaction between patients with and without migratory background exist, and results are inconsistent [13][14][15]. Against the background of inconsistent data and the weight of evidence that speaks for strong and persistent mental healthcare disparities due to ethnicity and migration (e.g., access to or quality of care) [38], how can the higher overall satisfaction rates among migrant patients be explained? To begin with, it is crucial to examine potential confounding effects of socio-demographic characteristics on the relationship between migration and treatment satisfaction [14,16]. In our sample, participants with and without migratory background exhibited significant differences in their health status (functioning level and severity of mental disorder), diagnosis and geographic characteristics. Health status: There is consistent evidence for a relationship between poor health status and overall lower satisfaction levels; for literature reviews see Badri et al. (2009) [20] and Batbaatar et al. (2017) [14]. In the present study, participants with migration background were found to exhibit a significantly lower functioning level as well as a higher severity of mental disorder compared to non-migrants. According to previous research, this finding would imply a lower satisfaction level among migrants. However, migrants reported being overall more satisfied with their treatment, despite their lower health status. Diagnosis: In our sample, the diagnosis schizophrenia (ICD-10: F2x) was more prevalent among migrant compared to non-migrant subjects. This is in line with previous findings on elevated risks among migrant groups to develop nonaffective psychoses, see Fearon and Morgan (2005) for a review [39]. The potential confounding role of diagnosis in the relationship between migration and treatment satisfaction appears to be small: the majority of studies about the relationship between satisfaction and diagnosis found no [13] or inconsistent effects [40,41]. Geographic characteristics: In the present study, subjects with migration background were more often residents of medium populated areas (20′001-500′000 inhabitants), whereas participants without migratory background inhabited lower populated areas (≤ 20′000 inhabitants) more frequently. No differences were found in highly populated areas (> 500′000 inhabitants). Research on the relationship between population density and treatment satisfaction shows inconsistent results. A few studies showed a higher overall satisfaction among rural residents [42,43], whereas the majority of studies reported no [44,45] or varying differences between rural and urban populations in treatment satisfaction [46][47][48]. Thus, research assigns solely a minor role to geographical characteristics in explaining treatment satisfaction [48]. To conclude, the effects of health status, diagnosis and geographic characteristics on the revealed differences in treatment satisfaction between patients with and without migratory background remain unclear. Thus, future research must be undertaken to further clarify the significance of sociodemographic characteristics for treatment satisfaction among migrant populations.
Furthermore, differences in satisfaction can be attributed to disparities in the provision of treatments (e.g., treatment accessibility). Regulated by a nationwide framework for demand planning, the geographic distribution of services is based on aspects like number of inhabitants per physician or specific regional characteristics. However, particularly in outpatient mental healthcare, there is a wide range in the supply density. For instance, patients from higher populated areas (metropolitan areas or Western Germany) experience a higher density of outpatient psychiatric and psychotherapeutic services [49,50]. According to that, it would be expectable to find a lower service utilization among participants without migration background as they were more often residents of lower populated areas (โ‰ค 20โ€ฒ000 inhabitants). However, as mentioned above, no significant intergroup differences in the utilization of mental healthcare services were detected. Around twothirds of migrants and non-migrants reported the use of individual psychotherapy and nearly 100% the use of medication in the past 12 months (including out-and the current inpatient setting at the time of the study). Likewise, no differences were found in outpatient psychological-psychotherapeutic and medical-psychiatric treatment 3 months before admission to the clinic. In both groups the utilization rate was one-fourth for outpatient psychological-and around one-third for outpatient medical-psychiatric treatments. Concerning psychosocial treatments, patients with and without migration background did not exhibit differences in the lifetime use -participants of both groups reported to have received about one-third of the presented psychosocial interventions. At first sight, our findings are not in line with recent findings, which indicate an underrepresentation of migrant patients in the German in-and outpatient mental healthcare system [7][8][9]. However, comparing the proportion of migrants in the overall German population (26%) [1] with the proportion of migrants in the study population (19%), an underrepresentation of people with a migration background in the participating inpatient health care facilities can be observed. Thus, it remains unclear to which degree the underrepresentation of migrant participants in our study affected our results. Equivalent to the results on service use, no difference in the degree of accordance between needed and actually received mental healthcare was found between migrants and non-migrants. Likewise, the secondary analyses that were conducted did not show intergroup differences (see supplement). This indicates that migrants and nonmigrants received treatment according to their needs to a comparable extent. Despite the significance of met and unmet needs in the theoretical construct of patient satisfaction [16,51] and also existing evidence on the relationship between unmet needs and lower satisfaction [52], the explored differences on satisfaction in our sample cannot be (sufficiently) explained by the degree of met and unmet needs. Moreover, our results showed that the number of unmet needs exceeds the number of met needs in each category examined -for both groups. These findings differ from previous results reported in the literature -Swedish researchers found a higher total number of unmet needs among the migrant compared to the non-migrant group and a higher number of met needs compared to unmet needs for both groups [53]. 
However, to our knowledge, research on fulfillment of patient's needs among migrant groups does not exist for a German population and is also internationally sparse. Hence, further research needs to be performed to investigate the role of fulfillment of needs in patient satisfaction among migrants in the mental healthcare system (e.g., with emphasis on culturally sensitive evaluation tools on patient's needs). Participants with vs. without migration background significantly differed in their Overall Satisfaction and only numerically in the remaining categories (except for Information). Overall Satisfaction is not composed of the remaining specific categories of satisfaction (e.g., Professionals' Skills) but rather represents a superordinate and more general level of satisfaction assessed by three items: satisfaction with the amount of help received, the kind of treatment services and the overall treatment services. On the one hand, it would be plausible to assume, a higher overall level of satisfaction is also reflected in a higher satisfaction on specific aspects of treatment e.g., on the information provided about diagnosis and treatment forms (Information). On the other hand, our results highlight the complexity of the construct treatment satisfaction. Despite its popularity, there is uncertainty about the construct and its various dimensions based on patient's expectations, needs and their actual experiences [16,54]. Another possible explanation of the seemingly inconsistent results on treatment satisfaction (overall vs. specific) might be a lack of discriminatory power between the two compared samples (participants with vs. without migration background). When comparing participants born in Germany with 1st generation migrants in our sample, differences in treatment satisfaction expand -1st generation migrants reported to be more satisfied with most of the presented categories (Professionals' Skills and Behavior, Efficacy, Relatives Involvement, and Overall Satisfaction) -except for Information and Access, where we did not detect a significant difference (see Supplementary Table 6). First generation migrants exhibit -unlike 2nd generation migrants -direct migration experiences and are possibly more influenced by the sociocultural factors of the country of origin. Therefore, it is essential to investigate the detected differences in treatment satisfaction against the background of sociocultural factors as well as factors related to migration itself. Culture, defined as collective phenomenon that characterizes persons, which share a set of defining values, norms, and attitudes [55], has uncontestably an influence on people's social behaviors -and thus possibly on the expression of dissatisfaction [56]. The cultural dimensions by Hofstede are a wellestablished framework to describe and compare cultures [57]. One of its central components is the dimension individualism-collectivism, which describes the level of integration into groups. Whereas in individualist cultures ties are rather lose and the individual plays the central role, in collectivist cultures the focus is on groups and tightly integrated social relationships [58]. Numerous studies have investigated the impact of individualism and collectivism on social interaction patterns [56]. It was found that collectivist -in contrast to individualist cultures -tend to avoid the expression of dissatisfaction in order to avert potential conflicts [59][60][61]. 
In our study, the specific country of origin was not assessed and must be considered as limitation (see below). However, based on the current migration report of the German government (as of 2021), the four most prevalent migration backgrounds (1st or 2nd generation) are Turkish (13%), Polish (11%), Russian (7%) and Kazakh (6%) [62]. Each of these nations are -according to Hofstede's research [58] -collectivist societies, whereas Germany is categorized as predominately individualistic society. Although Hofstede's cultural dimensions must be interpreted with caution due to vast methodological and conceptual limitations (e.g., lack of sample representativeness, underestimation of a nation's heterogeneity, neglection of non-cultural causation) [63], possible cultural differences (individualism -collectivism) and thus the preparedness to express discontentment rather than avoiding it, might partly explain differences in self-reported treatment satisfaction between patients with and without migration background. Socially desirable behavior and therefore the expression of satisfaction is not only related to cultural aspects. Moreover, it was observed that the expression of satisfaction is also associated with migration itself [16,63]. Recent qualitative research indicates a lack of expressed dissatisfaction in case of inadequate treatment services among patients with migration background [64,65]. Language skills, obligation of gratitude and thus socially desirable behavior are discussed as possible influential factors on expressing or mitigating negative treatment experiences [16,[63][64][65]. Additionally, previous literature has discussed the possible impact of prior treatment expectations on treatment satisfaction [16]. It was considered that unsatisfying experiences with the healthcare system in the country of origin lead to a mitigation of negative treatment experiences and thus to an overestimation of satisfaction. However, according to the confirmation bias -the tendency to select, determine and interpret information in a manner that fulfils (confirms) one's prior expectations -negative treatment experiences might rather lead to a decreased treatment satisfaction. In conclusion, specific sociocultural (e.g., individualism vs. collectivism) as well as migration-related factors (e.g., language barriers, social desirability) might contribute to a higher actual or solely expressed satisfaction. In that regard, further research is needed to investigate the impact of migration and cultural norms on the perceived as well as expressed treatment satisfaction among migrant patients. There are some limitations concerning the results of this study. As first limitation, results on migrant groups must be interpreted with caution. Migrant groups are highly heterogeneous and exhibit a great diversity of cultural, ethnic, religious and social backgrounds and therefore diverging experiences, attitudes and behaviors towards healthcare [22]. Moreover, distinction was made in our sample neither by country of origin nor by reason for migration. Given that the impact of both aspects could not be examined for the present research questions, results must thus be treated with caution. Furthermore, as we conducted our study in psychiatric and psychosomatic inpatient and day hospital facilities, the participants received intense and diverse treatment services. 
Therefore, the degree of satisfaction, which our participants exhibited, might not be generalizable to other settings (e.g., outpatient services) or to the general population. Also, our study did not include the assessment of needs identified by clinicians due to the focus on the patient's perspective of the overall project. Therefore, data based on patients' needs must be interpreted with regard to the missing comparative variable. Moreover, significant differences in geographic characteristics, health status and diagnosis between participants with and without migration background were detected. Given the differences, results must be interpreted with caution. The following limitation concerns the number of different variables. While we corrected post-hoc contrasts for multiple comparisons, we did not adjust analyses according to the total number of tested variables. Hence, the exploratory findings must be replicated in independent samples. Furthermore, the proportion of migrants in our sample (19%) does not correspond to the proportion of migrants in the German population as a whole (26%) [1]. Even though it was expectable due to the found underrepresentation of migrants in the health system [6], it can bias the present results as it does not capture the remaining 7%, which might have stayed away from the mental healthcare system due to dissatisfaction or were not included in the study due to language barriers. Finally, we are not able to make statements regarding those patients who did not want to participate or who stopped the trial early. Thus, a selection bias regarding those patients with a higher level of satisfaction cannot be completely ruled out. The same risk for a selection bias can be assumed by the inclusion criterion of having sufficient German language skills. Conclusion To our knowledge, the present work is the first comparative multicenter study on satisfaction with mental healthcare services between migrant and non-migrant inpatients in German psychiatric facilities. Taken together, the present findings indicate a higher overall satisfaction with mental healthcare services among migrants. Simultaneously, no differences in service use as well as in met/unmet needs in mental healthcare were detected between patients with and without migratory background. Moreover, findings on satisfaction appear to be associated with sociocultural and migration-related factors -the explored differences on treatment satisfaction increased when comparing 1st generation migrants with native Germans (without migration background or 2nd generation migrants). In conclusion, the present work supports the significance of sociocultural and migration-related factors for (expression of ) treatment satisfaction, e.g., social desirability. Thus, our findings point towards the risk to overlook the needs of patients with migration background in our healthcare system and highlight the importance of an exhaustive exploration of patients' needs and expectations within a culturally sensitive healthcare setting. However, in order to understand the role of sociocultural and migrant-related factors on patient satisfaction and to provide more specific practical implications, further research must be undertaken.
Sideritis elica, a New Species of Lamiaceae from Bulgaria, Revealed by Morphology and Molecular Phylogeny Sideritis elica, from the Rhodope Mountains, is described as a species new to science. Results of a detailed morphological analysis were combined with the data of molecular analyses using DNA barcoding as an efficient tool for the genetic, taxonomic identification of plants. The combination of morphological features distinguishes the new species well: Its first three uppermost leaf pairs are significantly shorter and wider, the branchiness of the stems is much more frequent, the whole plant is much more lanate, and it looks almost white, as opposed to the other closed species of section Empedoclia, which look grayish green. The molecular analysis, based on the rbcL and trnH-psbA regions, supports the morphological data about the divergence of Sideritis scardica and Sideritis elica. The studied populations of the two taxa were found to be genetically distant (up to 6.8% polymorphism for trnH-psbA) with distinct population-specific nucleotide patterns, while no polymorphism in the DNA barcodes was detected within the Sideritis elica population. The results confirm the existence of a new species called Sideritis elica, which occurs in the nature reserve Chervenata Stena, located in the northern part of the Central Rhodope Mountains. There were only 12 individuals found in the locality, which underlines the necessity of conservation measures. Introduction Genus Sideritis (Lamiaceae, Lamioideae) comprises more than 150 species distributed in the temperate and tropical areas of the Northern Hemisphere [1,2] and subdivided into two subgenera: Sideritis and Marrubiastrum (Moench.) Mendoza-Heuer. Southeastern Europe and the Eastern Mediterranean, with about 50 species, represent a center of diversity, particularly of section Empedoclia (Rafin.) Bentham of the subgenus Sideritis, with 45 species in Turkey [3], and about 10 species in Greece and the Balkans in general [4][5][6]. The number of species depends on their taxonomic treatment and concepts, which are not straightforward because of the high level of polymorphism, including ecotype diversity and hybridization among the species [7]. In Bulgaria, Sideritis is represented by four species, two of them belonging to section Hesiodia Bentham (S. montana L. and S. lanata L.), and two belonging to section Empedoclia (S. scardica Griseb. and S. syriaca L.) [8]. All but S. montana are considered rare and endangered species in need of measures for conservation [9][10][11]. Moreover, S. scardica and S. syriaca are considered essential medicinal plants and are subject to cultivation. While the two species of section Hesiodia are discrete and well-distinguishable, there are still some taxonomic uncertainties within the species of section Empedoclia [7]. A typical distribution pattern of Sideritis species of section Empedoclia is the high percentage of endemism [3,12,13]. S. scardica is endemic to the Balkan Peninsula, while S. syriaca sensu lato is believed to have wider distribution [4]. Both species naturally occur exclusively on limestone although they can be cultivated in a broader range of soil pHs [14]. A recent morphometric study on the Bulgarian populations of Sideritis spp. [15] revealed that the individuals in one population, treated as S. scardica, demonstrate substantial differences from those of the remaining populations. This population represents a distinct taxon, which can be very well-distinguished based on morphological traits. 
Modern plant taxonomy could benefit substantially from the application of different DNA barcoding techniques [16][17][18][19][20], which reveal the phylogenetic relationships among taxa and facilitate taxonomic decisions. This is especially important in cases of so-called "cryptic species", i.e., species demonstrating low morphological, but considerable genetic, differences [21][22][23][24]. Genetic markers, including DNA barcoding, have been successfully applied to the study of different aspects of genetic diversity and the evolutionary relationships between different medicinal plants of the family Lamiaceae [25][26][27], including the species of genus Sideritis [28,29]. The application of DNA barcoding markers has proven to be an important tool for various genetic and systematic studies. The objective of the present study was to combine the existing information from the morphological and morphometric studies with the utility of DNA barcoding on a local scale in discriminating Sideritis species. Four of the most frequently used barcoding regions, matK, rbcL, trnH-psbA, and ITS1/ITS2, were used in our study. Both matK and rbcL have been selected as core barcodes by the Consortium for the Barcode of Life (CBOL) Plant Working Group (PWG), and ITS/ITS2 and trnH-psbA have been suggested as supplementary loci. In addition, the utility of these markers has been explored and discussed in previous studies on representatives of the family Lamiaceae [25][26][27][28][29]. Our last, but very important, objective was to provide a description of this new species of genus Sideritis, sect. Empedoclia. Morphological and Morphometric Data The full results of the statistical analysis are presented in [15]; they revealed that the population Chervenata Stena (CHE), classified as S. scardica, differed significantly from the remaining populations of the same species. The distinctive features of CHE are the higher flowering stems, the densely lanate stems and leaves, and the different shape of the uppermost 1-3 pairs of leaves. In more detail, the height of the flowering stems of the individuals here reached 40-60 cm, as opposed to 20-40 cm in the other populations. The leaf indumentum was clearly different, too: it was densely white and lanate in the individuals of the CHE population, as opposed to the grayish green indumentum of the leaves of plants from the other populations (Figures 1 and 2). A very indicative trait was the length/width ratio of the 2-3 uppermost leaf pairs. In the individuals from the CHE population, this ratio ranged between 2 and 3, while in the other populations it ranged between 3 and 6 (Figure 3). These results are summarized in Table 1. The applied cluster and principal component analyses clearly separated the population Chervenata Stena from the remaining populations of both S. scardica and S. syriaca (see [15]). The juvenile seedlings of S. scardica from Trigrad and S. elica differed clearly in their pubescence, although both localities are situated in the same mountain range (Figure 4). It can be seen in the figure that the cotyledons and the stems of the S. elica plants are densely covered by trichomes, while in S. scardica there are only single trichomes, located mostly on the stem.
DNA Barcoding Five specimens from Chervenata Stena and six from Slavyanka Mountain were analyzed using DNA barcode markers. The efficiency of PCR amplification and sequencing for the Sideritis specimens was 100%, although the sequence quality of the amplified products was low for the matK and ITS primers. On the other hand, the DNA barcodes for the rbcL and trnH-psbA regions clearly indicated that the Sideritis specimens from the two floristic regions are genetically separated, with a lack of polymorphism among the samples from the floristic region Chervenata Stena (Figure 5). The divergence was visualized by the presence of insertions and deletions (indels) and/or single nucleotide polymorphisms (SNPs). The trnH-psbA region was the most informative, with indels and SNPs found to be population-specific. The rbcL marker showed a unique SNP (CA/CG), which is also population-specific (Figure 5, black box). The genetic divergence, both within and between populations (Table 2), showed that, when the total polymorphic sites, including insertions/deletions, are taken into account, the genetic divergence amounts to 6.8% for the trnH-psbA marker (35/508, data not shown). In order to address the taxonomic status of the Sideritis specimens from both floristic regions, we performed a BLAST analysis of the region-specific consensus sequences (e.g., S10 and S18) against the NCBI database accessions (for the DNA barcode trnH-psbA) and the NCBI and BOLD databases (for the DNA barcode rbcL).
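As a hedged illustration of how a between-population divergence figure such as the 6.8% reported above for trnH-psbA can be obtained from an alignment, the following Python sketch counts substitutions and indel positions over the aligned length. The sequences and sample names are placeholders, not the study's actual consensus sequences or pipeline.

# Minimal sketch (not the authors' pipeline): estimate between-population
# divergence from two aligned barcode sequences by counting substitutions and
# indel (gap) positions over the alignment length, e.g. 35/508 ~ 6.8% for trnH-psbA.
def pairwise_divergence(seq_a: str, seq_b: str) -> float:
    """Fraction of aligned positions that differ (SNPs + indels).

    Both sequences must come from the same alignment (equal length, gaps
    written as '-'); positions where both sequences have a gap are skipped.
    """
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    compared = differing = 0
    for a, b in zip(seq_a.upper(), seq_b.upper()):
        if a == "-" and b == "-":
            continue                    # shared gap: not informative
        compared += 1
        if a != b:                      # substitution or single-sided gap
            differing += 1
    return differing / compared

if __name__ == "__main__":
    s10 = "ATGC-TTACGGATTC"             # placeholder consensus, Chervenata Stena
    s18 = "ATGCATTACGGCTTC"             # placeholder consensus, Slavyanka Mountain
    print(f"divergence: {pairwise_divergence(s10, s18):.1%}")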
The taxonomic assignment (Figure 6) shows that the samples from Slavyanka Mountain belong to Sideritis scardica, forming a cluster with other accessions, mainly of this species, from the NCBI database. The sample representative of the Chervenata Stena population, although close to Sideritis scardica (Figure 5, gene rbcL), is slightly genetically distant and clustered separately. As revealed by the DNA barcode trnH-psbA, the specimens from the Rhodope Mountains clustered predominantly with Sideritis sipylea Boiss. Figure 6. Taxonomic assignment of region-specific Sideritis scardica specimens (S10 and S18, in bold) against BOLD database accessions (trnH-psbA) and BOLD+NCBI (rbcL). The trees were constructed in Geneious software using the Jukes-Cantor genetic distance model and the UPGMA tree construction method. Taxonomic Implications The combined results of the morphometric and DNA barcode studies showed that the population of Chervenata Stena differs from the other populations of S. scardica and suggest that it represents a distinct species. Etymology: The new species is named Sideritis elica in honor of Elka Aneva (the mother of Ina Aneva), in recognition of her botanical enthusiasm and inspirational help during all of the field studies. There is some overlap in the morphological and quantitative characteristics, allowing for the hypothesis that the new species possesses some characteristics of a cryptic species. Along with the morphological differentiation, there are clear differences in the DNA barcoding markers trnH-psbA and rbcL, as shown by the phylogenetic analysis (Figure 6). Sideritis elica is a narrow endemic, occurring on limestone in the northern part of the Central Rhodope Mountains in Bulgaria. The species grows at the periphery of a mixed forest of Pinus nigra Arn. and Ostrya carpinifolia Scop. Some individuals also occur within Juniperus deltoides R.P. Adams (J. oxycedrus L.) scrub, at an altitude of 1180-1210 m. The population size is low; the exact number of individuals is not known because no detailed inventory has been performed so far, but a rough estimate allows for the conclusion that there are not more than ~20 individuals in the locality (only 12 individuals were registered in an area of 0.5 ha). The individuals have relatively large sizes (in contrast to those of S.
scardica in their natural localities): one individual with an average size of 120 × 80 cm, three individuals with an average size of 90 × 60 cm, two individuals of 80 × 50 cm, two of 60 × 40 cm, and four of 40 × 30 cm. The total area occupied by the individuals is 4.34 m². Relative to the entire area of the natural locality (5000 m²), this equates to a projective coverage of 0.087%. The population is in a critical state; both the area and the projective cover of the species have very low values. The anthropogenic factor has a great influence due to the proximity of the locality to the Martsiganitsa Hut. Currently, no specific threats have been identified; however, the low population size poses a risk per se, related either to direct damage or to genetic erosion of the population. Materials and Methods Morphological and morphometric studies were performed on eight populations, representing six populations of S. scardica and two populations of S. syriaca [15]. Particular attention was given to the population Chervenata Stena (CHE) from the Rhodope Mountains (41°53′ N, 24°51′ E, 1300 m a.s.l.). Details of the studies are presented elsewhere [15]. In addition, we collected seeds of S. scardica from Trigrad (Rhodope Mountains) and of S. elica from the Chervenata Stena locality in order to compare the morphology of juvenile seedlings. The seeds were sown under the same controlled conditions. A dataset of 11 specimens of Sideritis from the locality Chervenata Stena (No. 9-13) and the Slavyanka Mountain (No. 15-20) was used for the DNA barcoding analysis. DNA Extraction, PCR Amplification, and Sequencing Genomic DNA was extracted using an Invisorb® Spin Plant Mini Kit (Invitek Molecular GmbH, Berlin, Germany) following the manufacturer's instructions. DNA quality and quantity were measured with a NanoDrop™ Lite Spectrophotometer (Thermo Fisher Scientific). The genetic diversity of the samples was evaluated based on sequences of universal barcodes for plants: the nuclear ribosomal internal transcribed spacer (ITS), the ribulose-1,5-bisphosphate carboxylase/oxygenase large subunit (rbcL) gene, the maturase K (matK) gene, and the psbA-trnH intergenic spacer. The sequences of the primers used (synthesized by Microsynth) and the PCR conditions are presented in Table 3. PCR amplification was performed in 20-µL reaction mixtures containing approximately 30 ng of genomic DNA, 1× PCR buffer, MgCl2 (2.0 mM for ITS and matK, and 2.5 mM for rbcL and trnH-psbA), 0.2 mM of each dNTP, 0.2 µM of each primer, and 1.0 U Taq DNA polymerase (Solis BioDyne). The quality of the PCR products was checked on a 1% agarose gel containing GoodView™ staining dye. Successful amplicon products were sequenced in both directions by Microsynth (Germany), using the same primers as for the PCR amplification. Candidate DNA barcode sequences for each barcode region were aligned in MEGA-X, and consensus sequences were subjected to further analyses using the software package Geneious. The phylogenetic trees were constructed using the Jukes-Cantor genetic distance model [30] and the UPGMA tree-building method. Evolutionary divergence was tested using the Tamura 3-parameter model [31] implemented in the MEGA-X software [32]. Taxonomic assignment of the Sideritis specimens was performed through BLAST analyses in Geneious against publicly available accessions in NCBI. The estimates of within-population and between-population divergence were calculated in MEGA-X [32].
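The tree-building step described above (Jukes-Cantor distances followed by UPGMA clustering) can be sketched in Python as follows. This is only an illustrative reconstruction under the assumption of pre-aligned input: it uses SciPy's average-linkage clustering (which is UPGMA) rather than Geneious, and the sequences and labels are placeholders, not the study's data.

# Sketch of the tree-construction step: JC69-corrected distances + UPGMA
# clustering. Mirrors, but does not reproduce, the Geneious workflow; the
# aligned sequences below are placeholders.
import math
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

def p_distance(a: str, b: str) -> float:
    # proportion of differing sites, ignoring positions with a gap in either sequence
    sites = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    return sum(x != y for x, y in sites) / len(sites)

def jukes_cantor(p: float) -> float:
    # JC69 correction: d = -3/4 * ln(1 - 4p/3), valid for p < 0.75
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

aligned = {                      # placeholder aligned barcode sequences
    "S10_elica":    "ATGCATTACGGATTC",
    "S18_scardica": "ATGCGTTACGGCTTC",
    "NCBI_sipylea": "ATGCATTACGGCTTC",
}
names = list(aligned)
n = len(names)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d = jukes_cantor(p_distance(aligned[names[i]], aligned[names[j]]))
        dist[i, j] = dist[j, i] = d

# UPGMA = agglomerative clustering with average linkage on the distance matrix
tree = linkage(squareform(dist), method="average")
dendrogram(tree, labels=names, no_plot=True)   # set no_plot=False to draw with matplotlib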
Conclusions In this study, we performed a morphological and DNA barcoding analysis of representatives of Sideritis scardica populations from two geographically distant floristic regions in Bulgaria. The Sideritis population from the reserve Chervenata Stena (CHE) was found to be phenotypically distinct from Sideritis scardica. This allowed us to state that the population from the reserve represents a new a new species we called Sideritis elica Aneva, Zhelev and Bonchev. The genetic divergence between S. scardica and S. elica was supported based on rbcL and trnH-psbA markers. The data has two main implications. First, our study implies that eco-geographical and demographic conditions enhance genetic diversification and occasionally the speciation within the genus Sideritis. Second, our study highlights the importance of the DNA barcoding method to unravel patterns of genetic variability at species in support of classical morphological approaches. Further studies on a larger set of Sideritis populations could give deeper insight into the ecological dynamics of this endemic genus of high medicinal value for Bulgaria, with practical implications for its conservation.
Regularity and irregularity of fiber dimension of non-autonomous dynamical systems This note concerns non-autonomous dynamics of rational functions and, more precisely, the fractal behavior of the Julia sets under perturbation of non-autonomous systems. We provide a necessary and sufficient condition for holomorphic stability which leads to H\"older continuity of dimensions of hyperbolic non-autonomous Julia sets with respect to the $l^\infty$-topology on the parameter space. On the other hand we show that, for some particular family, the Hausdorff and packing dimension functions are not differentiable at any point and that these dimensions are not equal on an open dense set of the parameter space still with respect to the $l^\infty$-topology. Introduction Let F = f ฯ„ ; ฯ„ โˆˆ ฮ› 0 be a holomorphic family of rational functions depending analytically on a parameter ฯ„ โˆˆ ฮ› 0 , ฮ› 0 being some open and connected subset of C d , d โ‰ฅ 2. We investigate the dynamics of functions where each f ฮปj is an arbitrarily chosen function of the family F . Such a dynamical system is usually called non-autonomous. They generalize deterministic dynamics (where all the functions f ฮปj equal one fixed rational map) and random dynamics (where the functions f ฮปj are chosen according to some probability law) that first have been considered by Fornaess and Sibony [FS91]. If ฮป = (ฮป 1 , ฮป 2 , ...) โˆˆ ฮ› N 0 then it is convenient to denote f n ฮป = f ฮปn โ€ข f ฮปnโˆ’1 โ€ข ... โ€ข f ฮป1 . Like in deterministic dynamics, the normal family behavior of (f n ฮป ) n splits the sphere into two subsets. The Fatou set F ฮป , i.e. the set of points for which (f n ฮป ) n is normal on some neighborhood, and its complement the Julia set J ฮป . We are going to investigate the fractal nature of the Julia set J ฮป and, more precisely, the dependence of the fractal dimensions of J ฮป on the parameter ฮป โˆˆ ฮ› N 0 . The deterministic hyperbolic case is completely understood by now. Indeed in 1979, R. Bowen [Bow79] showed that the Hausdorff dimension of the Julia set can be expressed by the zero of a pressure function. The picture was completed by D. Ruelle [Rue82] who showed that this dimension depends real analytically on the function. More recently, random dynamics became an active area and both Bowen's formula and Ruelle's real analyticity result have its counterparts in random dynamics. Bowen's formula has been established for various random dynamical systems (see e.g. [MUS11] and the corresponding references in this monograph) and H. Rugh [Rug] established real analyticity for random repellers. We will see in this note that the situation is completely different in the non-autonomous setting. Bowen's and Ruelle's results are valid for hyperbolic deterministic functions and hyperbolic functions are so called stable functions of the parameter space. In general, it is not possible to expect nice behavior of the Julia sets and of the dimensions of these sets if we perturb an unstable map. Therefore, we first investigate and characterize stability of non-autonomous maps. There are several notions of stability. We consider holomorphic stability that is based on the concept of holomorphic motions and the ฮป-Lemma, which has its origin in the fundamental paper [MSS83] by Manรฉ, Sad and Sullivan. A parameter ฮท โˆˆ ฮ› N 0 is called holomorphically stable if there exists a family of holomorphic motions {h ฯƒ n (ฮป) } n over some neighborhood V ฮท such that the following diagram commutes. In here, ฯƒ(ฮป 1 , ฮป 2 , ...) 
= (ฮป 2 , ฮป 3 , ...) is the usual shift map. Comerford in [Com08] proved stability for certain hyperbolic non-autonomous polynomial maps. We establish the following characterization of holomorphic stability. It is valid under natural dynamical conditions (Julia sets are perfect and the maps are topologically exact; see Definition 2.2) which are necessary in order to exclude some pathological examples. We would like to mention that the usual theory developed by Manรฉ, Sad and Sullivan [MSS83] is based on the stability of repelling periodic points. Such points do not exists at all in the non autonomous setting. Another remark is that the parameter space ฮ› N 0 is infinite dimensional. Theorem 1.1. Suppose that ฮ› โŠ‚ ฮ› N 0 is equipped with a complex Banach manifold structure. Let f ฮท , ฮท โˆˆ ฮ›, have perfect Julia sets and suppose that f ฮป is topologically exact for ฮป in a neighborhood of ฮท. Then, the map f ฮท is holomorphically stable if and only if there exist an open neighborhood V of ฮท and three holomorphic functions ฮฑ n i : V โ†’ฤˆ, i = 1, 2, 3, such that (1.2) ฮฑ n i (ฮป) โˆˆ J ฯƒ n (ฮป) and ฮฑ n i (ฮป) = ฮฑ n j (ฮป) for all ฮป โˆˆ V and i = j. (1.4) If ฮฑ n+k i (ฮป) = f k ฯƒ n (ฮป) (ฮฑ n j (ฮป)) for some ฮป โˆˆ V then this equality holds for all ฮป โˆˆ V . Remark 1.2. Throughout the whole scope of this paper we could have chosen in each fiber j โ‰ฅ 0 the map f ฮปj in a different family F j of rational maps. In particular, Theorem 1.1 and the whole Section 3 on holomorphic stability does hold without any restrictions on these families F j , j โ‰ฅ 0. Only starting from Section 4 we need some further control like, for example, a uniform bound on the degree of the functions. We do not insist for such a generalization simply because the notations are already involved enough. This characterization is in the spirit of the stability of critical orbits in the deterministic case, i.e. the stability of orbits where c ฮป is a critical point of f ฮป . By Montel's Theorem, such an orbit is stable if it avoids three values ฮฑ n 1 (ฮป), ฮฑ n 2 (ฮป), ฮฑ n 3 (ฮป) depending holomorphically on ฮป and staying some definite spherical distance apart. Such a condition appears in Lyubich's paper [Lyu86] which itself is based on the previous work by Levin [Lev81]. It turns out that this is the right point of view for generalizing the characterization of stability to the non-autonomous setting. Hyperbolic random and non-autonomous polynomials have been studied for example by Comerford [Com06] and Sester [Ses99]. Sumi considered in [Sum97] hyperbolic semi-groups. The definition of hyperbolicity is based on a uniform expanding property, and this is the reason why we will call such maps uniformly hyperbolic. We will consider hyperbolic and uniformly hyperbolic non-autonomous maps. Later in the course of the paper we will see that they have normal critical orbits and are therefore holomorphically stable provided we equip the parameter space with the l โˆž -topology. Using standard properties of quasiconformal mappings we get the following Hรถlder continuity result of the dimensions. Theorem 1.3. For every uniformly hyperbolic map f ฮท there is a neighborhood V of ฮท in l โˆž (ฮ› 0 ) such that the functions ฮป โ†’ HD(J ฮป ) and ฮป โ†’ PD(J ฮป ) (in fact all fractal dimensions) are Hรถlder continuous on V with Hรถlder exponent ฮฑ(ฮป) โ†’ 1 if ฮป converges to the base point ฮท. 
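For reference, the commuting-diagram condition (1.1) behind the definition of holomorphic stability did not survive the text extraction above. A plausible reconstruction from the surrounding material (Definition 3.3 and the conjugating relation (3.7) used in the proof of Theorem 3.6) is the following; it is offered as an editorial aid, not as the authors' exact display:

\[
\begin{array}{ccc}
J_{\sigma^{n}(\eta)} & \xrightarrow{\,f_{\sigma^{n}(\eta)}\,} & J_{\sigma^{n+1}(\eta)}\\
h_{\sigma^{n}(\lambda)}\big\downarrow & & \big\downarrow h_{\sigma^{n+1}(\lambda)}\\
J_{\sigma^{n}(\lambda)} & \xrightarrow{\,f_{\sigma^{n}(\lambda)}\,} & J_{\sigma^{n+1}(\lambda)}
\end{array}
\qquad\text{i.e.}\qquad
h_{\sigma^{n+1}(\lambda)}\circ f_{\sigma^{n}(\eta)} \;=\; f_{\sigma^{n}(\lambda)}\circ h_{\sigma^{n}(\lambda)}
\ \text{ on } J_{\sigma^{n}(\eta)},\quad n\ge 0.
\]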
As already mentioned before, in deterministic as well as in random dynamics one has much more, namely, real analytic dependence of the dimension [Rue82,Rug]. Surprisingly it turned out that in the non-autonomous setting the Hรถlder continuity obtained in Theorem 1.3 is best possible. Indeed we show the following. Theorem 1.4. Consider the quadratic family and let ฮ› be the interior of ฮ› N 0 โˆฉ l โˆž (ฮ› 0 ) for the l โˆž -topology. Then ฮ› = ฮ› uHyp (see Definition 4.2 ) and the functions ฮป โ†’ HD(J ฮป ) and ฮป โ†’ PD(J ฮป ) are not differentiable at any point ฮท โˆˆ ฮ› when equipped with the l โˆž -topology. In order to prove this result we first produce conformal measures, introduce and study fiber pressures and establish an appropriate version of Bowen's formula. Considering the family F in greater detail we also show that generically the different fractal dimensions are not identical. Theorem 1.5. Let F and ฮ› be like in Theorem 1.4. Then, there exists an open and dense set ฮฉ โŠ‚ ฮ› such that HD(J ฮป ) < PD(J ฮป ) for every ฮป โˆˆ ฮฉ . Non-autonomous dynamics Rational functions are holomorphic endomorphisms of the Riemann sphereฤˆ and the spherical geometry is the natural setting to work with. Therefore, all distances, disks and derivatives will be understood with respect to the spherical metric. We always assume that ฮ› 0 is an open and connected subset of C d for some d โ‰ฅ 2 and that F = f ฯ„ ; ฯ„ โˆˆ ฮ› 0 is a holomorphic family of rational functions which means that f ฯ„ is a rational function for every ฯ„ โˆˆ ฮ› 0 and that (ฯ„, z) โ†’ f ฯ„ (z) is a holomorphic map from ฮ› 0 ร—ฤˆ toฤˆ. We are interested in the dynamics of where the f ฮปj โˆˆ F or, equivalently, the ฮป j โˆˆ ฮ› 0 are arbitrarily chosen. Let ฯ€ : ฮ› N 0 โ†’ ฮ› 0 be the canonical projection on the first coordinate and let ฯƒ : be the shift map ฯƒ(ฮป 1 , ฮป 2 , ...) = (ฮป 2 , ฮป 3 , ...). To ฮป = (ฮป 1 , ฮป 2 , ...) โˆˆ ฮ› we associate a nonautonomous dynamical system by first identifying f ฮป with f ฯ€(ฮป) = f ฮป1 and then by setting A straightforward generalization of the deterministic case leads to the following definitions. The Fatou set of (f n ฮป ) n is F (f ฮป ) = z โˆˆฤˆ ; (f n ฮป ) n is a normal family near z and the Julia set J (f ฮป ) =ฤˆ \ F (f ฮป ). Most often there will be only one non-autonomous map f ฮป associated to the parameter ฮป. Then we will use the simpler notations F ฮป and J ฮป . For these sets we have the invariance property Here are some basic definitions and observations concerning these non-autonomous dynamical systems. Lemma 2.1. The Julia set J ฮป of a non-autonomous map f ฮป is either infinite or there exists N โ‰ฅ 0 such that, for every n โ‰ฅ N , J ฯƒ n (ฮป) consists in at most two points. Proof. From the invariance property (2.1) it is clear that either all the sets J ฯƒ n (ฮป) , n โ‰ฅ 0, are simultaneously infinite or finite and that the sequence n ฮป = #J ฯƒ n (ฮป) is decreasing hence stabilising when finite. Suppose that #J ฮป < โˆž and let N be the first integer such that n ฮป = (n + 1) ฮป for every n โ‰ฅ N . Since, by assumption, the functions of F are not injective, it follows that every point of J ฯƒ N (ฮป) is a totally ramified point of f ฯƒ N (ฮป) . Therefore we are done since a rational map of degree at least two has at most two such points. As usually, J ฮป is called perfect if it does not have isolated points. In the case where J ฮป is an infinite set then it is automatically perfect provided the map satisfies the following mixing property. Definition 2.2. 
A map f λ is topologically exact if, for every open set U that intersects J λ , there exists N ≥ 1 such that f N λ (U ) ⊃ J σ N (λ) . As we will see in Example 2.3, non-autonomous maps need not be topologically exact. However, this mixing property is satisfied in most natural settings and is a mild natural dynamical condition. Büger [Büg97] showed that polynomial non-autonomous maps with bounded coefficients are topologically mixing. These results strongly suggest that f λ is topologically exact if {λ j } j is pre-compact in Λ 0 . Non-autonomous maps are very general and many of the basic properties valid in the deterministic case are no longer true here. For example, in the deterministic case a point is in the Julia set if no subsequence of the iterates is normal. Also, deterministic Julia sets are known to be perfect sets. Both of these properties are no longer true in the non-autonomous setting. To illustrate this and some other particularities we provide here two simple examples. Example 2.3. Let f (z) = z 2 and h j (z) = α j z for some α j > 0, j ≥ 0. There are numbers λ j > 0 such that for every j ≥ 1 In other words, the deterministic map f is conjugated by the similarities (h j ) j to the non-autonomous map f λ . The numbers α j can be chosen such that f n λ (z) = f n (z) = z 2 n for even n and f n λ (z) = r n f n (z) = r n z 2 n for odd n. Here the coefficients r n are chosen to decrease to zero so fast that the sequence (f n λ ) n odd is normal at every finite point z ∈ C. Notice that then (f n λ ) n odd is not normal at infinity, from which it easily follows that In particular, this example shows that the conjugation (2.2) does not preserve the Julia sets. Also, the initial system is perfect and topologically exact whereas the new non-autonomous map has neither of these properties. Example 2.4. Consider f a hyperbolic rational function such that the Fatou set of f has infinitely many distinct connected components U 1 , U 2 , .... For example, one might take f (z) = z 2 + c where c = −0.123 + 0.745i and where the associated Julia set J (f ) is Douady's rabbit. Now, similarly to the first example, we will modify this deterministic map by conjugating it to a non-autonomous map f λ where n . This time, M n = Id for even n and, for odd n, M n is a Möbius transformation of the Riemann sphere such that M n (U n ) ⊃ Ĉ \ D(0, r n ) where r n → 0. Notice that f 2 σ 2k (λ) = f 2 for every k ≥ 0. It follows that the deterministic set J (f ) is a subset of the non-autonomous set J λ . On the other hand, it is easy to see that F (f ) ⊂ F λ . Therefore, both systems have the same Julia set J (f ) = J λ . In this example, the conjugation preserves the Julia and Fatou sets. However, although we started from a hyperbolic hence expanding function f , for the non-autonomous map f λ we have that Further examples with pathological properties can be found e.g. in [Brü01] and especially in the very interesting papers [Sum10,Sum11] by H. Sumi. Both of the above examples are obtained by conjugating a deterministic map. The reason why in both cases the resulting dynamics differ from the original ones is the lack of equicontinuity of the conjugating family of similarities or Möbius transformations, respectively. Given this observation it is natural to introduce the following definition. Definition 2.5.
Two non-autonomous maps f ฮป and f ยต are conjugated if there are homeomorphisms h j :ฤˆ โ†’ฤˆ such that If in addition the families {h j } j and {h โˆ’1 j } j are equicontinuous then f ฮป and f ยต are called bi-equicontinuous conjugated. In the case the homeomorphisms h j being (quasi)-conformal then we say that the maps are (quasi)-conformally conjugated or (quasi)-conformally biequicontinuous conjugated. The notion of bi-equicontinuous conjugation is consistent with the notion of affine conjugations used by Comerford in [Com03]. Often it is necessary to consider conjugations that do only hold on the Julia sets. But, in order to do so, it is necessary to first ensure that the conjugating maps do identify the Julia sets. Clearly, bi-equicontinous conjugations have this property. As we have seen in Example 2.3, conjugations may not. Nevertheless, in some special cases like in the Example 2.4 Julia sets are preserved. Here is a more general statement where this also holds. Lemma 2.6 (Rescaling Lemma). Suppose that f ฮป is a topologically exact non-autonomous map such that all the Julia sets J (f ฯƒ n (ฮป) ), n โ‰ฅ 0, contain at least three distinct points. Suppose that h n are homeomorphisms ofฤˆ such that 0, 1, โˆž โˆˆ h n (J (f ฯƒ n (ฮป) )) and such that (h n ) n conjugates f ฮป to the non-autonomous map g ฮป . Then J (g ฯƒ n (ฮป) ) = h n (J (f ฯƒ n (ฮป) )) for every n โ‰ฅ 0 . Proof. It suffices to establish the required identity for n = 0, i.e. we have to show that J (g ฮป ) =J ฮป ifJ ฮป = h 0 (J (f ฮป )). Let ฮฑ n 1 , ฮฑ n 2 , ฮฑ n 3 โˆˆ J ฯƒ n (ฮป) be the points that are mapped by h n onto 0, 1, โˆž respectively. Ifz โˆˆJ ฮป then it is easy to see from the conjugations thatz has an open neighborhood U such that g n ฮป (U ) does not contain any of the points 0, 1, โˆž. Therefore, Montel's Theorem yields thatฤˆ \J ฮป โŠ‚ F (g ฮป ) or, equivalently, that J (g ฮป ) โŠ‚J ฮป . Suppose now that there existsz โˆˆJ ฮป โˆฉ F (g ฮป ). Then there exists an open neighborhood U ofz such that (g n ฮป ) n is normal on U . Let ฯ• be the limit on U of a convergent subsequence of (g n ฮป ) n . Shrinking U if necessary, we may assume that one of the points 0, 1, โˆž is not in ฯ•(U ). LetW be an open neighborhood ofz such thatW is relatively compact in U . Since By assumption, the map f ฮป is topologically exact. Therefore, there is N > 0 such that f n ฮป (W ) โŠƒ J ฯƒ n (ฮป) for every n โ‰ฅ N . It follows that g n ฮป (W ) โŠƒ {0, 1, โˆž} for every n โ‰ฅ N . But then we get the contradiction that {0, 1, โˆž} โŠ‚ ฯ•(U ). We showed thatJ ฮป โŠ‚ J (g ฮป ) and thus both sets coincident. Stability and normality of critical orbits In this section we study holomorphic stability and establish, in particular, Theorem 1.1. We would like to mention that Comerford in [Com08] has a partial result in this direction. He shows holomorphic stability for certain polynomial non-autonomous systems provided they are hyperbolic. Our result is an if and only if condition for the stability of a general nonautonomous rational map. The condition relies on the dynamics of the critical orbits and, due to the great generality of non-autonomous systems, we are lead to consider two different conditions of normal critical orbits. In the Proposition 3.5 and in Theorem 3.6 we relate them to holomorphic stability and they yield Theorem 1.1. In the following we suppose that ฮ› โŠ‚ ฮ› N 0 is a complex Banach manifold. A canonical choice is to take ฮ› = ฮ› N 0 and to equip this space with the Tychonov topology. 
A more relevant example is to work with the l โˆž -topology. Given any function ฯ‰ : Starting from Section 4 we most often deal with uniform hyperbolic maps (see Definition 4.2). Then the natural associated parameter space is ฮ› = l โˆž (ฮ› 0 ), i.e. the space l โˆž ฯ‰ (ฮ› 0 ) with weight function ฯ‰ โ‰ก 1. 3.1. Holomorphic motions. Since this section relies on quasiconformal mappings and holomorphic motions, we start by summarizing some facts from this theory. Let ฮท โˆˆ ฮ› be a base point. Definition 3.1. A holomorphic motion of a set E โŠ‚ฤˆ over ฮ› is a mapping h : ฮ› ร— E โ†’ฤˆ having the following three properties. โ€ข As already mentioned in the introduction, Manรฉ, Sad and Sullivan [MSS83] initially established a ฮป-Lemma stating that any holomorphic motion of a set E โŠ‚ฤˆ over the unit disk of C can be extended to a holomorphic motion of the closure of E. Since then, this ฮป-Lemma has been extensively studied and generalized. Most notably, Slodkowski [Slo95] showed that every holomorphic motion over the unit disk is the restriction of a holomorphic motion of the whole sphere. Hubbard [Hub76] discovered that this is false for holomorphic motions over higher-dimensional parameter spaces and [JM07] contains a simpler example. Nevertheless, we dispose in the following ฮป-Lemma due to Mitra [Mit00] and Yiang-Mitra [JM07]. Theorem 3.2 (ฮป-Lemma). A holomorphic motion h of a set E โŠ‚ฤˆ over a simply connected complex Banach manifold V with basepoint ฮท โˆˆ V extents to a holomorphic motion H of E over V such that Holomorphic stability and normal critical orbits. Here is the precise definition of the stability we use. Notice that, in this definition, the conjugating maps h ฯƒ n (ฮป) are not necessarily bi-equicontinuous. We therefore have to include here that the conjugating maps identify the Julia sets. The set of holomorphic stable parameters is denoted by ฮ› stable . In the theory by Manรฉ, Sad and Sullivan [MSS83] and, independently, Lyubich [Lyu86], showing in particular density of stable parameters in any deterministic holomorphic family of rational functions, appear several equivalent characterizations of stability. Most of this theory relies heavily on the stability of repelling cycles which, in the present non-autonomous setting, do not exist at all. There is one criterion of stability in [Lyu86] which turns out to be appropriate for generalization to the present setting. This criterion exploits the dynamics of the critical orbits .. under perturbation of ฮป. Indeed, stability coincides with the normality of these orbits and, as already mentioned in the introduction, Montel's Theorem implies that such an orbit is stable if it avoids three values ฮฑ n 1 (ฮป), ฮฑ n 2 (ฮป), ฮฑ n 3 (ฮป) depending holomorphically on ฮป and staying some definite distance apart. It is therefore natural to make the following definition. for some ฮป โˆˆ V then this equality holds for all ฮป โˆˆ V . Notice that (3.2) is precisely (1.3) and the compatibility condition (3.3) is also exactly the condition (1.3) of Theorem 1.1. Only the first condition (3.1) differs from the corresponding one in Theorem 1.1. It is a normalized version of condition (1.2) in which we allow the functions ฮฑ n j to have values not only in the corresponding Julia set but in the whole Riemann sphere. If, in this definition, the condition (3.1) is replaced by (1.2), then we will say that f ฮท has normal critical orbits in the sense of Theorem 1.1 on V . Proposition 3.5. 
Suppose that ฮท โˆˆ ฮ› stable is a holomorphic stable parameter and that J ฮท is a perfect set. Then f ฮท has normal critical orbits in the sense of Theorem 1.1. Proof. Consider first the map f ฮท and let us define the points ฮฑ n j (ฮท) by induction. Since J ฮท is perfect, there exist three distinct points ฮฑ 0 1 (ฮท), ฮฑ 0 2 (ฮท), ฮฑ 0 3 (ฮท) โˆˆ J ฮท . Suppose that all the points ฮฑ k j (ฮท) are defined for 0 โ‰ค k < n. The set J ฯƒ n (ฮท) is also perfect and so there are distinct points By assumption there are holomorphic motions {h ฯƒ n (ฮป) } n such that Definition 3.3 is satisfied. The following main result of this section goes in the opposite direction. Notice that here we do not need any additional assumption. So, in particular, no topological exactness is needed. Theorem 3.6. Suppose that f ฮท has normal critical orbits. Then f ฮท is holomorphically stable, i.e. ฮท โˆˆ ฮ› stable . Moreover, the corresponding family of holomorphic motions is biequicontinuous; it gives rise to a bi-equicontinuous conjugation. Before giving a proof of it, let us first explain how Theorem 1.1 results. Proof of Theorem 1.1. Given Proposition 3.5 we only have to show that normality of critical orbits in the sense of Theorem 1.1 implies holomorphic stability. Let f ฮท be a map such that there exist functions ฮฑ n 1 , ฮฑ n 2 , ฮฑ n 3 defined and holomorphic on some neighborhood V of ฮท such that the conditions (1.2), (1.3) and (1.4) are satisfied. Let M ฯƒ n (ฮป) be a Mรถbius transformation sending the points ฮฑ n j (ฮป), j = 1, 2, 3, to 0, 1, โˆž and considerf ฯƒ n (ฮป) defined by By assumption, f ฮป is topologically exact near ฮท, say on V . Therefore, Lemma 2.6 applies and yields that for all ฮป , n . Since the functions ฮป โ†’ ฮฑ n j (ฮป) are holomorphic on V , it suffices to establish holomorphic stability off ฮท . This new functionf ฮท has normal critical orbits (with functionsฮฑ n j constant 0, 1 or โˆž) and so we would like to conclude by applying Theorem 3.6. However, on every fiber the mapf ฯƒ j (ฮป) , j โ‰ฅ 0, belongs to a different holomorphic family F j = {f ฯƒ j (ฮป) ; ฮป โˆˆ V }. But, as already mentioned in Remark 1.2, the whole paper and especially Theorem 3.6 does hold in this generality with the same proof. Thereforef ฮท is holomorphically stable. The remainder of this section is devoted to the proof of Theorem 3.6. In order to do so, suppose from now on that f ฮท has normal critical orbits: there are V , an open neighborhood of ฮท, and holomorphic functions ฮฑ n j such that the conditions of Definition 3.4 are satisfied. Consider the sets {ฮฑ n 1 (ฮป), ฮฑ n 2 (ฮป), ฮฑ n 3 (ฮป)} , j โ‰ค n and (3.5) E ฯƒ j (ฮป) = nโ‰ฅj E ฯƒ j (ฮป),n , ฮป โˆˆ V and j โ‰ฅ 0 . Proposition 3.7. For every j โ‰ฅ 0, there are holomorphic motions h ฯƒ j (ฮป) : Proof. We explain how to obtain the motions in the case j = 0. The general case is proven exactly the same way. Let z ฮท โˆˆ E ฮท and let n โ‰ฅ 0 be minimal such that z ฮท โˆˆ E ฮท,n . A point z ฮท โˆˆ E ฮท,n if f n ฮท (z ฮท ) = ฮฑ n i (ฮท) for some i โˆˆ {1, 2, 3}. Hence, we have to consider the equation . We want to apply the implicit function theorem to this equation and get z as a function of ฮป. This is possible as long as (f n ฮป ) โ€ฒ (z) = 0. If (f n ฮป ) โ€ฒ (z) = 0, then the point ฮฑ n i (ฮป) is a critical value of f n ฮป . However, the assumption (3.2) implies that this is not the case for ฮป โˆˆ V . 
Therefore there is a uniquely defined holomorphic function λ → z λ , λ ∈ V , starting at the given point z η if λ = η, and such that (λ, z λ ) is a solution of (3.8). Therefore, we can define If ever z λ ∈ E λ,k ∩ E λ,n for some λ ∈ V and 1 ≤ k ≤ n, then there are i, j ∈ {1, 2, 3} such that α n i (λ) = f n−k σ k (λ) (α k j (λ)). But then the compatibility condition (3.3) implies that the last equation holds for all λ ∈ V and that it does not matter for the definition of the function λ → z λ whether we start with α n i (η) or with α k j (η). The normalization (3.6) and the conjugating relation (3.7) are clearly satisfied simply by the way we constructed the holomorphic motions. Hence, the proof is complete. We are now able to conclude the proof of Theorem 3.6 since we can now apply Mitra's version of the λ-Lemma. Indeed, Theorem 3.2 asserts that the motions h σ j (λ) extend to holomorphic motions of the closure K σ j (λ) . We continue to denote these extended motions by h σ j (λ) . These maps h σ j (λ) are global quasiconformal homeomorphisms with dilatation bounded by exp(2ρ V (η, λ)). Therefore, for every fixed λ ∈ V the family (h σ j (λ) ) j is uniformly quasiconformal and normalized by (3.6). Since the points α j i (λ), i = 1, 2, 3, are at a definite spherical distance (see Condition (3.1)), it follows from standard properties of families of uniformly quasiconformal mappings that the conjugation by (h σ j (λ) ) j is bi-equicontinuous. Up to now we have shown that Theorem 3.6 holds but with the Julia sets J σ j (λ) replaced by the sets K σ j (λ) . However it is not hard to see that J σ j (λ) ⊂ K σ j (λ) . Indeed, for every open set U ⊂ Ĉ \ K σ j (λ) we have that Hence, Montel's Theorem along with Condition (3.1) implies that U ⊂ F σ j (λ) . Consequently, J σ j (λ) ⊂ K σ j (λ) for every j ≥ 0. The proof of Theorem 3.6 is complete. From this study of holomorphic stability we obtain a first piece of information concerning our initial problem, namely the behavior of the variation of the Julia sets and of their dimensions. Proof. The assertion on the Hölder continuity results directly from known properties of quasiconformal mappings along with the fact that the distortions of the quasiconformal mappings h σ j (λ) depend only on λ and not on j ≥ 0. Concerning the continuity of the Julia sets, this is a consequence of the continuity of the function (λ, z) → h σ j (λ) (z) (see property (2) of Theorem 3.2). Hyperbolic non-autonomous systems In deterministic dynamics a hyperbolic function is stable. But if we perturb a deterministic hyperbolic function to a non-autonomous map then the stability depends on the topology we use on the parameter space. As an illustration we first consider simple Tychonov convergence and explain that, for this topology, every map is unstable (see Proposition 4.1). Then we investigate non-autonomous hyperbolic and uniformly hyperbolic functions and will see that the latter are stable provided the parameter space is Λ = l ∞ (Λ 0 ). In order to prove their stability it suffices to use Theorem 3.6. Indeed, the normal critical orbits condition is best suited here since it is easy to check for hyperbolic maps. 4.1. Stability and Tychonov topology. Up to here, the parameter space Λ was equipped with an arbitrary complex manifold structure. Let us inspect a particular case. On the one hand we have that λ (n) → η pointwise.
On the other hand we have HD(J ฮป (n) ) = ฮด โ€ฒ for every n โ‰ฅ 1 and hence HD(J ฮป (n) ) โ†’ HD(J ฮท ) as n โ†’ โˆž. But then it follows from Corollary 3.8 that ฮท cannot be a stable parameter. 4.2. Hyperbolicity. Hyperbolic random systems have been studied in various papers (see e.g. [Com06,Ses99] and also [Sum97] where hyperbolic semi-groups are considered). In these papers, normalized most often polynomial families are considered and the definitions of hyperbolicity rely on uniform conditions. We therefore call such functions uniformly hyperbolic. The set of parameters of uniformly hyperbolic random maps is denoted by ฮ› uHyp . For general families of non-autonomous maps this definition is not entirely satisfactory. For instance, in the Example 2.4 we have conjugated a deterministic hyperbolic function by Mรถbius maps. The resulting non-autonomous map does not satisfy the requirements of Definition 4.2 although it shares many properties of maps that should be called hyperbolic. It is uniformly expanding "up to a conformal change of coordinates". Moreover, it is topologically exact which, as we will see (Lemma 4.8), is a property that uniform hyperbolic maps always have. A natural candidate for the class of hyperbolic maps is to take all the maps that are Mรถbius conjugate to uniform hyperbolic maps. However, one has to be careful since the map given in Example 2.3, obtained by conjugation by similarities of a deterministic hyperbolic function, should really not be called hyperbolic. Given these examples and Lemma 2.6 which ensures that the Julia sets are identified provided the dynamics are topologically exact, it is natural to introduce the following definition. |(f N ฯƒ j (ฮป) ) โ€ฒ (z)| โ‰ฅ ฯ„ > 1 for all z โˆˆ V ฮด (J ฯƒ j (ฮป) ) and j โ‰ฅ 0 . In particular, if f ฮป is uniformly hyperbolic then there exist ฮด > 0 such that for all n โ‰ฅ 1, j โ‰ฅ 0 and z โˆˆ J ฯƒ n+j (ฮป) all holomorphic inverse branches of f n ฯƒ j (ฮป) are well defined on D(z, ฮด) have uniform distortion and are uniformly contracting. In the case of deterministic iteration of rational functions there are several equivalent conditions for hyperbolicity. One of them is the expanding condition, another condition demands that critical orbits are captured by attracting domains. Here is a version in the non-autonomous case which in fact is an adaption of [Ses99]. (1) f ฯƒ j (ฮป) (U j ) โŠ‚ U j+1 and dist S (f ฯƒ j (ฮป) (U j ), โˆ‚U j+1 ) โ‰ฅ m 0 , (2) D(z, m 0 ) โˆฉ U j = โˆ…, for every z โˆˆ J ฯƒ j (ฮป) , and (3) the critical points of f ฯƒ j (ฮป) are contained in U j . Proof. Since most of the proof is standard we only give a brief outline of it. Especially, finding the sets U j knowing that f ฮป is uniformly hyperbolic is a straightforward adaption of Sester's arguments [Ses99, which themselves are based on the deterministic case. The main idea is to build a metric in which all the functions f ฯƒ j (ฮป) have a derivative greater than some constant ฮณ > 1 on V ฮด (J ฯƒ j (ฮป) ) for some ฮด > 0. The proof of the opposite implication is based on hyperbolic geometry. Suppose the sets U j are given, ). Then f ฯƒ j (ฮป) :ลจ j โ†’ V j+1 is a proper map and, the critical orbits being captured by the domains U j (see (3)), f ฯƒ j (ฮป) : ฯ‰ j โ†’ ฮฉ j+1 is a covering map where ฯ‰ j , ฮฉ j+1 is the complement of the closure ofลจ j , V j+1 respectively. Therefore this map is a local hyperbolic isometry with respect to the hyperbolic distances of these domains. 
Property (1) implies that there is 0 < c < 1 such that the inclusion map i : ω j+1 → Ω j+1 is a hyperbolic c-contraction for all j ≥ 0. Combining these properties it follows that f σ j (λ) is a 1/c-expansion on J σ j (λ) ⊂ ω j ∩ f −1 (ω j+1 ) with respect to the hyperbolic distances of ω j and ω j+1 . Finally, it follows from property (2) that it is possible to compare the hyperbolic and spherical distances for points in J σ j (λ) ⊂ ω j , j ≥ 0, and to conclude. The topological characterization of Proposition 4.5, and especially the uniform control due to the constant m 0 , implies the following. This result immediately implies the following continuity property of non-autonomous Julia sets which, in various versions, is well known to specialists (see for example [Brü00, Ses99, Com06]). On the other hand, since all inverse branches exist and are uniformly contracting on the complement of Ũ j , j ≥ 1, we have first of all that J λ = ∩ n A n λ = ∩ n Ã n λ and, secondly, that for every ε > 0 there exists n = n ε ≥ 1 such that A n λ ⊂ Ã n λ ⊂ V ε (J λ ) for every λ ∈ V . Fix ε > 0 and let n = n ε . Notice that the sets A n λ and Ã n λ depend only on the n functions f λ1 , ..., f λn . A standard compactness argument now shows that there exists δ = δ(ε) > 0 such that Therefore, for every λ, λ ′ ∈ V such that sup i=1,...,n |λ i − λ ′ i | < δ we have that We conclude the discussion on uniform hyperbolicity with the following uniform mixing property. Proof. Suppose to the contrary that there exist r 1 > 0 and 0 < r 2 ≤ δ and, for every N , j N ≥ 0, z 1,N ∈ J σ j (λ) and z 2,N ∈ J σ j N +N (λ) such that (D(z 1,N , r 1 ) Since f λ is expanding on the Julia set the family (ϕ N ) N is not normal at the origin. Therefore there are infinitely many N such that (4.4) ϕ N (D(0, 1/2)) ∩ D(z 2,N , r 2 ) ≠ ∅ . Since r 2 ≤ δ, all inverse branches of f N σ j N (λ) are well defined and have bounded distortion on D(z 2,N , r 2 ). It then suffices to choose N big enough and to deduce from the expansion along with (4.4) that where f −N σ j N (λ), * is some well-chosen inverse branch. This contradicts (4.3). 4.3. Hyperbolicity and stability. The definition of a hyperbolic map is based on uniform controls, e.g. the iterated maps f n σ j (λ) are expanding uniformly in j. With respect to this, and in order to deal with perturbations of hyperbolic functions, it is natural to equip the parameter space Λ with the sup-norm, i.e. to work with the space Λ = l ∞ (Λ 0 ). Throughout the rest of this paper we suppose that Λ is this particular Banach manifold. As already mentioned, in order to establish stability of uniformly hyperbolic maps, the condition of normal critical orbits as defined in Definition 3.4 is perfectly adapted since it is easy to verify for such functions. such that dist S (a n i , a n j ) ≥ c 0 for some c 0 > 0 and for all n ≥ 0 and i ≠ j. Since C f λ j ⊂ U j , j ≥ 1, we have the inclusion f n λ (C f n λ ) ⊂ f λn (U n−1 ) ⊂ U n . The constant functions λ → α n i (λ) = z n i , λ ∈ V , therefore satisfy conditions (1) and (2) of Definition 3.4, and appropriate perturbations of these constant functions, if necessary, yield that Condition (3) of this definition is also satisfied. Therefore, f λ has normal critical orbits on V . The following statement now follows from Theorem 3.6.
Conformal measures, pressure and dimensions In this section we consider a single non-autonomous uniformly hyperbolic map f ฮป , ฮป = (ฮป 1 , ฮป 2 , ...) โˆˆ ฮ› uHyp . Remember that all the derivatives are taken with respect to the spherical metric. Since {ฮป n } n is relatively compact in the set ฮ› 0 and since the rational maps are Lipschitz with respect to the spherical metric [Bea91, Theorem 2.3.1], there is a constant A < โˆž such that (5.1) |f โ€ฒ ฯƒ j (ฮป) (z)| โ‰ค A for all z โˆˆฤˆ and j โ‰ฅ 1 . 5.1. Conformal measures. Let t โ‰ฅ 0 and consider the operators L ฯƒ j (ฮป),t : C(J ฯƒ j (ฮป) ) โ†’ C(J ฯƒ j+1 (ฮป) ) defined by Proposition 5.1. For every t โ‰ฅ 0 there exist a sequence of probability measures m ฯƒ j (ฮป),t โˆˆ PM(J ฯƒ j (ฮป) ) and positive numbers ฯ ฯƒ j (ฮป),t such that ,t for all j โ‰ฅ 0 . Moreover, there exist a sequence N k โ†’ โˆž and points w k โˆˆ J ฯƒ N k (ฮป) such that for all j โ‰ฅ 0 . Measures, actually a sequence of measures, satisfying (5.3) are called t-conformal. To simplify the notations we will use often in this section the following shorthands m j,t = m ฯƒ j (ฮป),t and ฯ j,t = ฯ ฯƒ j (ฮป),t . This does not lead to confusions since the parameter ฮป โˆˆ ฮ› uHyp is fixed. Proof. Choose for every N โ‰ฅ 0 arbitrarily a point w N โˆˆ J ฯƒ N (ฮป) and consider the probability measures Observe that Let N k โ†’ โˆž be a sequence such that all the measures m N k j converge weakly as k โ†’ โˆž and denote m j,t = lim kโ†’โˆž m N k j . It follows then from (5.5) that, for every j โ‰ฅ 0, the limit (5.4) also exists and that we have (5.3). Lemma 5.3. With the notations of Proposition 5.1, we have for every j โ‰ฅ 0 and t โ‰ฅ 0 that ,t 1 1 (w k ) and since (5.7) the lemma follows from the expression (5.4). Remember that ฮด = ฮด(ฮป) is such that all inverse branches are well defined and have bounded distortion on disks of radius ฮด centered on Julia sets. Lemma 5.4. For every t โ‰ฅ 0, there exist a constant C t โ‰ฅ 1 such that for every t-conformal measure m j,t and associated ฯ j,t and for all r > 0 and z โˆˆ J ฯƒ j (ฮป) we have where ฯ n j,t = ฯ j,t ฯ j+1,t ...ฯ j+nโˆ’1,t and ฯ โˆ’n j,t = ฯ n j,t โˆ’1 and where n โ‰ฅ 1 is maximal such that First of all, since f ฮป is expanding we have a lower bound of the derivatives |f โ€ฒ ฯƒj (ฮป) | on Julia sets. Together with the Lipschitz estimation (5.1) it follows that there is a > 0 such that and j โ‰ฅ 1 . Therefore, if z โˆˆ J ฯƒ j (ฮป) and if we put r n = |f n ฯƒ j (ฮป) (z)| โˆ’1 then for every r > 0 there exist n such that (5.9) r โ‰ r n . with implicit constants independent of z, j. Therefore it suffices to establish Lemma 5.4 for radii of the form r = r n = |f n ฯƒ j (ฮป) (z)| โˆ’1 . But this follows from a standard zooming argument along with the conformality of the measures. More precisely from formula (5.6) provided we can prove the following claim. Claim 5.5. There is a constant c > 0 such that for every sequence of t-conformal measures m j,t we have that (5.10) m j,t (D(z, ฮด)) โ‰ฅ c for all j โ‰ฅ 0 and z โˆˆ J ฯƒ j (ฮป) . In order to establish this lower bound we first make the following general observation. The sphere having finite spherical volume and the number ฮด being fixed, there is an absolute number M such that every Julia set J ฯƒ n (ฮป) can be covered by no more than M disks of radius ฮด. Consequently there exist, for every n โ‰ฅ 0, a disk D n = D(z, ฮด), z โˆˆ J ฯƒ n (ฮป) , having measure m n,t (D n ) โ‰ฅ 1/M . 
The mixing property of Lemma 4.8 with r 1 = r 2 = ฮด asserts that there is a number N = N (ฮด) such that (5.11) f N ฯƒ j (ฮป) (D(z, ฮด)) โŠƒ D j+N for every j โ‰ฅ 0 and z โˆˆ J ฯƒ j (ฮป) . Therefore, there is ฮฉ โŠ‚ D(z, ฮด) such that f N ฯƒ j (ฮป) : ฮฉ โ†’ D j+N is a conformal bijection with bounded distortion. With ฮพ โˆˆ ฮฉ an arbitrarily chosen point we get with ฯ j,t the eigenvalues associated to m j,t by (5.3). It remains to estimate ฯ N j,t . But this has already been done in Lemma 5.3 from which follows that ฯ N j โ‰ค a โˆ’N t deg(f ฮป ) N . Therefore, we get the final estimation As a first consequence of the previous result we get the following key estimation. Proof. Let again ฮด = ฮด(ฮป) and remember from the previous proof that there is an absolute number M such that, for every j, n, the Julia set J ฯƒ j+n (ฮป) can be covered by at most M disks D i = D(z i , ฮด), i = 1, ..., M , of radius ฮด. Let j โ‰ฅ 0, n โ‰ฅ 1 and let U i,k be the components of f โˆ’n ฯƒ j (ฮป) (D i ). Notice that {U i,k } i,k is a Besicovitch covering of J ฯƒ j (ฮป) , i.e. z โˆˆ U i,k can happen for at most M indices (i, k). Together with conformality of the measures we get that L n ฯƒ j (ฮป),t 1 1(w) and (5.13) 1 ฯ โˆ’n j,t L n ฯƒ j (ฮป),t 1 1(z i ) for every i = 1, ..., M . The right-hand inequality of the lemma follows now easily from Koebe's distortion theorem and (5.13). For the other inequality we proceed as follows. Let again N = N (ฮด) be an integer such that the mixing property (5.11) holds. For all n < N the required estimation is true (see (5.7)). Let n โ‰ฅ N and j โ‰ฅ 0. Denote then w max โˆˆ J ฯƒ j+nโˆ’N (ฮป) a point such that L nโˆ’N ฯƒ j (ฮป),t 1 1(w max ) = L nโˆ’N ฯƒ j (ฮป),t 1 1 โˆž . which is the required inequality. We have not shown yet unicity of conformal measures. Ifm j,t are some other conformal measures andฯ j,t are the corresponding eigenvalues from (5.3) then they are uniformly close to the eigenvalues ฯ j,t of m j,t in the following sense. Lemma 5.7. For every t โ‰ฅ 0, there exist a constant B t โ‰ฅ 1 such that for all j โ‰ฅ 0 and n โ‰ฅ 1 we have Proof. With the above notations we get from Lemma 5.4 that m j,t (D(z, r)) โ‰ r t ฯ โˆ’n j,t andm j,t (D(z, r)) โ‰ r tฯโˆ’n j,t for every z โˆˆ J ฯƒ j (ฮป) and r = r(z, n) = |(f n ฯƒ j (ฮป) ) โ€ฒ (z)| โˆ’1 . Fix n โ‰ฅ 1. Taking a Besicovitch covering of J ฯƒ j (ฮป) by disks D k = D(z k , r(z k , n)) centered on J ฯƒ j (ฮป) we get that for all j โ‰ฅ 0 and n โ‰ฅ 1. Pressure. To every ฮป โˆˆ ฮ› hyp and t โ‰ฅ 0 we associate the lower and upper topological pressure where we used the already introduced notation ฯ n ฮป,t = ฯ ฮป,t ฯ ฯƒ(ฮป),t ...ฯ ฯƒ nโˆ’1 (ฮป),t . Notice that these definitions do not dependent on the choice of conformal measures because of Lemma 5.7. Since we have good estimations (Lemma 5.6) for the iterated operator L n ฮป,t we also have the following expression for the pressures. (5.15) P ฮป (t) = lim inf nโ†’โˆž 1 n log L n ฮป,t 1 1(w n ) โ‰ค lim sup nโ†’โˆž 1 n log L n ฮป,t 1 1(w n ) = P ฮป (t) for any arbitrary choice of points w n โˆˆ J ฯƒ n (ฮป) . The pressures, seen as functions of t, have the following properties. If p 2 > p 1 โˆ’ (t 2 โˆ’ t 1 ) log ฮณ then there is ฮต > 0 sufficiently small such that for some sequence r j โ†’ 0 we get lim jโ†’โˆž m ฮป,t 2 (D(z,rj )) m ฮป,t 1 (D(z,rj )) = 0. This holds for every z โˆˆ J ฮป . Therefore it would follow from Besicovitch's covering theorem that m t2 (J ฮป ) = 0, a contradiction. 
Therefore, The second inequality can be proven in the same way replacing the estimation r ฮณ โˆ’n by Dimensions. Given the properties of the pressure functions in Proposition 5.8, there are uniquely defined zeros h ฮป and h ฮป of P ฮป and P ฮป respectively. With these numbers we get the following formula of Bowen's type. Proof. Given Lemma 5.4 and the properties of the pressure functions (Proposition 5.8) the proof of the theorem is by now standard. A good reference is [PU]. Irregularity of pressure and dimensions Considering a particular family of quadratic polynomials greater in detail, we now establish that the Hรถlder-continuity of dimensions obtained in Theorem 1.3 is almost best possible, i.e. we prove Theorem 1.4. The key point is to show non-differentiability of the pressure functions. As a byproduct we get that generically there is a gap between the Hausdorff and the packing dimension as described in Theorem 1.5. We recall that these results concern the family of functions Note that for f l โˆˆ F we have f โ€ฒ l (z) = lz. The inverse branches of f l have the form Let U 0 = {z โˆˆ C : |z โˆ’ 1| < 1/3} and U 1 = {z โˆˆ C : |z + 1| < 1/3} and denote U := U 0 โˆช U 1 . A simple calculation shows that f l (U i ) โŠƒ D(0, 2) and that moreover f โˆ’1 ฮป (U ) โŠ‚ U for every i = 0, 1 and ฮป โˆˆ ฮ› = ฮ› N 0 . Consequently, the Julia set J ฮป is a Cantor set and all critical orbits of (f n ฮป ) n , l โˆˆ ฮ› N 0 , do not intersect the set U . This last property means that every ฮป โˆˆ l โˆž (ฮ› 0 ) gives rise to a uniformly hyperbolic map and that, in particular, ฮป is a stable parameter. Let, in the following, ฮ› = l โˆž (ฮ› 0 ). We have ฮ› = ฮ› uHyp = ฮ› stable . Let ฮท โˆˆ ฮ› and let {h ฯƒ n (ฮป) } n be a family of holomorphic motions over V neighborhood of ฮท such that (1.1) holds. We first investigate the speed of these motions. Proof. The particular choice of the functions in the family F leads to the following expressions. First of all, for every n โ‰ฅ 1, Now, using again holomorphic stability and the notation z x = h ฮป(x) (z), z โˆˆ J ฮท , we also have that If we now apply Lemma 6.1 then we get the estimation Replacing โˆ† by this estimation in the preceding inequalities leads to e โˆ’tn|x|/2 n k=1 e โˆ’txs k (f n ฮท ) โ€ฒ (z) โˆ’t โ‰ค (f n ฮป(x) ) โ€ฒ (z x ) โˆ’t โ‰ค e tn|x|/2 n k=1 e โˆ’txs k (f n ฮท ) โ€ฒ (z) โˆ’t for every z โˆˆ J ฮท and t > 0. The operators L ฮป,t have been defined in (5.2). The previous inequality yields (6.10) e โˆ’tn|x|/2 n k=1 e โˆ’txs k L n ฮท,t 1 1(w) โ‰ค L n ฮป(x),t 1 1(w x ) โ‰ค e tn|x|/2 n k=1 e โˆ’txs k L n ฮท,t 1 1(w) for every n โ‰ฅ 0, w โˆˆ J ฯƒ n (ฮท) and with w x = h ฯƒ n (ฮป(x)) (w). Avoiding long notation, we have just shown this inequality for the first fiber. But it is clear that one can replace here the parameters ฮท and ฮป(x) by their images by ฯƒ j , j โ‰ฅ 1, and one still has the corresponding estimation. We can now study the behavior of the pressures. Let us recall that we have the expression (5.15) of P ฮป (t) and of P ฮป (t) in terms of the iterated operators L n ฮป,t 1 1. Inequality (6.10) implies that, for all x โˆˆ (โˆ’r, r) and t > 0, (6.11) โˆ’ t |x| 2 + 1 n log L n ฮป(x),t 1 1(w x ) โ‰ค 1 n log L n ฮท,t 1 1(w) โˆ’ t x n n k=1 s k โ‰ค t |x| 2 + 1 n log L n ฮป(x),t 1 1(w x ) . For the conclusion of the proof let t > 0 again be fixed. There is then a sequence n j โ†’ โˆž such that P ฮท (t) = lim jโ†’โˆž This choice makes that lim sup j โˆ’t x n nj k=1 s k = t|x|. It follows now from (6.11) that which is exactly (6.8). 
Inequality (6.9) follows in the same way, and together they imply that the pressures are not differentiable at $\eta$.

Proof of Theorem 1.4. We first consider the Hausdorff dimension. Let $h_\eta > 0$ be the unique zero of $t \mapsto P_\eta(t)$ and suppose that the $s_k \in \{-1, 1\}$ in Proposition 6.2 are chosen for $t = h_\eta$. From (6.9) in Proposition 6.2 we then obtain a bound on the perturbed pressure. We look for $h_x$, the zero of $t \mapsto P_{\lambda(x)}(t)$, since, by Theorem 5.9, this number equals the Hausdorff dimension of $J_{\lambda(x)}$. The pressures being strictly decreasing, $h_x < h_\eta$. Therefore, Proposition 5.8 yields an estimate from which it follows that
(6.12) $h_x \leq h_\eta\Bigl(1 - \frac{|x|}{2\log A}\Bigr)$.

Proof of Theorem 1.5. In any family $\mathcal{F}$ the set $\Omega = \{\lambda \in \Lambda : \mathrm{HD}(J_\lambda) < \mathrm{PD}(J_\lambda)\}$ is open in $l^\infty(\Lambda)$ because of Theorem 1.3. Density of $\Omega$ for the particular quadratic family of this section can be shown as follows. If $\eta \in \Lambda \setminus \Omega$, then it follows immediately from (6.12) and (6.13), together with Bowen's formula (Theorem 5.9), that there are arbitrarily small perturbations of $\eta$ that lie in $\Omega$.
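For later reference, here is how the two pressure functions and the Bowen-type formula of Theorem 5.9 fit together in the notation of this part. This is our reconstruction from the way these objects are introduced and applied above; in particular, identifying the packing dimension with the zero of the upper pressure is an inference rather than a quotation.

```latex
% Lower and upper topological pressure (reconstruction consistent with (5.15)):
\underline{P}_\lambda(t) \;=\; \liminf_{n\to\infty}\tfrac{1}{n}\log\rho^{\,n}_{\lambda,t},
\qquad
\overline{P}_\lambda(t) \;=\; \limsup_{n\to\infty}\tfrac{1}{n}\log\rho^{\,n}_{\lambda,t},
\qquad
\rho^{\,n}_{\lambda,t} \;=\; \rho_{\lambda,t}\,\rho_{\sigma(\lambda),t}\cdots\rho_{\sigma^{n-1}(\lambda),t}.

% Bowen-type formula (Theorem 5.9), as applied in the proofs of Theorems 1.4 and 1.5:
\operatorname{HD}(J_\lambda) \;=\; \underline{h}_\lambda
\quad\text{and}\quad
\operatorname{PD}(J_\lambda) \;=\; \overline{h}_\lambda,
\qquad\text{where}\quad
\underline{P}_\lambda(\underline{h}_\lambda) \;=\; \overline{P}_\lambda(\overline{h}_\lambda) \;=\; 0.
```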
Strong Backdoors to Bounded Treewidth SAT There are various approaches to exploiting"hidden structure"in instances of hard combinatorial problems to allow faster algorithms than for general unstructured or random instances. For SAT and its counting version #SAT, hidden structure has been exploited in terms of decomposability and strong backdoor sets. Decomposability can be considered in terms of the treewidth of a graph that is associated with the given CNF formula, for instance by considering clauses and variables as vertices of the graph, and making a variable adjacent with all the clauses it appears in. On the other hand, a strong backdoor set of a CNF formula is a set of variables such that each possible partial assignment to this set moves the formula into a fixed class for which (#)SAT can be solved in polynomial time. In this paper we combine the two above approaches. In particular, we study the algorithmic question of finding a small strong backdoor set into the class W_t of CNF formulas whose associated graphs have treewidth at most t. The main results are positive: (1) There is a cubic-time algorithm that, given a CNF formula F and two constants k,t\ge 0, either finds a strong W_t-backdoor set of size at most 2^k, or concludes that F has no strong W_t-backdoor set of size at most k. (2) There is a cubic-time algorithm that, given a CNF formula F, computes the number of satisfying assignments of F or concludes that sb_t(F)>k, for any pair of constants k,t\ge 0. Here, sb_t(F) denotes the size of a smallest strong W_t-backdoor set of F. The significance of our results lies in the fact that they allow us to exploit algorithmically a hidden structure in formulas that is not accessible by any one of the two approaches (decomposability, backdoors) alone. Already a backdoor size 1 on top of treewidth 1 (i.e., sb_1(F)=1) entails formulas of arbitrarily large treewidth and arbitrarily large cycle cutsets. Introduction Background. Satisfiability (SAT) is probably one of the most important NP-complete problems [8,22]. Despite the theoretical intractability of SAT, heuristic algorithms work surprisingly fast on real-world SAT instances. A common explanation for this discrepancy between theoretical hardness and practical feasibility is the presence of a certain "hidden structure" in industrial SAT instances [18]. There are various approaches to capturing the vague notion of a "hidden structure" with a mathematical concept. One widely studied approach is to consider the hidden structure in terms of decomposability. The basic idea is to decompose a SAT instance into small parts that can be solved individually, and to put solutions for the parts together to a global solution. The overall complexity depends only on the maximum overlap of the parts, the width of the decomposition. Treewidth and branchwidth are two decomposition width measures (related by a constant factor) that have been applied to satisfiability. The width measures are either applied in terms of the primal graph of the formula (variables are vertices, two variables are adjacent if they appear together in a clause) or in terms of the incidence graph (a bipartite graph on the variables and clauses, a clause is incident to all the variables it contains). If the treewidth or branchwidth of any of the two graphs is bounded, then SAT can be decided in polynomial time; in fact, one can even count the number of satisfying assignments in polynomial time. 
This result has been obtained in various contexts, e.g., resolution complexity [2] and Bayesian Inference [5] (branchwidth of primal graphs), and Model Checking for Monadic Second-Order Logic [13] (treewidth of incidence graphs). A complementary approach is to consider the hidden structure of a SAT instance in terms of a small set of key variables, called backdoor set, that when instantiated moves the instance into a polynomial class. More precisely, a strong backdoor set of a CNF formula F into a polynomially solvable class C (or strong C-backdoor set, for short) is a set B of variables such that for all partial assignments ฯ„ to B, the reduced formula F [ฯ„ ] belongs to C (weak backdoor sets apply only to satisfiable formulas and will not be considered in this paper). Backdoor sets where introduced by Williams et al. [39] to explain favorable running times and the heavy-tailed behavior of SAT and CSP solvers on practical instances. In fact, real-world instances tend to have small backdoor sets (see [23] and references). Of special interest are base classes for which we can find a small backdoor set efficiently, if one exists. This is the case, for instance, for the base classes based on the tractable cases in Schaefer's dichotomy theorem [35]. In fact, for any constant b one can decide in linear time whether a given CNF formula admits a backdoor set of size b into any Schaefer class [15]. Contribution. In this paper we combine the two above approaches. In particular, we study the algorithmic question of finding a small strong backdoor set into a class of formulas of bounded treewidth. Let W โ‰คt denote the class of CNF formulas whose incidence graph has treewidth at most t. Since SAT and #SAT can be solved in linear time for formulas in W โ‰คt [13,34], we can also solve these problems efficiently for a formula F if we know a strong W โ‰คt -backdoor set of F of small size k. We simply take the sum of the satisfying assignments over all 2 k reduced formulas that we obtain by applying partial truth assignments to a backdoor set of size k. However, finding a small strong backdoor set into a class W โ‰คt is a challenging problem. What makes the problem difficult is that applying partial assignments to variables is a much more powerful operation than just deleting the variables from the formula, as setting a variable to true may remove a large set of clauses, setting it to false removes a different set of clauses, and for a strong backdoor set B we must ensure that for all the 2 |B| possible assignments the resulting formula is in W โ‰คt . The brute force algorithm tries out all possible sets B of at most k variables, and checks for each set whether all the 2 |B| reduced formulas belong to W โ‰คt . The number of membership checks is of order 2 k n k for an input formula with n variables. This number is polynomial for constant k, but the order of the polynomial depends on the backdoor size k. Is it possible to get k out of the exponent and to have the same polynomial for every fixed k and t? Our main result provides an affirmative answer to this question. We show the following. Theorem 1. There is a cubic-time algorithm that, given a CNF formula F and two constants k, t โ‰ฅ 0, either finds a strong W โ‰คt -backdoor set of size at most 2 k , or concludes that F has no strong W โ‰คt -backdoor set of size at most k. Our algorithm distinguishes for a given CNF formula between two cases: (A) the formula has small treewidth, or (B) the formula has large treewidth. 
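Before the two cases are worked out, the brute-force baseline described above can be made concrete. The sketch below is only illustrative: the clause encoding (signed integers) and the helper names are ours, and the base class is instantiated with W_≤1 (incidence graph a forest), since a genuine W_≤t membership test would require an external treewidth algorithm such as Bodlaender's.

```python
from itertools import combinations, product

# A CNF formula is a list of clauses; a clause is a frozenset of signed ints
# (x stands for the positive literal of variable x, -x for its negation).

def variables(formula):
    return sorted({abs(l) for clause in formula for l in clause})

def reduce_formula(formula, assignment):
    """F[tau]: drop every clause satisfied by the partial assignment and
    delete the falsified literals from the remaining clauses."""
    reduced = []
    for clause in formula:
        if any((l > 0) == assignment[abs(l)] for l in clause if abs(l) in assignment):
            continue                          # satisfied clause vanishes
        reduced.append(frozenset(l for l in clause if abs(l) not in assignment))
    return reduced

def in_W1(formula):
    """Membership test for W_<=1: the incidence graph is a forest.  Union-find
    over clause- and variable-vertices; joining two already-connected vertices
    means the new incidence edge closes a cycle."""
    parent = {}
    def find(v):
        while parent.setdefault(v, v) != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for i, clause in enumerate(formula):
        for var in {abs(l) for l in clause}:
            a, b = find(('c', i)), find(('v', var))
            if a == b:
                return False
            parent[a] = b
    return True

def smallest_strong_backdoor(formula, k, in_base_class=in_W1):
    """The brute force from the text: try every set B of at most k variables and
    check that F[tau] is in the base class for all 2^|B| assignments tau to B."""
    for size in range(k + 1):
        for B in combinations(variables(formula), size):
            if all(in_base_class(reduce_formula(formula, dict(zip(B, bits))))
                   for bits in product((False, True), repeat=size)):
                return set(B)
    return None
```

The n^k factor in the number of candidate sets B is exactly what Theorem 1 removes from the exponent.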
In Case A we use model checking for monadic second order logic [3] to find a smallest backdoor set. Roberson and Seymour's theory of graph minors [29] guarantees a finite set of forbidden minors for every minor-closed class of graphs. Although their proof is non-constructive, for the special case of bounded treewidth graphs the forbidden minors can be computed in constant time [1,21]. These forbidden minors are used in our monadic second order sentence to describe a strong backdoor set to the base class W โ‰คt . A model checking algorithm [3] then computes a strong W โ‰คt -backdoor set of size k if one exists. In Case B we use a theorem by Robertson and Seymour [31], guaranteeing a large wall as a topological minor, to find many vertex-disjoint obstructions in the incidence graph, so-called wall-obstructions. A backdoor set needs to "kill" all these obstructions, where an obstruction is killed either internally because it contains a backdoor variable, or externally because it contains two clauses containing the same backdoor variable with opposite signs. Our main combinatorial tool is the obstruction-template, a bipartite graph with external killers on one side and vertices representing vertex-disjoint connected subgraphs of a wall-obstruction on the other side of the bipartition. It is used to guarantee that for sets of k variables excluding a bounded set of variables, every assignment to these k variables produces a formula whose incidence graph has treewidth at least t + 1. Combining both cases leads to an algorithm producing a strong W โ‰คt -backdoor set of a given formula F of size at most 2 k if F has a strong W โ‰คt -backdoor set of size k. For our main applications of Theorem 1, the problems SAT and #SAT, we can solve Case A actually without recurring to the list of forbidden minors of bounded treewidth graphs and to model checking for monadic second order logic. Namely, when the treewidth of the incidence graph is small, we can directly apply one of the known linear-time algorithms to count the number of satisfying truth assignments [13,34], thus avoiding the issue of finding a backdoor set. We arrive at the following statement where sb t (F ) denotes the size of a smallest strong W โ‰คt -backdoor set of a formula F . Theorem 2. There is a cubic-time algorithm that, given a CNF formula F , computes the number of satisfying assignments of F or concludes that sb t (F ) > k, for any pair of constants k, t โ‰ฅ 0. This is a robust algorithm in the sense of [36] since for every instance, it either solves the problem (SAT, #SAT) or concludes that the instance is not in the class of instances that needs to be solved (the CNF formulas F with sb t (F ) โ‰ค k). In general, a robust algorithm solves the problem on a superclass of those instances that need to be solved, and it does not necessarily check whether the given instance is in this class. Theorem 2 applies to formulas of arbitrarily large treewidth. We would like to illustrate this with the following example. Take a CNF formula F n whose incidence graph is obtained from an n ร— n square grid containing all the variables of F n by subdividing each edge by a clause of F n . It is well-known that the n ร— n grid, n โ‰ฅ 2, has treewidth n and that a subdivision of an edge does not decrease the treewidth. Hence F n / โˆˆ W โ‰คnโˆ’1 . Now take a new variable x and add it positively to all horizontal clauses and negatively to all vertical clauses. 
Here, a horizontal (respectively, a vertical) clause is one that subdivides a horizontal (respectively, a vertical) edge in the natural layout of the grid. Let F x n denote the new formula. Since the incidence graph of F n is a subgraph of the incidence graph of F x n , we have F x n / โˆˆ W โ‰คnโˆ’1 . However, setting x to true removes all horizontal clauses and thus yields a formula whose incidence graph is acyclic, hence F x n [x = true] โˆˆ W โ‰ค1 . Similarly, setting x to false yields a formula F x n [x = false] โˆˆ W โ‰ค1 . Hence {x} forms a strong W โ‰ค1 -backdoor set of F x n . Conversely, it is easy to construct, for every t โ‰ฅ 0, formulas that belong to W โ‰คt+1 but require arbitrarily large strong W โ‰คt -backdoor sets. One can also define a deletion C-backdoor set B of a CNF formula F by requiring that deleting all literals x, ยฌx with x โˆˆ B from F produces a formula that belongs to the base class [27]. For many base classes it holds that every deletion backdoor set is a strong backdoor set, but in most cases, including the base class W โ‰คt , the reverse is not true. In fact, it is easy to see that if a CNF formula F has a deletion W โ‰คt -backdoor set of size k, then F โˆˆ W t+k . In other words, the parameter "size of a smallest deletion W โ‰คt -backdoor set" is dominated by the parameter "treewidth of the incidence graph" and therefore of limited theoretical interest, except for reducing the space requirements of dynamic programming procedures [6] and analyzing the effectiveness of polynomial time preprocessing [10]. A common approach to solve #SAT is to find a small cycle cutset (or feedback vertex set) of variables of the given CNF formula, and by summing up the number of satisfying assignments of all the acyclic instances one gets by setting the cutset variables in all possible ways [11]. We would like to note that such a cycle cutset is nothing but a deletion W โ‰ค1 -backdoor set. By considering strong W โ‰ค1 -backdoor sets instead, one can get super-exponentially smaller sets of variables, and hence a more powerful method. A strong W โ‰ค1 -backdoor set can be considered as a an implied cycle cutset as it cuts cycles by removing clauses that are satisfied by certain truth assignments to the backdoor variables. By increasing the treewidth bound from 1 to some fixed t > 1 one can further dramatically decrease the size of a smallest backdoor set. Our results can also be phrased in terms of Parameterized Complexity [14]. Theorem 2 states that #SAT is uniformly fixed-parameter tractable (FPT) for parameter (t, sb t ). Theorem 1 states that there is a uniform FPT-approximation algorithm for the detection of strong W โ‰คt -backdoor sets of size k, for parameter (t, k), as it is a fixed-parameter algorithm that computes a solution that approximates the optimum with an error bounded by a function of the parameter [25]. Related work. Williams et al. [39] introduced the notion of backdoor sets and the parameterized complexity of finding small backdoor sets was initiated by Nishimura et al. [26]. They showed that with respect to the classes of Horn formulas and of 2CNF formulas, the detection of strong backdoor sets is fixed-parameter tractable. Their algorithms exploit the fact that for these two base classes strong and deletion backdoor sets coincide. For other base classes, deleting literals is a less powerful operation than applying partial truth assignments. This is the case, for instance, for RHorn, the class of renamable Horn formulas. 
In fact, finding a deletion RHorn-backdoor set is fixed-parameter tractable [28], but it is open whether this is the case for the detection of strong RHorn-backdoor sets. For clustering formulas, detection of deletion backdoor sets is fixedparameter tractable, detection of strong backdoor sets is most probably not [27]. Very recently, the authors of the present paper showed [16,17] that there are FPT-approximation algorithms for the detection of strong backdoor sets with respect to (i) the base class of formulas with acyclic incidence graphs, i.e., W โ‰ค1 , and (ii) the base class of nested formulas (a proper subclass of W โ‰ค3 introduced by Knuth [20]). The present paper generalizes this approach to base classes of bounded treewidth which requires new ideas and significantly more involved combinatorial arguments. We conclude this section by referring to a recent survey on the parameterized complexity of backdoor sets [15]. Preliminaries Graphs. Let G be a simple, undirected, finite graph with vertex set V = V (G) and edge set E = E(G). Let S โŠ† V be a subset of its vertices and v โˆˆ V be a vertex. We denote by G โˆ’ S the graph obtained from G by removing all vertices in S and all edges incident to vertices in S. We denote by T is a tree with elements of I as nodes such that: The width of a tree decomposition is max iโˆˆI |X i | โˆ’ 1. The treewidth [30] of G is the minimum width taken over all tree decompositions of G and it is denoted by tw(G). For other standard graph-theoretic notions not defined here, we refer to [12]. For a set X โŠ† var(F ) we denote by 2 X the set of all mappings ฯ„ : X โ†’ {0, 1}, the truth assignments on X. A truth assignment ฯ„ โˆˆ 2 X can be extended to the literals over X by setting ฯ„ (ยฌx) = 1 โˆ’ ฯ„ (x) for all x โˆˆ X. The formula F [ฯ„ ] is obtained from F by removing all clauses c such that ฯ„ sets a literal of c to 1, and removing the literals set to 0 from all remaining clauses. SAT is the NP-complete problem of deciding whether a given CNF formula is satisfiable [8,22]. #SAT is the #P-complete problem of determining the number of distinct ฯ„ โˆˆ 2 var(F ) with cla(F [ฯ„ ]) = โˆ… [38]. The class W โ‰คt contains all CNF formulas F with tw(inc(F )) โ‰ค t. For any fixed t โ‰ฅ 0 and any CNF formula F โˆˆ W โ‰คt , a tree decomposition of inc(F ) of width at most t can be found by Bodlaender's algorithm [7] in time O(|F |). Given a tree decomposition of width at most t of inc(F ), the number of satisfying assignments of F can be determined in time O(|F |) [13,34]. Finally, note that, if ฯ„ โˆˆ 2 X is a partial truth assignment for a CNF formula F , then inc(F [ฯ„ ]) is an induced subgraph of inc(F ), namely inc(F [ฯ„ ]) is obtained from inc(F ) โˆ’ X by removing each vertex corresponding to a clause that contains a literal โ„“ with ฯ„ (โ„“) = 1. Backdoors. Backdoor sets are defined with respect to a fixed class C of CNF formulas, the base class. Let F be a CNF formula and B โŠ† var If we are given a strong C-backdoor set of F of size k, we can reduce the satisfiability of F to the satisfiability of 2 k formulas in C. If C is clause-induced (i.e., F โˆˆ C implies F โ€ฒ โˆˆ C for every CNF formula F โ€ฒ with cla(F โ€ฒ ) โŠ† cla(F )), any deletion C-backdoor set of F is a strong C-backdoor set of F . The interest in deletion backdoor sets is motivated for base classes where they are easier to detect than strong backdoor sets. The challenging problem is to find a strong or deletion C-backdoor set of size at most k if it exists. 
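The operation F[τ] just described also makes the grid example F^x_n from the introduction easy to check mechanically. The sketch below builds F^x_n, applies both truth values to x, and tests the reduced incidence graphs for acyclicity, i.e. membership in W_≤1 (the literal encoding is ours and networkx is assumed).

```python
import networkx as nx

def grid_formula_with_x(n):
    """F^x_n from the introduction: one variable per point of the n x n grid, one
    clause per grid edge (the clause subdivides the edge, i.e. contains the two
    endpoint variables), and the extra variable x added positively to horizontal
    clauses and negatively to vertical ones."""
    clauses = []
    for i in range(n):
        for j in range(n):
            if j + 1 < n:                                   # horizontal edge
                clauses.append((('x', True), (i, j), (i, j + 1)))
            if i + 1 < n:                                   # vertical edge
                clauses.append((('x', False), (i, j), (i + 1, j)))
    return clauses

def incidence_after_setting_x(clauses, value):
    """Incidence graph of F^x_n[x = value]: clauses containing the satisfied
    literal of x vanish; in the remaining clauses the occurrence of x is deleted."""
    G = nx.Graph()
    for idx, clause in enumerate(clauses):
        if ('x', value) in clause:
            continue
        for lit in clause:
            if lit[0] != 'x':                               # a grid-point variable
                G.add_edge(('clause', idx), ('var', lit))
    return G

clauses = grid_formula_with_x(6)
for value in (True, False):
    print(value, nx.is_forest(incidence_after_setting_x(clauses, value)))
```

Both checks print True, matching the claim that {x} is a strong W_≤1-backdoor set of F^x_n, even though F^x_n itself contains a subdivided 6 × 6 grid and therefore has large treewidth.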
Denote by sb t (F ) the size of a smallest strong W โ‰คt -backdoor set. Graph minors. The operation of merging a subgraph H or a vertex subset The contraction operation merges a connected subgraph. The dissolution operation contracts an edge incident to a vertex of degree 2. A graph H is a minor of a graph G if H can be obtained from a subgraph of G by contractions. If H is a minor of G, then one can find a model of H in G. A model of H in G is a set of vertexdisjoint connected subgraphs of G, one subgraph C u for each vertex u of H, such that if uv is an edge in H, then there is an edge in G with one endpoint in C u and the other in C v . A graph H is a topological minor of a graph G if H can be obtained from a subgraph of G by dissolutions. If H is a topological minor of G, then G has a topological model of H. A topological model of H in G is a subgraph of G that can be obtained from H by replacing its edges by independent paths. A set of paths is independent if none of them contains an interior vertex of another. We also say that G contains a subdivision of H as a subgraph. Obstructions to small treewidth. It is well-known that tw(G) โ‰ฅ tw(H) if H is a minor of G. We will use the following three (classes of) graphs to lower bound the treewidth of a graph containing any of them as a minor. See Figure 1. The complete graph K r has treewidth r โˆ’ 1. The complete bipartite graph K r,r has treewidth r. The r-wall is the graph W r = (V, E) with vertex set V = {(i, j) : 1 โ‰ค i โ‰ค r, 1 โ‰ค j โ‰ค r} in which two vertices (i, j) and (i โ€ฒ , j โ€ฒ ) are adjacent iff either j โ€ฒ = j and i โ€ฒ โˆˆ {i โˆ’ 1, i + 1}, or i โ€ฒ = i and j โ€ฒ = j + (โˆ’1) i+j . We say that a vertex (i, j) โˆˆ V has horizontal index i and vertical index j. The r-wall has treewidth at least โŒŠ r 2 โŒ‹ (it is a minor of the โŒŠ r 2 โŒ‹ ร— โŒŠ r 2 โŒ‹-grid, which has treewidth โŒŠ r 2 โŒ‹ if โŒŠ r 2 โŒ‹ โ‰ฅ 2 [32]). We will also need to find a large wall as a topological minor if the formula has large incidence treewidth. Its existence is guaranteed by a theorem of Robertson and Seymour. Theorem 3 ([31] ). For every positive integer r, there exists a constant f (r) such that if a graph G has treewidth at least f (r), then G contains an r-wall as a topological minor. By [33], f (r) โ‰ค 20 64r 5 . For any fixed r, we can use the cubic algorithm by Grohe et al. [19] to find a topological model of an r-wall in a graph G if G contains an r-wall as a topological minor. The algorithms We start with the overall outline of our algorithms. We rely on the following two lemmas whose proofs we defer to the next two subsections. Lemma 1. There is a quadratic-time algorithm that, given a CNF formula F , two constants t โ‰ฅ 0, k โ‰ฅ 1, and a topological model of a wall(k, t)-wall in inc(F ), computes a set S * โŠ† var(F ) of constant size such that every strong W โ‰คt -backdoor set of size at most k contains a variable from S * . Lemma 2. There is a linear-time algorithm that, given a CNF formula F , a constant t โ‰ฅ 0, and a tree decomposition of inc(F ) of constant width, computes a smallest strong W โ‰คt -backdoor set of F . Lemma 2 will be invoked with a tree decomposition of inc(F ) of width at most tw(k, t). The functions wall(k, t) and tw(k, t) are related by the bound from [33], implying that every graph either has treewidth at most tw(k, t), or it has a wall(k, t)-wall as a topological minor. 
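Since the r-wall is the treewidth obstruction the algorithm looks for, a direct transcription of its adjacency rule may help (networkx assumed; this merely instantiates the definition above).

```python
import networkx as nx

def r_wall(r):
    """The r-wall W_r: vertices (i, j) with 1 <= i, j <= r; (i, j) ~ (i', j') iff
    either j' = j and i' in {i - 1, i + 1}, or i' = i and j' = j + (-1)**(i + j)."""
    G = nx.Graph()
    G.add_nodes_from((i, j) for i in range(1, r + 1) for j in range(1, r + 1))
    for i in range(1, r + 1):
        for j in range(1, r + 1):
            if i + 1 <= r:
                G.add_edge((i, j), (i + 1, j))     # first condition
            jj = j + (-1) ** (i + j)
            if 1 <= jj <= r:
                G.add_edge((i, j), (i, jj))        # second condition
    return G

W = r_wall(6)
print(W.number_of_nodes(), max(d for _, d in W.degree()))   # 36 vertices, max degree 3
```

As noted above, W_r has maximum degree 3 and treewidth at least ⌊r/2⌋. The functions tw(k, t) and wall(k, t) used in the outline are specified next.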
Here, tw(k, t) := 20 64ยท(wall(k,t)) 5 , wall(k, t) := (2t + 2) ยท (1 + obs(k, t)), The other functions of k and t will be used in Subsection 3.1. Theorem 1 can now be proved as follows. Proof of Theorem 1. Let t, k โ‰ฅ 0 be constants, let F be the given CNF formula, with |F | = n and let G := inc(F ). Using Bodlaender's algorithm [7] we can decide in linear time whether tw(G) โ‰ค tw(k, t), and if so, compute a tree decomposition of smallest width in linear time. If indeed tw(G) โ‰ค tw(k, t), we use Lemma 2 to find a smallest strong W โ‰คt -backdoor set B of F . If |B| โ‰ค k we output B, otherwise we output NO. If tw(G) > tw(k, t) then we proceed as follows. If k = 0, we output NO. Otherwise, by [33], G has a wall(k, t)-wall as a topological minor, and by means of Grohe et al.'s algorithm [19], we can compute a topological model of a wall(k, t)-wall in G in time O(n 3 ). By Lemma 1, we can find in time O(n 2 ) a set S * โŠ† var(F ) of constant size such that every strong W โ‰คt -backdoor set of F of size at most k contains a variable from S * . For each x โˆˆ S * , the algorithm recurses on both formulas F [x = 0] and F [x = 1] with parameter k โˆ’ 1. If both recursive calls return strong W โ‰คt -backdoor sets B ยฌx and B x , then {x} โˆช B x โˆช B ยฌx is a strong W โ‰คt -backdoor set of F . We can upper bound its size s(k) by the recurrence s(k) โ‰ค 1 + 2 ยท s(k โˆ’ 1), with s(0) = 0 and s(1) = 1. The recurrence is satisfied by setting s(k) = 2 k โˆ’ 1. In case a recursive call returns NO, no strong W โ‰คt -backdoor set of F of size at most k contains x. Thus, if for some x โˆˆ S * , both recursive calls return backdoor sets, we obtain a backdoor set of F of size at most 2 k โˆ’ 1, and if for every x โˆˆ S * , some recursive call returns NO, F has no strong W โ‰คt -backdoor set of size at most k. The number of nodes of the search tree modeling the recursive calls of this algorithm is a function of k and t only (and therefore constant), and in each node, the time spent by the algorithms is O(n 2 ). The overall running time is thus dominated by the cubic running time of Grohe et al.'s algorithm, hence we arrive at a total running time of O(n 3 ). Theorem 2 follows easily from Theorem 1, by computing first a backdoor set and evaluating the number of satisfying assignments for all reduced formulas. We present an alternative proof that does not rely on Lemma 2. Instead of computing a backdoor set, one can immediately compute the number of satisfying assignments of F by dynamic programming if tw(inc(F )) โ‰ค tw(k, t). Proof of Theorem 2. Let k, t โ‰ฅ 0 be two integers and assume we are given a CNF formula F with |F | = n and sb t (F ) โ‰ค k. We will compute the number of satisfying truth assignments of F , denoted #(F ). As before we use Bodlaender's linear-time algorithm [7] to decide whether tw(G) โ‰ค tw(k, t), and if so, to compute a tree decomposition of smallest width. If tw(G) โ‰ค tw(k, t) then we use the tree decomposition and, for instance, the algorithm of [34] to compute #(F ) in time O(n). If tw(G) > tw(k, t) then we compute, as in the proof of Theorem 1, a strong W โ‰คt -backdoor set B of F of size at most 2 k in time O(n 3 ). For each ฯ„ โˆˆ 2 B the formula F [ฯ„ ] belongs to W โ‰คt . Hence we can compute #(F [ฯ„ ]) in time O(n) by first computing a tree decomposition of width at most t, and then applying the counting algorithm of [34]. We obtain #(F ) by taking ))| denotes the number of variables that disappear from F [ฯ„ ] without being instantiated. 
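The combination step in the proof of Theorem 2 can be sketched in a few lines. The encoding is ours, and the linear-time bounded-treewidth counter of [13,34] is replaced by a naive enumerator purely to keep the sketch self-contained; the point illustrated is the weighting of #(F[τ]) by a power of two accounting for the variables that vanish without being instantiated.

```python
from itertools import product

def reduce_formula(clauses, assignment):
    """F[tau]: drop satisfied clauses, delete falsified literals from the rest
    (clauses are collections of signed ints, assignment maps variable -> bool)."""
    out = []
    for clause in clauses:
        if any((l > 0) == assignment[abs(l)] for l in clause if abs(l) in assignment):
            continue
        out.append(frozenset(l for l in clause if abs(l) not in assignment))
    return out

def naive_count(clauses):
    """Stand-in for the linear-time counter on bounded-treewidth formulas [13,34]:
    counts satisfying assignments of the variables that still occur in the formula."""
    vs = sorted({abs(l) for clause in clauses for l in clause})
    total = 0
    for bits in product((False, True), repeat=len(vs)):
        tau = dict(zip(vs, bits))
        if all(any((l > 0) == tau[abs(l)] for l in clause) for clause in clauses):
            total += 1
    return total

def count_via_backdoor(clauses, backdoor, count_in_base_class=naive_count):
    """#(F) through a strong backdoor set B: for each of the 2^|B| assignments tau,
    count F[tau] and multiply by 2^(number of variables of F that occur neither in
    B nor in F[tau]) -- these vanished without being instantiated."""
    all_vars = {abs(l) for clause in clauses for l in clause}
    total = 0
    for bits in product((False, True), repeat=len(backdoor)):
        tau = dict(zip(backdoor, bits))
        reduced = reduce_formula(clauses, tau)
        remaining = {abs(l) for clause in reduced for l in clause}
        vanished = all_vars - set(backdoor) - remaining
        total += count_in_base_class(reduced) * 2 ** len(vanished)
    return total

# Tiny check: F = (a or b) and (not a or c) with backdoor {a}.
F = [frozenset({1, 2}), frozenset({-1, 3})]
print(count_via_backdoor(F, [1]), naive_count(F))   # prints: 4 4
```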
The incidence graph has a large wall as a topological minor This subsection is devoted to the proof of Lemma 1 and contains the main combinatorial arguments of this paper. Let G = (V, E) = inc(F ) and suppose we are given a topological model of a wall(k, t)wall in G. We start with the description of the algorithm. A wall-obstruction is a subgraph of G that is a subdivision of a (2t + 2)-wall. Since a wall-obstruction, and any graph having a wall-obstruction as a topological minor, has treewidth at least t + 1, we have that for each assignment to the variables of a strong W โ‰คt -backdoor set, at least one vertex from each wall-obstruction vanishes in the incidence graph of the reduced formula. Using the wall(k, t)-wall, we now find a set O of obs(k, t) vertex-disjoint wall-obstructions in G. Proof. For any two integers i and j with 1 โ‰ค i, j โ‰ค wall(k, t)/(2t + 2), the subgraph of a wall(k, t)wall induced on all vertices (x, y) with (i โˆ’ 1) ยท (2t + 2) + 1 โ‰ค x โ‰ค i ยท (2t + 2) and (j โˆ’ 1) ยท (2t + 2) + 1 โ‰ค y โ‰ค j ยท (2t + 2) is a (2t + 2)-wall. A corresponding wall-obstruction can be found in G by replacing edges by the independent paths they model in the given topological model. The number of wallobstructions defined this way is wall(k,t) Denote by O a set of obs(k, t) vertex-disjoint wall-obstructions obtained via Lemma 3. A backdoor variable can destroy a wall-obstruction either because it participates in the wall-obstruction, or because every setting of the variable satisfies a clause that participates in the wall-obstruction. Definition 1. Let x be a variable and W a wall-obstruction in G. We say that x kills W if neither inc(F [x = 1]) nor inc(F [x = 0]) contains W as a subgraph. We say that x kills W internally if x โˆˆ V (W ), and that x kills W externally if x kills W but does not kill it internally. In the latter case, W contains a clause c containing x and a clause c โ€ฒ containing ยฌx and we say that x kills W (externally) in c and c โ€ฒ . Our algorithm will perform a series of 3 nondeterministic steps to guess some properties about the strong W โ‰คt -backdoor set it searches. Each such guess is made out of a number of choices that is upper bounded by a function of k and t. At any stage of the algorithm, a valid strong W โ‰คt -backdoor set is one that satisfies all the properties that have been guessed. For a fixed series of guesses, the algorithm will compute a set S โŠ† var(F ) such that every valid strong W โ‰คt -backdoor set of size at most k contains a variable from S. To make the algorithm deterministic, execute each possible combination of nondeterministic steps. The union of all S, taken over all combinations of nondeterministic steps, forms a set S * and each strong W โ‰คt -backdoor set of size at most k contains a variable from S * . Bounding the size of each S by a function of k and t enables us to bound |S * | by a function of k and t, and this will prove the lemma. For any strong W โ‰คt -backdoor set of size at most k, at most k wall-obstructions from O are killed internally since they are vertex-disjoint. The algorithm guesses k wall-obstructions from O that may be killed internally. Let O โ€ฒ denote the set of the remaining wall-obstructions, which need to be killed externally by any valid strong W โ‰คt -backdoor set. Suppose F has a valid strong W โ‰คt -backdoor set B of size k. 
Then, B defines a partition of O โ€ฒ into 2 k parts where for each part, the wall-obstructions contained in this part are killed externally by the same set of variables from B. Since |O โ€ฒ | = obs(k, t) โˆ’ k = 2 k ยท same(k, t), at least one of these parts contains at least same(k, t) wall-obstructions from O โ€ฒ . The algorithm guesses a subset O s โŠ† O โ€ฒ of same(k, t) wall-obstruction from this part and it guesses how many variables from the strong W โ‰คt -backdoor set kill the wall-obstructions in this part externally. Suppose each wall-obstruction in O s is killed externally by the same set of โ„“ backdoor variables, and no other backdoor variable kills any wall-obstruction from O s . Clearly, 1 โ‰ค โ„“ โ‰ค k. Compute the set of external killers for each wall-obstruction in O s . Denote by Z the common external killers of the wall-obstruction in O s . The presumed backdoor set contains exactly โ„“ variables from Z and no other variable from the backdoor set kills any wall-obstruction from O s . We will define three rules for the construction of S, and the algorithm will execute the first applicable rule. Before being able to state the other two rules, we come to the central combinatorial object in this paper. For each wall-obstruction W โˆˆ O s , we compute a valid obstruction-template. An obstruction-template OT(W ) of a wall-obstruction W โˆˆ O s is a triple (B(W ), P, R), where โ€ข B(W ) is a bipartite graph whose vertex set is bipartitioned into the two independent sets Z and Q W , where Q W is a set of new vertices, โ€ข P is a partition of V (W ) into regions such that for each region A โˆˆ P , we have that W [A] is connected, and โ€ข R : Q W โ†’ P is a function associating a region of P with each vertex in Q W . An obstruction-template OT(W ) = (B(W ), P, R) of a wall-obstruction W โˆˆ O s is valid if it satisfies the following properties: (1) only existing edges: for each q โˆˆ Q W , N B(W ) (q) โŠ† N G (R(q)), (2) private neighbor: for each q โˆˆ Q W , there is a z โˆˆ N B(W ) (q), called q's private neighbor, such that there is no other q โ€ฒ โˆˆ N B(W ) (z) with R(q โ€ฒ ) = R(q), We will use the obstruction-templates to identify a set of vertices that has a non-empty intersection with every valid strong W โ‰คt -backdoor set of size k. Intuitively, an obstruction-template chops up the vertex set of a wall-obstruction into regions. We will suppose the existence of a valid backdoor set B of size k avoiding a certain bounded set of variables and derive a contradiction using the obstruction-templates. This is done by showing that for at least one ฯ„ โˆˆ 2 B , many regions remain in F [ฯ„ ], so that we can contract each of them and construct a treewidth obstruction using the contracted vertices. Each vertex from Q W models a contraction of a region, and its neighborhood models a potential set of common external killers neighboring the contracted region. This explains Property (1). Property (2) becomes handy when a region has many vertices from Q W that are associated with it. Namely, when we contract regions, we would like to be able to guarantee a lower bound on the number of edges of the resulting graph in terms of |Q W |. To ensure that this lower bound translates into a lower bound in terms of |Z|, we need Property (3). The degree lower bound of the next property will be needed so we can patch together a treewidth obstruction out of the pieces modeled by the vertices in Q W . 
The upper bound on the degree is required to guarantee that sufficiently many vertices from Q W are not neighboring B. Finally, the last property will be used to guarantee that for every q โˆˆ Q W , if B โˆฉ N B(W ) (q) = โˆ…, then there is a truth assignment ฯ„ โˆˆ 2 B such that no vertex from q's region is removed from inc(F ) by applying ฯ„ (see Lemma 5). In the following lemma, we give a procedure to compute valid obstruction-templates. Proof. We describe a procedure to compute a valid obstruction-template (B(W ), P, R). It starts with Q W initially empty. Compute an arbitrary rooted spanning tree T of W . For a node v from T , denote by T v the subtree of T rooted at v. The set of children in T of a node v is denoted C T (v) and its parent p T (v). For a subforest T โ€ฒ โŠ† T , denote by Z( the vertices from Z that are incident to v in G but to no other node from T v . If uv โˆˆ E(T ), then denote by T u (uv) the subtree obtained from T by removing all nodes that are closer to v than to u in T (removing the edge uv decomposes T into T u (uv) and T v (uv)). (A) If w(T ) > 3nb(t), then select a new root r(T ) of T such that for every child c โˆˆ C T (r(T )) of r(T ) we have that w(T โˆ’ V (T c )) โ‰ฅ nb(t). Now, we prove that this procedure computes a valid obstruction-template. First we show that in case w(T ) > 3nb(t), Step (A) is able to find a root r(T ) such that there is no c โˆˆ C T (r(T )) with w(T โˆ’ V (T c )) < nb(t). Suppose that T has no node u such that w(T u (uv)) โ‰ฅ nb(t) for every v โˆˆ N T (u). Then, there is an infinite sequence of nodes u 1 , u 2 , . . . such that u i neighbors u i+1 and w(T u i (u i u i+1 )) < nb(t). Let j be the smallest integer such that u i = u j for some integer i with 1 โ‰ค i < j. Since T is acyclic, we have that i + 2 = j. But then, We observe that all edges of B(W ) have one endpoint in Z and the other in Q W . Thus, Z โŠŽ Q W is a bipartition of B(W ) into independent sets. The set V (W ) is partitioned into disjoint connected regions since each execution of Step (B) defines a new region equal to the vertices of a subtree of T , which is initially a spanning tree of W and is removed from T at the end of Step (B). Consider one execution of Step (B) of the procedure above. We will show that Properties (1)-(5) of a valid obstruction-template hold for the relevant vertices considered in this execution, and this will guarantee these properties for all vertices. Property (1) is ensured for all new vertices since all of them receive at least one new neighbor in B(W ). For the lower bound of Property (4), we first show that at any time during the execution of this procedure, either T is empty or w(T ) โ‰ฅ nb(t). Initially, this is true since |Z| โ‰ฅ nb(t) (by Rule 1) and every vertex from Z is an external killer of W . This remains true since Step (A) makes sure that whenever the vertex v chosen by Step (B) is not the root of T , w(T โˆ’ V (T v )) โ‰ฅ nb(t). Thus, Step (B) always finds a node v such that w(T v ) โ‰ฅ nb(t). Therefore, every vertex that is added to Q W has at least nb(t) neighbors. For the upper bound of Property (4), observe that since W has maximum degree at most 3, T also has maximum degree at most 3. Thus, w(T v โˆ’ {v}) โ‰ค 3(nb(t) โˆ’ 1) since each tree in T v โˆ’ {v} has weight at most nb(t) โˆ’ 1 by the selection of v. Therefore, d B(W ) (q i ) โ‰ค 3nb(t). Property This finishes the description of the algorithm. 
The correctness of Rule 1 is obvious since any valid strong W โ‰คt -backdoor set contains โ„“ variables from Z and โ„“ โ‰ฅ 1. To prove the correctness of Rules 2 and 3, we need the following lemma. Lemma 5. Let W โˆˆ O s be a wall-obstruction, OT(W ) be a valid obstruction-template of W and q โˆˆ Q W . Let B be a valid strong W โ‰คt -backdoor set such that B โŠ† var(F ) \ N B(W ) (q). There is a truth assignment ฯ„ to B such that inc (F [ฯ„ ]) contains all vertices from R(q). Proof. Since B is valid, it contains no variable from R(q) โŠ† V (W ). Thus, inc(F [ฯ„ ]) contains all variables from R(q). A truth assignment ฯ„ removes a clause c โˆˆ R(q) from the incidence graph iff c contains a literal l such that ฯ„ (l) = 1. We show that no variable from B appears both positively and negatively in the clauses from R(q), and therefore there is a truth assignment ฯ„ to B such that inc(F [ฯ„ ]) contains all vertices from R(q). Assume, for the sake of contradiction, that there is a variable b โˆˆ B such that b โˆˆ lit(c) and ยฌb โˆˆ lit(c โ€ฒ ) and c, c โ€ฒ โˆˆ R(q). We have that b โˆˆ Z because b is in a valid strong W โ‰คt -backdoor set and b is an external killer of W . Since b โˆˆ (N G (R(q)) โˆฉ Z) \ N B(W ) (q), we conclude that q has a vulnerable vertex v. But, since v is the only vulnerable vertex of q, by Property (5), c = c โ€ฒ . We arrive at a contradiction, since no clause contains a variable and its negation. Lemma 6. Rule 2 is sound. Proof. Let Q L โŠ† Q m be a set of t ยท 2 k + 1 vertices such that for each q โˆˆ Q L , N Bm(Os) (q) = L. For the sake of contradiction, suppose F has a valid strong W โ‰คt -backdoor set B of size k with B โˆฉS = โˆ…. By Lemma 5, for each q โˆˆ Q L , there is a truth assignment ฯ„ to B such that inc(F [ฯ„ ]) contains all vertices from R(q). But there are at most 2 k truth assignments to B. Therefore, for at least one truth assignment ฯ„ to B, there is a set Q โ€ฒ L โŠ† Q L of at least โŒˆ|Q L |/2 k โŒ‰ = t + 1 vertices such that inc(F [ฯ„ ]) contains all vertices from R(q), q โˆˆ Q โ€ฒ L . By Property (2), no two distinct q, q โ€ฒ โˆˆ Q L are assigned to the same region. Consider the subgraph of inc(F [ฯ„ ]) induced on all vertices in L and R q , q โˆˆ Q โ€ฒ L . Contracting each region R(q), q โˆˆ Q โ€ฒ L , one obtains a supergraph of a K t+1,t+1 . Thus, inc(F [ฯ„ ]) has a K t+1,t+1 as a minor, implying that its treewidth is at least t + 1, a contradiction. The correctness of Rule 3 will be shown with the use of a theorem by Mader. Proof. Suppose F has a valid strong W โ‰คt -backdoor set B of size k with B โˆฉ S = โˆ…. To arrive at a contradiction, we exhibit a truth assignment ฯ„ to B such that inc(F [ฯ„ ]) has treewidth at least t + 1. To prove the claim, we first show a lower bound on |Q \ N B(Os) (B)| in terms of |Z| and |O s |. Since, by Property (3), each vertex z โˆˆ Z has degree at least one in B(W ), W โˆˆ O s , there are at least |Z| ยท |O s | edges in B m (O s ). Since, by Property (4), each vertex from Q m has degree at most 3nb(t), we have that |Q m | โ‰ฅ |Z|ยท|Os| 3nb(t) . By Rule 2, no set of t ยท 2 k + 1 vertices from Q m has the same neighborhood. Therefore, |Q| โ‰ฅ |Z|ยท|Os| 3nb(t)t2 k . Let d denote the number of edges in B(O s ) with one endpoint in B. Thus, |N B(Os) (B)| โ‰ค d. Since |S| โ‰ฅ 6|B|nb(t) and the degree of any vertex in S is at least the degree of any vertex in B, we have that the number of edges incident to S is at least 6nb(t)d in B(O s ). Thus, |Q| โ‰ฅ 6nb(t)d 3nb(t) = 2d. 
Therefore, N B(Os) (B) contains at most half the vertices of Q, and we have that |Q \ N B(Os) (B)| โ‰ฅ |Z|ยท|Os| 3nb(t)t2 k+1 . By Lemma 5, for every q โˆˆ Q \N B(Os) (B) there is a truth assignment ฯ„ โˆˆ 2 B such that inc(F [ฯ„ ]) contains all vertices from R(q). Since |2 B | = 2 k , there is a truth assignment ฯ„ โˆˆ 2 B and a subset Q โ€ฒ โŠ† Q\N B(Os) (B) of at least |Q\N B(Os) (B)|/2 k โ‰ฅ |Z|ยท|Os| 3nb(t)t2 2k+1 vertices such that inc(F [ฯ„ ]) contains all vertices from R(q) for every q โˆˆ Q โ€ฒ . This proves Claim 1. and Q โ€ฒ is as in Claim 1. Thus, no vertex from Z โ€ฒ and no vertex from qโˆˆQ โ€ฒ R(q) is removed from the incidence graph by applying the truth assignment ฯ„ to F . We will now merge vertices from H โ€ฒ in such a way that we obtain a minor of inc(F [ฯ„ ]). To achieve this, we repeatedly merge a part A โˆˆ P into a vertex z โˆˆ Z such that z has a neighbor q in H โ€ฒ such that R(q) = A. In the incidence graph, this corresponds to contracting R(q) โˆช {z} into the vertex z. After having contracted all vertices from Q โ€ฒ into vertices from Z โ€ฒ , we obtain therefore a minor of inc(F [ฯ„ ]). Our objective will be to show that the treewidth of this minor is too large and arrive at a contradiction for B being a strong W โ‰คt -backdoor set of F . To prove the claim, we start with H โ€ฒโ€ฒ and Q โ€ฒโ€ฒ as copies of H โ€ฒ and Q โ€ฒ , respectively. We use the invariant that every connected component of H โ€ฒโ€ฒ [Z โ€ฒ ] is a minor of inc(F [ฯ„ ]). For any part A of the partition P , let R A denote the set of vertices {q โˆˆ Q โ€ฒโ€ฒ : R(q) = A}. As long as Q โ€ฒโ€ฒ = โˆ…, select a part A of P such that |R A | โ‰ฅ 1. Let U := qโˆˆR A N H โ€ฒโ€ฒ (q). By the construction of H โ€ฒ and B(O s ) (Property (2)), we have that |U | โ‰ฅ nb(t) + |R A | โˆ’ 1. Claim 2 entails that inc(F [ฯ„ ]) has treewidth at least t + 1. Since ฯ„ is a truth assignment to B, this is a contradiction to B being a strong W โ‰คt -backdoor set of F . This shows the correctness of Rule 3 and proves Lemma 7. The number of choices the algorithm has in the nondeterministic steps is upper bounded by obs(k,t) k ยท 2 k ยทsame(k,t) same(k,t) ยทk, and each series of guesses leads to a set S of at most 6knb(t) variables. Thus, the set S * , the union of all such S, contains 2 O(t 3 ยทkยท4 k ยทpolylog(t)) variables, where polylog is a polylogarithmic function. Concerning the running time, each obstruction-template is computed in time O(n 2 ) by Lemma 4 and their number is upper bounded by a constant. The execution of Rule 2 and the construction of B(O s ) need to compare the neighborhoods of a quadratic number of vertices from Q m . Since each vertex from Q m has a constant sized neighborhood, this can also be done in time O(n 2 ). Thus, the running time of the algorithm is quadratic. This proves Lemma 1. The incidence graph has small treewidth This subsection is devoted to the proof of Lemma 2. We are going to use Arnborg et al.'s extension [3] of Courcelle's Theorem [9]. It gives, amongst others, a linear-time algorithm that takes as input a graph A with labeled vertices and edges, a tree decomposition of A of constant width, and a fixed Monadic Second Order (MSO) sentence ฯ•(X), and computes a minimum-sized set of vertices X such that ฯ•(X) is true in A. First, we define the labeled graph A F for F . The set of vertices of A F is lit(F ) โˆช cla(F ). The vertices are labeled by LIT and CLA, respectively. The vertices from var(F ) are additionally labeled by VAR. 
The set of edges is the union of the sets { xยฌx : x โˆˆ var(F ) } and { cโ„“ : c โˆˆ cla(F ), โ„“ โˆˆ lit(c) }, edges in the first set are labeled NEG, and edges in the second set are labeled IN. Since a tree decomposition of A F may be obtained from a tree decomposition of inc(F ) by replacing each variable by both its literals, we have that tw(A F ) โ‰ค 2 ยท tw(inc(F )) + 1 and we obtain a constant-width tree decomposition of A F in this way. The goal is to find a minimum size subset X of variables such that for each truth assignment ฯ„ to X the incidence graph of F [ฯ„ ] belongs to G โ‰คt , where G โ‰คt denotes the class of all graphs of treewidth at most t. For testing membership in G โ‰คt we use a forbidden-minor characterization. As proved in a series of papers by Robertson and Seymour [29], every minor-closed class G of graphs is characterized by a finite set obs(G) of forbidden minors. That is, obs(G) is a finite set of graphs such that a graph G belongs to G if and only if G does not contain any graph from obs(G) as a minor. Clearly G โ‰คt is minor-closed. We denote its finite set of obstructions by obs(t) = obs(G โ‰คt ). The set obs(t) is explicitly given in [4] for t โ‰ค 3 and it can be computed in constant time [1,21] for all other values of k. Next we are going to formulate an MSO sentence that checks whether for each truth assignment ฯ„ to X, the incidence graph of F [ฯ„ ] does not contain any of the graphs in obs(t) as a minor. We break up our MSO sentence into several simpler sentences and we use the notation of [14]. The following sentence checks whether X is a set of variables. var(X) = โˆ€x(Xx โ†’ VARx) We associate a partial truth assignment to X with a subset Y of lit(F ), the literals set to 1 by the partial truth assignment. This subset Y contains no complementary literals, every literal in Y is either a variable from X or its negation, and for every variable x from X, x or ยฌx is in Y . The following sentence checks whether Y is an assignment to X. ass(X, Y ) = โˆ€y[Y y โ†’ ((Xy โˆจ (โˆƒz(Xz โˆง NEGyz))) โˆง (โˆ€z(Y z โ†’ ยฌNEGyz)))] โˆงโˆ€x[Xx โ†’ (Y x โˆจ โˆƒy(Y y โˆง NEGxy))] To test whether inc (F [ฯ„ ]) has a graph G with V (G) = {v 1 , . . . , v n } as a minor, we will check whether it contains n disjoint sets A 1 , . . . , A n of vertices, where each set A i corresponds to a vertex v i of G, such that the following holds: each set A i induces a connected subgraph in inc(F [ฯ„ ]), and for every two vertices v i , v j that are adjacent in G, the corresponding sets A i , A j are connected by an edge in inc (F [ฯ„ ]). Deleting all vertices that are in none of the n sets, and contracting each of the sets into one vertex, one obtains G as a minor of F [ฯ„ ]. To test whether A F has G as a minor can be done similarly, except that we need to ensure that each set A i is closed under the complementation of literals (i.e., x โˆˆ A i iff ยฌx โˆˆ A i ). The following sentence checks whether A is disjoint from B. disjoint(A, B) = ยฌโˆƒx(Ax โˆง Bx) To check whether A is connected, we check that there is no set B that is a proper nonempty subset of A such that B is closed under taking neighbors in A. An assignment removes from the incidence graph all variables that are assigned and all clauses that are assigned correctly. Therefore, the minors we seek must not contain any variable that is assigned nor any clause that is assigned correctly. The following sentence checks whether all vertices from a set A survive when assigning Y to X. 
survives(A, X, Y ) = ยฌโˆƒx(Ax โˆง (Xx โˆจ โˆƒy(Xy โˆง NEGxy) โˆจ โˆƒy(Y y โˆง INyx))) We can now test whether a G-minor survives in the incidence graph as follows: Our final sentence checks whether X is a strong W โ‰คt -backdoor set of F . Strong t (X) = var(X) โˆง โˆ€Y [ass(X, Y ) โ†’ โˆ€ Gโˆˆobs(t) ยฌG-minor(X, Y )))] Recall that we assume t to be a constant. Hence |Strong t | = O(1). Moreover, the tree decomposition of A F that we described has width O(1). We can now use the result of Arnborg et al. [3] that provides a linear time algorithm for finding a smallest set X of vertices of A F for which Strong t (X) holds. This completes the proof of Lemma 2. Conclusion We have described a cubic-time algorithm solving SAT and #SAT for a large class of instances, namely those CNF formulas F that have a strong backdoor set of size at most k into the class of formulas with incidence treewidth at most t, where k and t are constants. As illustrated in the introduction, this class of instances is larger than the class of all formulas with bounded incidence treewidth. We also designed an approximation algorithm for finding an actual strong backdoor set. Can our backdoor detection algorithm be improved to an exact algorithm? In other words, is there an O(n c )-time algorithm finding a k-sized strong W โ‰คt -backdoor set of any formula F with sb t (F ) โ‰ค k where k, t are two constants and c is an absolute constant independent of k and t? This question is even open for t = 1. An orthogonal question is how far one can generalize the class of tractable (#)SAT instances.
Researcher development in universities: Origins and historical context This article explores the origins of researcher development in British universities. Its principal aim is to provide a coherent, and reasonably succinct, account of the evolution and development of researcher development that is as consistent as possible with what is known about the development of the Western university, the history of the research doctorate and the emergence of the research university. The main conclusion is that the origins of researcher development in the modern university can be found in the philology of the early modern university, which in turn emerged from the accumulation of knowledge in Western Christendom from other places and other times. Other conclusions are that there was little researcher development in the medieval university, and that the โ€˜traditionalโ€™ model of researcher development, centred on the PhD, is much more recent than is commonly supposed, so that, from a long-term perspective, the โ€˜traditional modelโ€™ may be but one stage in its continuing development. The article also develops a model that locates researcher development within a series of intellectual contexts: from the research process itself, to the advancement of knowledge more generally, and, finally, to changes in conceptions of knowledge itself. Introduction This article is about the origins of researcher development in British universities. It aims to provide a resource for those who wish to research in the emergent field of researcher development, and those interested in innovation in the practice of researcher development, by providing the historical context within which to frame their work. It seeks to clarify where researcher development in universities is coming from, literally and metaphorically. It offers a single source of knowledge about the origins and history of researcher development across all the stages of development of the Western university from the Middle Ages to the start of the twenty-first century. There are several reasons why this is an important issue. First, the emergence of the researcher developer as a third-space profession (Whitchurch, 2008) has enhanced the significance of enquiry and innovative practice in researcher development and its historical context. Second, the increasing importance of knowledge in the modern world is recognized in the popularity of the term 'knowledge society'; the development of a knowledge society underlines the importance of research, and hence also the importance of researcher development. Third, the position of the PhD is facing potential challenges from other forms of researcher development including, for example, the emergence of the MRes, the filtering down of researcher development Background Researcher development in British universities is a very recent phenomenon. Google's Ngram viewer shows that the overwhelming majority of occurrences of the term 'researcher development' has taken place in the twenty-first century (Google Books Ngram, 2019). This raises the question of the origins and evolution of researcher development. This article reports the results of a quest for the origins of researcher development in Britain. Our enquiry took us back to the beginnings of the Western university, as it is possible that the reason(s) for the emergence of universities could have enduring relevance down the ages to the current purpose(s) of universities, including their contribution to the advancement of knowledge. 
It finishes at the end of the twentieth century because it is not our intention for this article to become involved in the current debates on researcher development. Indeed, our intention in producing this article is to provide the historical context within which these current debates can be framed. This article adopts a chronological approach, evaluating what we know about researcher development in each of the three main stages of the Western university. These stages are: the medieval university; the Renaissance and early modern university; and the modern university, which emerged in the nineteenth century. This is the consensus model of the main developmental stages of the Western university. It was, for example, the categorization used by the (now) European University Association (EUA) for their monumental A History of the University in Europe (De Ridder-Symoens, 1992, 1996Rรผegg, 2004). For the purposes of this article, 'researcher development' is defined as the enhancement of the capacity and disposition to engage in research. We define research as the intentional accumulation of new knowledge. Research is not the same as scholarship, and we follow Lewis Elton (1992) in defining scholarship as the critical interpretation of existing knowledge. We started, of course, by searching the literature for an already existing history of researcher development, but we were unable to find any single publication on the history of researcher development per se. However, we found a large literature on the history of universities (including, for example, the volumes of the EUA cited above), another large literature on the history of the research doctorate (including, for example, the seminal works of Renate Simpson, 1983Simpson, , 2009, on the development of the PhD in Britain) and another body of work on the history of the research university (including, for example, Clark, 2006 andAxtell, 2016). The task we set ourselves was to distil a coherent, and reasonably succinct, account of the history of researcher development in universities, particularly in Britain, that is as compatible as possible with these literatures. Researcher development in the medieval university During the eleventh and twelfth centuries, the Latin Church was growing geographically and in influence, power and authority over the lives of people of Latin Christendom (Bartlett, 1993). Consequently, it needed more literate and educated clerics for its extending reach and expanding work. In 1079, Pope Gregory VII instructed all cathedrals to establish cathedral schools (Grant, 2004). Most of the earliest universities of Europe had their origins in these cathedral schools (Pedersen, 1997). Their teachers were in holy orders, and their primary purpose was to provide an education that would prepare students for service to the Latin Church. Not all those who enrolled at such institutions, however, became clerics, any more than current students who enrol on history degrees necessarily go on to become historians. The growing reliance on written records in the twelfth and thirteenth centuries enlarged the number of opportunities for employment of literate and university-educated people (Rospigliosi et al., 2016). 
Students were initially enrolled in the faculty of arts, where their studies were centred on the words of the Bible, which for many people in the Middle Ages (and later) contained the literal words of God (McGrath, 2001), other scriptural texts, particularly the teachings of the major Church fathers, and the trivium (comprising grammar, reasoning (dialectic) and rhetoric), followed by the quadrivium (comprising arithmetic, geometry, astronomy and music). Medieval universities offered doctorates, but not as a research degree (Cobban, 1975). Originally, the master's degree and the doctorate were equivalent terms for a university teacher, with some variation by region and by faculty. Initially, the title of doctor was preferred in the universities of southern Europe and the title of master was favoured in the universities of northern Europe. The arts, philosophy and theology faculties preferred master, and medicine and law preferred doctor. In theology, the title doctor did not replace master until the fifteenth century. 'In 1476, the dean of the arts faculty at Leipzig felt the need to remind people, "The master of arts is the same as the doctor's degree"' (Clark, 2006: 188). In order to explore researcher development in the medieval university, it is necessary to understand research in the medieval university, and in order to understand research in the medieval university, it is necessary to understand the advancement of knowledge more generally, which, in turn, means looking at the nature of knowledge in the medieval university. Knowledge in the medieval university The primary source of knowledge in the medieval university was text. This is not surprising, as Latin Christianity was a text-based religion and the medieval university was an institution of the Latin Church staffed by clerics of the Latin Church. Islamic scholars referred to Christians as 'people of the book' (Burnett, 1997). For the most part, the texts themselves were not challenged as they were sanctioned as 'holy scripture' by the Latin Church (Turner, 2014). There was a hierarchy of esteem among domains of knowledge within the medieval university, in which spiritual (sacred) knowledge occupied a higher place than secular (profane) knowledge. This is reflected in recognition of theology as the 'queen' of the academic subjects (Hannam, 2009). Secular knowledge was relatively unimportant, at least compared to knowledge of the heavenly realm, that is, spiritual knowledge. The Christian Church had traditionally held a low opinion of the pursuit of secular knowledge. Indeed, the pursuit of earthly knowledge had been denounced by St Paul who had the most influence on the development of Christianity other than Jesus himself. And St Augustine who, in the Middle Ages, was the leading figure among the Christian fathers had associated the acquisition of earthly knowledge with the sin of pride (Freeman, 2002). However, it was apparent that a significant part of the success of Islam, in the centuries up to the eleventh century, lay in its superior secular knowledge and that, in this earthly realm, knowledge is power. Consequently, the Latin Church developed an interest in acquiring the secular knowledge possessed by Islamic countries, particularly their practical and scientific knowledge. 
The unlocking of this secular knowledge in a way that was consistent with the Christian Church was entrusted to the learned clerics who were expert in the interpretation of Christian scripture, that is, those clerics who prepared students for the priesthood, mainly found in the universities of Europe (Hannam, 2009). The advancement of knowledge The modern idea that a university education is a vehicle for the dissemination of new knowledge was largely unknown to medieval university education. There was relatively little accumulation of new knowledge within university education during the Middle Ages, as evidenced by the stability of texts. Many of the texts, such as Peter Lombard's Sentences, first used in the twelfth century, were still in use three hundred years later in the fifteenth century (Harkins, 2015). The greatest advance in knowledge was the result of accumulation of knowledge within Western Christendom from other places (particularly Islamic countries) and other times (particularly ancient Greece). Most of that 'new' knowledge was in the fields of the natural sciences, medicine, philosophy and mathematics. Research If research is the systematic accumulation of new knowledge, then there was very little in the Middle Ages. But the origins of research can be seen in the addition to knowledge available within Western Christendom from the texts from Islamic countries, many of which originated in ancient Greece (Burnett, 1997). Such knowledge was text-based, and most had been copied many times and had been translated into another language at least twice (into Arabic and then into Latin); much was incomplete or fragmentary, and some texts were forgeries (Hiatt, 2004). This meant that those who were responsible for integrating it into the body of knowledge within Western Christendom, mostly university scholars, had the problem of establishing its authenticity, original form and meanings. Tackling these problems led to the development of practices that would form the basis of the theory and practice of philology in the Renaissance and early modern university (Turner, 2014). Researcher development How did medieval scholars learn how to establish the authenticity, original form and meanings of the new texts? First, it is important to recognize that most medieval academics were engaged in scholarship, applying logic to the accepted texts of the Latin Church, rather than in research. For the small minority who were engaged in research, there appears to have been no explicit training or development. These medieval scholars did not have access to the philological methods that had been developed in the ancient world (Turner, 2014). This means that they were reduced to hard experience and following the successful practices of others who were also engaged in this work. By the late Middle Ages, a body of practices had been developed that represented an embryonic form of philology. Perhaps the most famous example of the use of these practices was in the exposure of the Donation of Constantine as fraudulent by Lorenzo Valla in 1440 using recognized philological methods. The Donation of Constantine was a forged decree whereby the Emperor Constantine was supposed to have passed over imperial authority of the western part of the Roman Empire to the Pope. The main conclusion is that the medieval university produced trained scholars skilled in the interpretation of accepted texts, but not trained researchers. 
Researcher development in the Renaissance and early modern university By the end of the medieval period, the university 'project' had lost its impetus. Green (1974: 31) writes of Oxford in the late medieval period: In the fifteenth century the university entered into ... a period of contraction. Its numbers apparently fell from 1,500 to 1,000; only 27 men took the masters' degree in 1456-7. Contemporary letters complained of the shortage of students, teachers and endowments. There are many reasons for this decline, including the corruption in the Latin Church in the fourteenth and fifteenth centuries, which was increasingly apparent. The goal of providing a higher education that would prepare people for service in the Latin Church therefore became less compelling, and certainly less uplifting. In these later years of the Middle Ages, service to the Latin Church was a less inspiring vision for universities and a less inspiring vocation for talented young men. Consequently, student numbers fell and recruitment became more difficult, which resulted in financial difficulties for the universities (Cobban, 1988). The fall in student numbers led to a reassessment of the priorities of universities. Instead of focusing on the provision of an educated clergy for the Latin Church, they increasingly shifted their focus on to providing a higher education for sons of the relatively affluent. The need to recruit students and generate fee income shifted the mission of universities in a more student-centred direction. The European university became more responsive to emerging developments within Europe, including the Renaissance, the growth of the nation state and the growth of literacy. The university curriculum was extended to reflect the growing interest of the literate classes in the Italian Renaissance and humanistic studies. The spirit of the Renaissance was to seek and preserve the best from across the whole history of civilization, believed, at that time, to have had its origins in ancient Greece (Alexiou, 1990;Miles, 2010). The Renaissance opened up a range of new occupations and increased opportunities within some existing professions, including those of archivists, antiquarians, librarians, curators and personal assistants to those involved in the intellectual revolution in Europe that the Renaissance represented (Rospigliosi et al., 2016). This provided the impetus for the adoption of classical studies within the university that became such an important part of university education for centuries to come. The growth of literacy in Europe in the early modern period also created a rising demand for teachers, a demand that was satisfied by an increasing number of new graduates. The rise of the nation state in Europe also created new forms of employment for literate professionals. Larger political units created the need for administrators, diplomats and other civil servants. An indication of the growing role of university graduates in governing the new nation state can be found in their increasing numbers in the House of Commons: in 1563, only 67 members had a university education; by 1593, only three decades later, the number had risen to 160 (Curtis, 1959: 59). In order to see where researcher development fitted into these transformed institutions, it is necessary to look at its wider context of research. And in order to see where research fitted into its wider context of the advancement of knowledge more generally, it is necessary to look at the still wider context of knowledge itself. 
Knowledge in the Renaissance and early modern university The decline in the power and esteem of the Latin Church in the late Middle Ages inevitably brought with it a decline in the esteem of spiritual knowledge relative to secular knowledge. At the same time, the Renaissance elevated the esteem of knowledge drawn from ancient Rome and Greece. What counted as knowledge, as far as university education was concerned, continued to be text-based. There was a prima facie case that knowledge that had survived for centuries, even millennia, was of more value than new knowledge that may prove transient. Early Renaissance scholars, such as Petrarch, accumulated manuscripts from monasteries across Europe. The Renaissance disposition to collect, preserve and accumulate is evidenced by the emergence of the 'cabinet of curiosities', the 'room of wonders', the building of galleries and the development of museums. The Renaissance turned the common human impulse for acquisition towards the accumulation of knowledge. It was from this time that the notion of contributing to the stock of knowledge became a recognizable and selfaware activity (Burke, 2016). The advancement of knowledge The key element in the birth of the Renaissance was the recovery of products of the human mind from ancient Rome and Greece. This was not, of course, new knowledge; it was very old knowledge, but it was new to the universities of Europe. A significant dimension of this development is that it constituted an expansion of text-based knowledge far beyond texts that were of particular concern to the Latin Church (Burke, 1997). The new texts from ancient Rome and Greece offered an alternative source of authority, belief and moral justification -key issues for university education of the Renaissance. If universities aspired to produce graduates who could tell right from wrong morally, intellectually and aesthetically, then these were key texts. And if a university education was to introduce young men to the products of the finest minds from the whole of Western civilization, then they were even more important. The accumulation of knowledge became an increasingly important part of the advancement of knowledge in the Renaissance and early modern university. Whereas the medieval universities aspired to the pursuit of truth, Renaissance and early modern universities were more focused on the pursuit of knowledge. The recovery of this knowledge was not, however, unproblematic. In many cases, the original documents were destroyed or lost, and the recovered texts were copies of varying quality and many were forgeries (Hiatt, 2004). Often, the recovered texts were translations, which introduced further sources of error or misinterpretation. The need to resolve problems of this kind gave rise to the development of the philological practices from the medieval university into a key discipline of the early modern university. The development of philology implied a move away from allegorical interpretation of text and towards a more literal interpretation of text and hence knowledge. The main goal of Renaissance philology was to accumulate knowledge and wisdom from texts recovered from ancient Rome and Greece, but it was only a matter of time before the methods of philology were applied to scripture and documents of the Latin Church itself. The weakened Latin Church was unable to prevent this, and the universities had become much more independent. 
There is therefore a direct link from the development of philology to challenges to the Latin Church's interpretation of scripture and related Church-based documents, to new translations of scripture and translations into the vernacular languages of Europe. Philology thus played an important part in the events that would lead to the Reformation. Research Universities of the Renaissance and early modern period were mostly concerned with the education of young gentlemen, and with knowledge that could impact on that education. Universities sought to discover and then convey an appreciation of the 'best which has been thought and said in the world' (Arnold, 1869: viii). In the context of the Renaissance and early modern world, that meant the fruits of the finest minds of Western civilization throughout the ages and that meant discovering such fruits and working on them to eliminate errors from inaccurate copying and misinterpretations in translation, as well as exposing fraudulent documents. This implied the development of philological skills. The first philological skill was textual analysis. This involved work focused on words and comparison of words from different versions of the same document, checking for anachronisms, mistranslations and so on. The strategy of gaining knowledge and understanding of a text by examining its individual words as parts of that text can be recognized as a reductionist approach. The outcome was often a new edition or a new translation of an old text. A good example of this work and its significance is the new translation of the New Testament from the Greek by Erasmus. The second major philological strategy and skill was contextual analysis. According to philologists, in order to really make sense of texts from bygone ages or faraway places, it was necessary to understand the context(s) in which they were written. It was necessary to understand the societies in which they were produced, their historical circumstances and the motivations of those who produced them. The study of these contexts would eventually produce separate new fields of study within the broader domain of the humanities, such as history, and separate fields of study within the broader domain of the social sciences, such as anthropology (Turner, 2014). As well as contextual analysis and the reductionism of textual analysis, philology developed methods of comparative analysis that eventually played an important role in the rise of linguistics as a separate discipline. Philology also played a crucial role in the development of critical thinking: 'Philology in Europe was, at its zenith, one of the hardest sciences on offer, the centrepiece of education and the sharpest exponent if not the originator of the idea of "critical thinking"' (Pollock, 2009: 931). Philology asked classic questions that defined critical thinking, such as, 'who is the author of the text?', 'what were their motivations and interests?' and 'what assumptions would have been made in the time and place that it was written?' Researcher development The lack of impact of the scientific revolution on university education has often been noted (see, for example, Ashby, 1958). This has been taken as evidence of the backwardness of university education and universities in general in the seventeenth and eighteenth centuries. 
This is unfair, because the primary goal of the Renaissance and early modern university was to offer a university education that would equip young gentlemen for lives after university, and it was not clear how the physical sciences could play a part in the lives of the large majority of such young men. In England, for example, most graduates of Oxford and Cambridge would go on to positions in the Anglican Church and it was by no means obvious that the latest knowledge of the natural sciences would contribute to that vocation. It was much more important that such graduates be equipped with text-based knowledge from Judeo-Christian sources and the ancient world. Mainstream research in this context was, therefore, philological. Researcher development meant, for the most part, the development of philological skills, which flourished in the early modern university, particularly in Germany (Bod, 2013; Turner, 2014). It was there that philological training in textual analysis, contextual analysis and comparative analysis was most advanced. It was there also that critical thinking reached its highest level of development in the early modern world, and it was there that the academic seminar was developed as the primary vehicle for developing those skills (Clark, 1989). Clark (2006: 174) provides a good account of how the philological seminar in German universities provided a 'methodological training, practice in grammatical analysis, textual interpretation, and critique'. Researcher development in the modern university In the eighteenth century, higher education in the Western university was again stagnating: By the eighteenth century universities everywhere were in the doldrums, confined to the training of priests or pastors, a few civil servants, and those gentry too poor to educate their sons by private tutors and the increasingly popular 'grand tour' of the Continent ... most universities in eighteenth century Europe were moribund, with idle professors ... despised by the intellectuals of the Enlightenment. (Perkin, 1997: 14-15) The main exception to this picture was the field of philology, which flourished in eighteenth-century German universities. Bod (2013: 143) describes philology as the 'queen of early modern learning'. This contrast was particularly striking as German universities were otherwise regarded as among the most backward in Europe in the eighteenth century (Perkin, 1997). The solution to the increasing irrelevance of universities arrived in the form of Wilhelm von Humboldt, who led the second great transformation in the main goal of the university. Humboldt was a philologist and statesman who was appointed, in 1808, to be director of the education section within the Prussian ministry of the interior. According to his philosophy, it was not the purpose of the university to serve the needs of students, but rather it was the purpose of the students as well as the staff of the university to serve the pursuit of knowledge itself. In Humboldt's own words (in translation): 'At the highest level, the teacher does not exist for the sake of the student: both teacher and student have their justification in the common pursuit of knowledge' (Humboldt, 1970: 243). That was significantly different from the erstwhile university education of the Renaissance and early modern university. The new University of Berlin statutes established a doctor of philosophy degree above the master of arts award. 
In addition to a written examination, candidates for the doctor of philosophy degree had to produce a written dissertation and defend it. The written dissertation was not new in German universities, but previously it had been evaluated in terms of its erudition and the extent of knowledge displayed; at Berlin, the dissertation for the doctor of philosophy award was expected to express an original thesis, that is, it had to contain a contribution to knowledge: The master's degree is awarded to whoever can skilfully renew and well order what has been learnt, and thus promises to be a useful link in the transmission of knowledge between generations. The doctor's degree is awarded to whoever shows eigentümlichkeiten [personality, peculiarity, originality] and erfindungsvermögen [creativity] in the treatment of academic knowledge. (Wright, 1827; quoted in Clark, 2006: 211) It was easy to exclude knowledge of the physical world from a university education when it was based on the perceived highest thoughts and writings since antiquity; it was much less easy to exclude it from a university education aimed at the pursuit of knowledge. This latter goal provided an opening for the entry of the physical sciences into university education. There were several other factors supporting that development. First, the defeat of Prussia by Napoleon led some influential people in Prussia to argue that the power of scientific knowledge could be harnessed to serve national goals - they saw knowledge, particularly scientific and technological knowledge, as power (Noble, 1994). Second, the accumulation of scientific knowledge outside universities had gone from strength to strength, and it was becoming increasingly difficult to ignore. Third, philosophy was particularly influential in German universities, and this meant that, institutionally, natural philosophy was well placed to expand. It may also be relevant that Wilhelm von Humboldt's brother, Alexander von Humboldt, was an eminent scientist and an influential advocate for scientific knowledge, having worked in many countries to establish scientific academies (Wulf, 2015). Wilhelm von Humboldt believed that the study of philology played an important role in training empirical scientists. For Humboldt, therefore, the physical sciences were built on foundations of philology. The contribution of philology to the training of scientists was even more direct in the following respects. First, the tradition of 'critical thinking' of German philology could be harnessed to support the development of science in university education - science can be seen as the product of empiricism and reason, and reason itself can be seen as the product of logic and critical thinking. Second, the 'academic seminar' had been developed as a process for training philologists in German universities, and this method was adopted for training scientists (Clark, 1989, 2006). Third, the textual analysis of classical philology supported the idea and strategy of reductionism, that is, increasing understanding of the whole by focusing on enlarging understanding of its individual parts, which offered a model for specialization (and reductionism) in the empirical sciences as a strategy for research. Fourth, philology is empirical in that it uses 'hard' text, rather than the immaterial ideas of philosophical enquiry to test truth claims. The proximate goal of classical philology was the authentication, correction and identification of the intended meaning of text-based knowledge. 
Its larger goal was to contribute to the stock of texts that were authentic, accurate and conveyed the meaning intended by their authors. This meant that philology was about the accumulation of knowledge, in a quantitative sense. This gave philology a natural affinity with science because the great project of the scientific revolution was to enlarge the stock of knowledge of the natural world (Henry, 2002). Knowledge is pursued so that it may be found. The success of the pursuit of knowledge can therefore be assessed by the extent of the discovery of new knowledge. In other words, the aim of the pursuit of knowledge is the discovery of knowledge, and that meant research. The higher goal of the university became the accumulation of new knowledge and its dissemination (Noble, 1994). Other Prussian universities followed Berlin's lead, and this model was adopted in other countries as well. There are at least three reasons for this. First, the commitment of German professors to research and publication provided them with a source of reputation not enjoyed by university staff in other universities that restricted themselves to teaching (Clark, 2006). Second, in the latter part of the nineteenth century, German industry was flourishing, and this was attributed, at least in part, to the adoption of research, especially research into the physical sciences. Third, there was a large flow of students from other countries, especially the USA, into German universities, particularly from those wanting training in scientific research, who viewed science as a vocation or at least as a source of employment (Simpson, 1983). German universities, which had been regarded as the most backward in Europe in the eighteenth century, were transformed during the course of the nineteenth century into those that were viewed as the most successful. They attracted students and scholars from across the world. By the end of the nineteenth century, not only were the German universities seen as being at the leading edge, but also they were the ones that had made most progress. The conclusion was clear: if you wanted to build a successful university, you needed to prioritize the pursuit of knowledge. Consequently, the universities that were established subsequently (that is, most of the universities that exist in the world today) tended to follow the model of the research-led university initiated by Humboldt (Axtell, 2016). By the beginning of the twentieth century, many universities in the USA had adopted the kind of researcher development programmes that originated in Germany. In the USA, it took the following form. First, it was centred on the PhD -the very title, doctor of philosophy, reflects its origins in Germany, where the philosophy faculties were relatively powerful and where 'natural philosophy' was the traditional term for the study of the natural world, that is, the physical sciences. Second, to be successful, candidates had to provide evidence that they had developed the capacity to make an original contribution to new knowledge (Clark, 2006). Consequently, the distinguishing feature of the PhD was that it included the requirement for candidates to plan and carry out a research project, and that capacity could be tested against the criterion of whether the outcome was worthy, at least in part, of dissemination by publication. Third, a hierarchy of awards was established with the PhD at the summit, as a research degree that was positioned above the taught degree courses. 
Setting research training at the highest point of higher education reflected the values and priorities of the new research universities of the USA (and, of course, Germany). Fourth, increasingly, research universities in the USA required candidates for academic posts to have acquired a PhD as evidence of their capacity to contribute to the advancement of knowledge by research (Noble, 1994). The issue was less clear in the universities that pre-dated the University of Berlin. Those universities, such as Oxford and Cambridge, had their own values, beliefs and rituals. It had taken centuries to transform institutions whose primary role had originally been as seminaries for the Latin Church into institutions whose primary role was to serve the purposes of the students. The ordering of the priorities of these universities was embedded in traditions, structures and practices that were difficult to change. For most of these institutions, the intellectual history of the nineteenth century was one of a battle between those who attached primary importance to the education of students and those who attached primary importance to research. Mainstream Oxford and Cambridge resisted the idea of a research-based university until well into the twentieth century. This resistance to the idea of the research-based university was most clearly articulated in Newman's The Idea of a University (1852). For Newman, the main purpose of a university was to provide a university education, and the main purpose of a university education was to cultivate the mind and hence produce cultured and civilized young men (Newman, 1852;Tight, 2009). In the early decades of the twentieth century, however, changes were increasingly under way. First, scientists constituted a significant new learned profession, contrasted with the situation at the beginning of the nineteenth century when science (as natural philosophy) was, for the most part, the province of those with the income and/or time to pursue it as a hobby or personal interest. Second, the large flow of students to German universities from other countries, especially the USA, was joined by rapidly expanding numbers of students applying for postgraduate degree courses in those countries where these were available (Simpson, 1983). Third, just as German industrial and military power had accelerated in the first half of the nineteenth century alongside the 'modernization' of its universities, so the industrial and military power of the USA had accelerated in the second half of the nineteenth century alongside the modernization of its universities and their increasing commitment to research. Consequently, Bacon's oft-quoted dictum that 'knowledge is power' acquired a new resonance (Henry, 2002). In Britain, however, there were still those who harboured a belief in British economic dominance in the world, pointing to the size of its empire, the largest the world had ever known. This belief supported resistance by senior figures in British universities to the adoption of the 'foreign' PhD (Simpson, 1983). However, the experience of the First World War was a convincing demonstration of the growing military might, industrial power and economic strength of Germany and the USA. Finally, in 1917 a conference of UK universities recommended adoption of the PhD as a training in research (Simpson, 2009). The first person to be awarded one of the new doctorates from an English university received a DPhil from Oxford in 1919. 
The agreement reached in 1917 laid the foundation for what is now regarded as the traditional system of researcher development within English universities. The fact that the UK universities agreed to the same regulations for a PhD entitles it to be called a 'system'. However, particularly in the early years, its implementation was anything but systematic. For example, according to Aldrich (2010: n.p.): In 1929 Wittgenstein was awarded a PhD by Cambridge University. He had been a student of Russell but left in 1913 without a degree. In 1929 Ramsey was designated his supervisor and Wittgenstein presented as his thesis a work written 10 years before, away from Russell, away from Cambridge and while Ramsey, 14 years his junior, was still at school. However, 1917 was the turning point, so it is worth being explicit about the meaning of the term 'traditional system of researcher development', which could be interpreted as:
1. the development, through the completion of a PhD, of the capacity to make an original contribution to new knowledge
2. a structure of university education in which research degrees are positioned above taught courses, with the PhD as the highest award for which it is possible to enrol at a university
3. a course of PhD study that includes the experience of actually planning and conducting research supported by supervision and, usually, a programme of research seminars
4. the development of researchers by trained researchers.
This reflects the triumph of the vision of the research-led university. As it gained ground, the proportion of students who left university for positions in the Anglican Church in Britain declined, and the proportion going into education-based employment steadily rose. By the middle decades of the century, the Humboldtian revolution in UK universities was largely complete, and by that time the majority of UK graduates entered education-based employment upon graduation. They went on to research, further academic study, teacher training, other training, education administration or directly into teaching. When data on the destinations of university graduates was first published in the early 1960s, about two-thirds of the graduates remained in the education system after graduation (Bourner and Rospigliosi, 2008). The large majority of new graduates were thus engaged in the advancement of knowledge, directly or indirectly, through the accumulation or dissemination of knowledge, and they were prepared for this by the sort of education provided by the direct descendant of the Humboldtian university. This included the dissemination of recent additions to knowledge to the students of the university. It included the development of the students' critical faculties, that is, their ability to test ideas, assertions and evidence as the means by which claims to new knowledge could be evaluated. And it included the development of a questioning attitude as the byproduct of well-honed critical faculties. By the middle of the twentieth century, British universities had largely accepted the traditional researcher development system centred on the PhD, and had transformed their undergraduate education to support that system. However, it is easy to exaggerate the completeness and the depth of tradition in the 'traditional system of researcher development'. By the time of the Robbins Report of 1963, the majority of the academics recruited to British universities still had no PhD. 
The Robbins Report found that 45 per cent of academics recruited in the period 1959-61 had a doctorate. The report also observed that the proportion of university academics with doctorate degrees was rising (Committee on Higher Education, 1963). However, from the 1960s, there were significant countervailing forces. First, the expansion of higher education, partly a response to the Robbins Report itself; this outstripped the rise in the number of PhDs (Kogan and Kogan, 1983). Second, the expansion of vocational education in British universities. The second half of the twentieth century saw, for example, the development of business schools in universities, which expanded business and management studies in British universities. In recruiting staff to teach vocationally based subjects, greater emphasis was placed on practitioner experience relative to research-based qualifications (Bourner et al., 2001). Third, rebadging the polytechnics as universities in 1992 resulted in a large inflow of university academics from institutions that placed a relatively high value on vocational education, dissemination of knowledge through teaching and the application of knowledge relative to the discovery of new knowledge. The proportion of staff with PhDs was therefore much lower among the polytechnic lecturers than in the pre-1992 universities (Whitburn et al., 1976). By the start of the twenty-first century, the percentage of academics in British universities with PhDs was therefore probably significantly lower than it had been at the time of the Robbins Report four decades earlier. This is the historical context in which many university academics were able to look back to a heyday of the 'traditional system of researcher development' from which contemporary developments could be seen as a departure. Conclusion The reason for writing this article was to report what we discovered from an enquiry into the genealogy of researcher development in British universities. We wanted to be able to contextualize current issues in researcher development, including innovations in researcher development such as the MRes. We draw several main conclusions. There is a hierarchy of esteem in areas of knowledge within universities that has changed over time. In the medieval university, spiritual knowledge was positioned above secular knowledge. Text-based knowledge was most highly regarded, with the Bible, taken by many to be the word of God, the most revered of all. Also, old knowledge was more venerated ('venerable knowledge') than new knowledge. Within the domain of secular knowledge, products of the human mind (logic, rhetoric, grammar, mathematics and so on) occupied a position above knowledge of the physical world, based on fallible human senses. In the early modern university, the position of secular knowledge rose, especially the humanities. By comparison, knowledge of the physical world continued to be less valued. In the modern university, new knowledge of the physical world has become relatively more esteemed than knowledge of products of the human mind (the humanities) and many academics today, would not regard 'spiritual knowledge' as knowledge at all. There was little that we can recognize as researcher development in the medieval university. Why not? There were at least three reasons. First, old knowledge was revered more than new knowledge. Second, knowledge was, for the most part, treated as allegorical rather than literal. 
Third, most academics, who were virtually all clerics, were more concerned with 'truth' (a qualitative concept) than the accumulation of new knowledge (a quantitative concept). Researcher development, in an embryonic form, first appeared in the Renaissance and early modern university in the form of philology. This was much more developed in some universities, mostly in Germany, than in others. Training in philology had a significant influence on the development, in the nineteenth century, of training in the natural sciences through, among other things, the reductionism of textual criticism and the emergence of the research seminar. Then, similar methods of researcher development were applied to other academic subjects more generally. The 'traditional system' of researcher development, focused on the PhD, is a product of the modern university, and within the modern university it is surprisingly recent. In Britain, for example, it is a development of the first half of the twentieth century, and in many countries it was a development of the second half of the twentieth century. Thus, for example, the first PhD was not awarded in England until after the First World War, and the first PhD in Australia was not awarded until after the Second World War. These conclusions have several implications. First, there is a need for a change of mindset from the idea that current developments in researcher development are deviations from a time-honoured 'traditional model' of researcher development to a view that we are witnessing a system that is still in the process of developing. What is often viewed as the traditional model of researcher development may, when viewed from the longer perspective, simply be the current position in its continuing development. Second, the pecking order of knowledge may be much less stable than it currently appears. In the medieval university, spiritual knowledge was ascendant. The Renaissance and early modern university placed most weight on ethical, aesthetic and social knowledge based on the humanities, and it was only in the modern university that knowledge of the physical world became the most highly valued. Third, it is difficult to understand changes in researcher development without understanding changes in research, and it is difficult to understand that without understanding changes in the advancement of knowledge more generally. It is, in turn, difficult to understand that without understanding changes in knowledge more generally. Figure 1 illustrates this. In other words, researcher development exists within several intellectual contexts, and changes in any of those contexts are likely to impact on researcher development. These conclusions and their implications raise further questions, which help to provide an agenda for further enquiry. First, when we appreciate that researcher development exists within the contexts of research more generally, the advancement of knowledge more generally and conceptions of knowledge yet more generally, then it becomes apparent that understanding how researcher development is changing can be informed by asking the following questions:
• How are knowledge and conceptions of knowledge changing? What does that imply for researcher development?
• How is the advancement of knowledge more generally changing? What does that imply for researcher development?
• How is research itself changing? What does that imply for researcher development? 
Second, if we think of researcher development evolving as a result of changes in the nature of knowledge (and conceptions of knowledge), the advancement of knowledge and research, rather than as deviations from a time-honoured traditional model, then changes in each of these contexts would lead us to explore consequential changes in researcher development itself.
Exploring Natural Language Processing in Model-To-Model Transformations
In this paper, we explore the possibility of applying natural language processing in visual model-to-model (M2M) transformations. Therefore, we present our research results on information extraction from text labels in process models modeled using Business Process Modeling Notation (BPMN) and use case models depicted in Unified Modeling Language (UML) using the most recent developments in natural language processing (NLP). Here, we focus on three relevant tasks, namely, the extraction of verb/noun phrases that would be used to form relations, parsing of conjunctive/disjunctive statements, and the detection of abbreviations and acronyms. Techniques combining state-of-the-art NLP language models with formal regular-expression grammar-based structure detection were implemented to solve the relation extraction task. To achieve these goals, we benchmark the most recent state-of-the-art NLP tools (CoreNLP, Stanford Stanza, Flair, Spacy, AllenNLP, BERT, ELECTRA), as well as custom BERT-BiLSTM-CRF and ELMo-BiLSTM-CRF implementations, trained with certain data augmentations to improve performance on the most ambiguous cases; these tools are further used to extract noun and verb phrases from short text labels generally used in UML and BPMN models. Furthermore, we describe our attempts to improve these extractors by solving the abbreviation/acronym detection problem using machine learning-based detection, as well as by processing conjunctive and disjunctive statements, due to their relevance to performing advanced text normalization. The obtained results show that the best phrase extraction and conjunctive phrase processing performance was obtained using the Stanza-based implementation; however, our trained BERT-BiLSTM-CRF outperformed it on the verb phrase detection task. While this work was inspired by our ongoing research on partial model-to-model transformations, we believe it to be applicable in other areas requiring similar text processing capabilities as well. I. INTRODUCTION As one of the most established topics in natural language processing (NLP), information extraction is focused on extracting various structures of interest from unstructured textual information. Recent advances in deep learning and NLP fields enable the development of high-performing models by using large amounts of data and wide contexts to automatically extract relevant features, which can be transferred and reused in other related tasks. Such techniques enable complex context-driven detection of grammatical and semantic inconsistencies [1], extraction of relations, aspects or entities [2], [3], tagging of entities of interest in text [4], deduplication and identification of similarities or synonymous forms [5], and other similar tasks. Moreover, successful implementation of such tasks requires fundamental knowledge about multiple techniques at the intersection of information retrieval, computational linguistics, ontology engineering, and machine learning. This work is inspired by our previous research on NLP-enhanced information extraction in model-to-model transformations [6], [7]. However, the need for similar solutions was also identified in other areas involving visual modeling, such as business process modeling [8], [9], [10]. 
In this paper, we address the issue of relation extraction from graphical models, focused on the detection of semantic relationships within the given text. More specifically, we aim to extract subject-verb relations which can be easily extended to triplets (subject, verb, object) using associative or compositional relationships from the source model (for instance, a Use Case element is usually associated with one or more Actors using an Association relationship). Therefore, such relationships will be defined between two or more entities and represent certain connections between them. Many recent papers address relation detection between entities of predefined types (such as PERSON, ORGANIZATION, LOCATION) and their semantic relations using supervised learning [11], while we aim to perform more generalized extraction by extracting all available verb and noun pairs. This is not a trivial task, although it has been previously addressed in document processing using pattern-based analysis [12], distant supervision [13], [14] and rule-based extraction systems [15]. In addition to the extraction of verb/noun phrases from the text labels, in this paper, we also study the problem of identifying and properly interpreting abbreviations and acronyms, which is a very relevant topic in model-driven systems development, especially in the field of automated model transformations. While it may be handled using external sources, like acronym databases, dictionaries, or thesauri, real-world cases may be more complex to interpret due to ambiguities, contextual dependency, or simply the lack of proper text formatting (for instance, acronyms may be written in lowercase if a less formal communication or discourse context is considered, such as chatbots or tweets). Finally, we address the problem of processing conjunctive/disjunctive statements by parsing them into multiple 'subject-verb' relations. In the context of our research, they can later be combined with related elements to form valid associative relations (triplets). This is also a sophisticated problem due to natural language ambiguities or inconsistencies in the underlying NLP technology. All the above-mentioned issues are discussed in more detail in Section III. The main objective of this paper is to evaluate the capabilities of the most recent developments in NLP for processing text labels in graphical models and to validate their suitability by performing the extraction of noun/verb phrases from the names of model elements under certain real-world conditions and constraints which are usually not addressed in more generalized NLP-related research. To solve these problems, we first identify and enumerate multiple anti-patterns for naming model elements, extracted from a real-world dataset, which complicate this task and should be handled separately by using additional techniques. Further, we apply deep learning-based sequence tagging models, pretrained with augmented data to address some of these ambiguities, and combine them with predefined formal grammar-based extraction. In this paper, we specifically consider the processing of text labels in graphical models created using two prominent visual modeling standards, namely, Business Process Model Notation (BPMN) [16] and Unified Modeling Language (UML) [17]. To our knowledge, this research is one of the first attempts to apply novel deep learning-driven techniques for the extraction of information from such models. 
Additionally, we provide evaluations of two related tasks, namely, conjunctive/disjunctive statement processing and acronym detection, which may significantly enhance the performance of our developed relation extractors in this context. We consider our findings to be also applicable to other NLP topics that involve the processing of similar texts, such as process mining, aspect-based sentiment analysis, or conversational intelligence. Further in this paper, Section II gives a short introduction to model-to-model transformations with their reliance upon NLP functionality and provides a concise review of NLP techniques that we consider to be relevant to our research and model-to-model transformations in general. Section III summarizes the main challenges, which must be addressed when solving similar problems, and provides a structured list of element naming anti-patterns, which provide additional noise during automated text processing and illustrate the complexity of this problem. Further, solutions for three inter-related tasks are discussed: Section IV describes the verb/noun extraction task and the experimental results on this subject; Section V deals with the processing of conjunctive and disjunctive statements; similarly, Section VI presents abbreviation and acronym detection challenges together with the corresponding experimental results. Section VII provides a discussion of our experimental findings, the identified issues, and possible improvements. Finally, the paper is concluded with Section VIII providing certain insights on the future work and conclusions. II. INTRODUCING NLP TO MODEL-TO-MODEL (M2M) TRANSFORMATIONS Let us assume that a system analyst has a valid UML use case model, created either by himself or obtained from external parties, which he intends to use as a part of some system specification. Therefore, he wants to use it as a source of knowledge to develop a conceptual data model for that business domain in form of a UML class model. Model-to-model transformations enable direct reuse of the input model without the need to manually develop the target model; they also provide the benefits of transferring and reusing the whole logic of model transformations for other instances. Unfortunately, existing solutions provide only complete model transformations which are quite rigid due to their solid formal foundations and are very limited for integrations with complementary functionality, such as natural language processing [18], [19]. Therefore, in this section we will rely on our own development [7], [20] to demonstrate use cases for NLP-based transformations, as our solution provides the ability for the user to use intuitive drag and drop actions on certain model source elements, as well as provides relevant extension points to integrate required functionality. These actions trigger selective transformation actions to generate a set of one or more related target model elements and represent those elements in the opened target model diagram. In our example, we use the UML use case model as the source model, and UML class model as the target model. Furthermore, we present the situation where it is necessary to apply more advanced processing to produce a semantically valid fragment of a target model. We assume that the user dragged Actor element Customer from the UML use case model onto the opened UML class diagram Order Management (Fig. 1, tag 1), which triggered a transformation action to execute the specific transformation specification. 
This specification is visually designed and is specified to be executed specifically after an action (dragging an Actor element from the use case model onto the UML class diagram) is triggered. The transformation specification instructs the transformation engine to select the Customer element together with instances of Use Case elements associated with this Actor and transform them into UML Class elements and a set of UML Associations connecting those classes. Now, we assume that in the exemplary use case model, Customer is associated with two Use Case elements, namely Return back item and Fill in complaint form. This results in the generation of a UML class diagram fragment as presented in Fig. 1, tag 2. While at first sight this would seem like a straightforward and simple transformation, this particular example already illustrates a situation where certain NLP processing is required to acquire a correct result. The reason behind this is that the conditions defining the extraction of multi-word verb and/or noun phrases are non-trivial. In our case, the association between the two classes Customer and Item is named as the two-word phrase return back, which is extracted from the name of the source element, namely the Use Case element Return back item. Moreover, actual verb phrases are not limited to one or two words, like phrasal-prepositional verbs containing both a particle and a preposition (come up with), or may even be distributed across the whole phrase, e.g. when the particle is after the object (associate the object with), although such cases are observed less frequently in the formal language used in modeling practice. The above-mentioned examples are just sample cases where straightforward text chunking is not sufficient and certain involvement of NLP technology is required to obtain correct transformation results; a code sketch of this transformation step is given after the list below. Further, we provide more examples which may require additional steps for linguistic preprocessing:
• Hierarchical relations created after one element is dragged onto another if the text labels of these elements match some form of semantic relationship (such as generalization, synonymy, hyponymy, hypernymy or holonymy)
• Entity deduplication when multiple entries have the same meaning but different expressions. In some cases they are not considered synonyms, for instance, acronym and abbreviation resolution does not result in synonymous entries but rather in duplicate representations
• Processing of more complex phrasal structures like conjunctions/disjunctions, or combinations of the above (e.g. create invoice and send it to the manager). This may also include mining of ternary associations or relationships, as well as identifying possible coreferences
• Text normalization, such as having two sets of elements that differ only in syntactic structures. For instance, consider two sets of associated elements in the source model, Actor Administrator and Use Case Monitors instance, and Actor Administrator and Use Case Monitor instance. The only difference here lies in the present tense form of the verb monitors, where normalization to the infinitive form monitor would result in deduplication of output elements, and hence, a more clarified and concise output model. While this is a very straightforward and less likely scenario, more sophisticated cases may involve disambiguation of acronyms, or detection of missing words as well as grammatical errors. 
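To make the transformation step above concrete, the following sketch (our own illustration, not the authors' implementation) shows how a dragged Actor and its associated Use Cases could be turned into class names and association tuples. The split_label() helper is a hypothetical placeholder; in practice this is exactly where the NLP tooling discussed in the rest of the paper would be plugged in.

```python
from dataclasses import dataclass
from typing import List, Tuple

# A tiny illustrative set of verb particles; a real splitter would rely on
# POS tagging and chunking rather than a hard-coded list.
PARTICLES = {"back", "up", "out", "in", "off"}

@dataclass
class UseCase:
    name: str                      # e.g. "Return back item"

@dataclass
class Actor:
    name: str                      # e.g. "Customer"
    use_cases: List[UseCase]

def split_label(label: str) -> Tuple[str, str]:
    """Hypothetical placeholder for the NLP-based splitter: returns
    (verb phrase, noun phrase) for an activity-like label."""
    words = label.split()
    cut = 2 if len(words) > 2 and words[1].lower() in PARTICLES else 1
    return " ".join(words[:cut]), " ".join(words[cut:])

def transform_actor(actor: Actor):
    """Emit class names and (source, verb phrase, target) association tuples
    for an Actor element dragged onto a class diagram."""
    classes = {actor.name}
    associations = []
    for uc in actor.use_cases:
        verb_phrase, noun_phrase = split_label(uc.name)
        target_class = noun_phrase.capitalize()
        classes.add(target_class)
        associations.append((actor.name, verb_phrase.lower(), target_class))
    return classes, associations

if __name__ == "__main__":
    customer = Actor("Customer", [UseCase("Return back item"),
                                  UseCase("Fill in complaint form")])
    print(transform_actor(customer))
    # e.g. ({'Customer', 'Item', 'Complaint form'},
    #       [('Customer', 'return back', 'Item'),
    #        ('Customer', 'fill in', 'Complaint form')])
```

The interesting work is, of course, hidden inside split_label; the remainder of this paper is essentially about implementing that step reliably for real-world labels.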
Furthermore, we list the main NLP fields which could be applicable in this context in Table 1, together with our insights on their further applications in improving the quality of model-to-model transformations. Most of them will not be considered in this research, yet they are proposed as additional extension points for improving the final pipeline. Moreover, this table is also supplemented with the core techniques used to solve these problems; the table clearly indicates that deep learning techniques are the most widely researched and applied for these problems. For more extensive reviews of the techniques, as well as more discussions on their weaknesses or future prospects, we refer to recent NLP survey papers such as [76], [77], [78], [79], and [80]. Additionally, their performance can be significantly boosted after applying transfer learning with pretrained language models, such as BERT [39], ELMo [41], RoBERTa [81], ELECTRA [82], XLNet [83], T5 [84] or Microsoft's DeBERTa [85]. Therefore, from the technological point of view, one would need to consider the integration of deep learning-based techniques, which require certain technological constraints to be satisfied. This is the first work which tries to bridge these two fields by performing a thorough evaluation of the existing NLP implementations for processing short text labels, which is required in the context of model-to-model transformations. III. RELATION EXTRACTION-RELATED TEXT LABELING ANTI-PATTERNS In this section, we enumerate a set of modeling and element naming issues, which make the automated processing of labels in graphical models rather intricate. While certain modeling best practices are generally considered in modeling [86], [87], actual real-world cases tend to contain various issues (such as linguistic or modeling ambiguities) that make them very difficult to deal with using automated tools. Hence, if the processing of text labels created following best modeling practices could be considered as a relatively uncomplicated task (assuming that the tagging bias of the underlying implementation is not considered), significant deviations might easily complicate it. To identify the most common text labeling issues in graphical models, we used a large dataset provided by the BPM Academic Initiative (BPMAI) [88], which contained over 4100 real-world process models presented in BPMN notation. We excluded instances that did not meet certain requirements (e.g., all the elements in the models were named using single letters without any semantic meaning, or the text labels were not in English). Labels from the BPMN Task elements were extracted from the remaining models as one of the main objects of interest in our research. After analyzing the extracted labels, a set of naming anti-patterns for activity-like Task elements was formed (Table 2) together with examples and some heuristic rules for detecting these anti-patterns; in our opinion, the latter could be applied for the initial screening and filtering tasks in other types of graphical models as well. The detection rules are not formal in any way but can be used as guidelines to identify the cases of anti-patterns. Also, morphosyntactic analysis might have to be carried out to properly detect sophisticated cases of element naming anti-patterns in graphical models. 
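Table 2 itself is not reproduced in this excerpt, but to give a feel for the kind of informal screening rules described above, the following sketch (our own illustration, not the authors' rule set) flags a few hypothetical anti-pattern categories suggested by the examples quoted in this section: condition-like labels, conjunctive/disjunctive clauses, and letter-only or sentence-like names.

```python
import re

# Hypothetical screening heuristics inspired by the anti-pattern examples quoted
# in the text; the actual rules of Table 2 are richer and partly rely on
# morphosyntactic analysis rather than plain string matching.
CONDITION_WORDS = {"yes", "no", "ok", "done", "available", "not available"}
CONJUNCTION_RE = re.compile(r"\b(and|or)\b", re.IGNORECASE)
CHECK_IF_RE = re.compile(r"^check (if|whether)\b", re.IGNORECASE)

def screen_label(label: str) -> list:
    """Return a list of suspected naming anti-pattern flags for a Task label."""
    flags = []
    stripped = label.strip()
    words = stripped.split()
    if not words or all(len(w) == 1 for w in words):
        flags.append("empty or letter-only name")
    if (stripped.lower() in CONDITION_WORDS or stripped.endswith("?")
            or CHECK_IF_RE.match(stripped)):
        flags.append("condition/decision point used as an activity name")
    if CONJUNCTION_RE.search(stripped):
        flags.append("conjunctive/disjunctive clause")
    if len(words) > 10:
        flags.append("overly long, sentence-like label")
    return flags

for example in ["Available", "Check if available", "A",
                "Mark the invoice as invalid and return to customer",
                "Return back item"]:
    print(f"{example!r}: {screen_label(example) or ['no flags']}")
```

Heuristics of this kind are only suitable for initial screening and filtering; ambiguous cases still require the tagging and parsing techniques discussed in the following sections.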
Moreover, other elements representing subjects or entities (such as BPMN Lane, Pool elements) may contain invalid names as well, including multiple subjects, phrases, or some of the anti-patterns from Table 2. It is worth noting that some of the observed naming cases indicated invalid modeling practices, for instance, naming activity elements as conditions or decision points (e.g., Available, Yes, Check if available). Naming activities as whole triplets <actor-relationship-activity> is yet another quite common bad modeling practice used in modeling processes. The latter should be transformed into a combination of a BPMN Lane or Pool element with an activity-like element in it. One may also identify cases that combine multiple anti-patterns, for instance, the name of an activity may contain both conjunctive/disjunctive clauses relating multiple verb phrases into one text rumbling (e.g., Mark the invoice as invalid and return to customer), which increases the complexity of NLP tasks to a whole new level. Even though resolving conjunctive/disjunctive clauses is a challenging task, it can still be processed by using dependency parsing-based extraction, which is further addressed in Section V. IV. PHRASE EXTRACTION EXPERIMENT In this section, we evaluate the capabilities of the existing NLP tools to properly extract noun/verb phrases from the given text labels. This task is closely related to the relation extraction task, given its goal to extract tuples (verb phrase, noun phrase) from the given chunk of text that can further be used to construct semantic associative relations after combining with semantics from the source models (e.g., associative relationships between UML Use Case and Actor elements). Moreover, this task is important for successful model-to-model transformations because the extracted tuples are used to generate sets of elements for various target models or augment the existing models with additional elements. Further, we present basic aspects of our experimentation on extracting noun/verb phrases from the text labels extracted from the real-world dataset which contains BPMN process models and UML use case models; both types of these models contain activity-like elements which are subjects for specific processing. Section IV-A describes the preliminaries and setup of the experiment, while Section IV-B presents the evaluation methodology; in its turn, Section IV-C elaborates on the main findings in this experiment. A. EXPERIMENT SETUP Information extraction (and more specifically, relation extraction) is widely supported by multiple commercial and academic engineering efforts that provided multiple options for selecting the initial starting point for our research. While new techniques emerge frequently, they are based on the generally-available text corpora that do not provide the flexibility and specificity required to fulfill our goals. More specifically, our initial testing of such tools helped us to recognize the possibility of confusion in verb/noun recognition if the infinitive verb form is used -this is not handled correctly by generic POS tagger tools. On the other hand, the development of specialized datasets is usually challenging and time-demanding. Therefore, given the lack of specialized resources required for successful implementation, we chose to adopt and test existing tools by complementing them with additional extraction functionality and applying certain enhancements to the existing ones. 
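As a quick way to observe the verb/noun confusion mentioned above, one can run a generic, newswire-trained tagger over a few imperative-style labels. The snippet below (an illustration, not the authors' test harness) uses NLTK's default tagger; the exact tags obtained will vary by tool, model and version, which is precisely the ambiguity the custom taggers described next are meant to reduce.

```python
import nltk

# One-off downloads of the default English tokenizer/tagger models; resource
# names may differ slightly between NLTK versions.
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

labels = ["Review order", "Book flight", "Return back item", "Fill in complaint form"]

for label in labels:
    tokens = nltk.word_tokenize(label)
    # Words such as "Review" or "Book" are noun/verb ambiguous; a tagger trained
    # on declarative text may label them as nouns even though they are meant as
    # imperative verbs in an activity label.
    print(label, "->", nltk.pos_tag(tokens))
```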
Moreover, some of these toolkits provide implementations for a wide array of related problems, such as tokenization, POS tagging, lemmatization, syntactic analysis, dependency parsing, co-reference resolution, or relation extraction, which may significantly enhance the required pipelines. Additionally, some libraries provide other interesting tools; for instance, Stanford CoreNLP [89] also provides a natural logic annotator that enables quantifier detection and annotation, as well as CRF-based true case recognition, which is important for knowledge base acquisition and normalization and relates to the problems addressed in this work; while quantifier detection is not among such issues, it could be tested and integrated into future pipelines as well. Further, we list the set of implementations selected for our experimental implementation and evaluation (the final datasets, experimental code and results are available at https:):
• Stanford CoreNLP toolkit [89], which relies on conditional random field (CRF) implementations for performing both part-of-speech tagging and NER-related tasks.
• Stanford Stanza [90], which uses Bi-LSTM to implement components and pipelines for multiple NLP tasks such as tokenization, lemmatization, POS tagging, and dependency/constituency parsing.
• Flair [24], a toolkit by Zalando Research which applies pooled contextualized embeddings together with deep recurrent neural networks, and provides its own pretrained language models.
• AllenNLP [25], which relies on deeply contextualized ELMo embeddings based on a combined character-level CNN and Bi-LSTM architecture.
• BERT [39], one of the most dominant techniques in NLP at the moment of writing this paper, based on the transformer architecture and masked language modeling.
• ELECTRA [82], which is an improvement over BERT that, during model training, replaces tokens with plausible alternatives sampled from a generative network instead of using masked tokens. The main goal of the model is to predict whether the corrupted input was replaced with a generator sample. The ELECTRA authors show that this training task is more efficient than BERT's and that the final model is capable of substantially outperforming BERT in terms of model size, amount of computation and scalability [82].
The fact that these tools use different machine learning or deep learning approaches to solve NLP tasks has also motivated us to test their performance in the context of our approach. In this work, we use the BERT and ELECTRA implementations from the Hugging Face repository, which are already fine-tuned for part-of-speech tagging tasks. Additionally, we developed our own taggers that were biased towards the recognition of conflicting verb forms by augmenting the original text inputs with copies containing infinitive verb forms as replacements for the original ones; a similar approach was successfully applied in our previous work to improve the performance of a base CRF-based tagger [6]. The OntoNotes corpus [91] was used as the base data source due to its resemblance to the communication cases observed in graphical process and system models. For the reference implementation, we selected the Bi-LSTM-CRF architecture [37], which had been proven to be the best performing one at the time of writing.
It consists of a single input embeddings layer, a bidirectional LSTM hidden layer to process both past and future features, and a CRF layer at the output, which helps to improve tagging accuracy by learning and applying sentence-level constraints to simultaneously optimize the labeling output and ensure its validity. For our experimental purposes, we implemented two versions of our customized taggers:
• BERT-BiLSTM-CRF, which uses the original pretrained BERT embeddings at the input layer;
• ELMo-BiLSTM-CRF, which relies on ELMo embeddings at the input layer.
For training these models, the CRF (also known as Viterbi) loss, based on the maximization of the conditional probability, was used; for more details on its derivation, we refer to [37] and [92]. Moreover, the learning rate was set to 0.1, the hidden layer size was set to 128, and the early stopping parameter (terminating training if no further convergence is observed) was set to 10. The SimpleNLG library [93] was used to normalize the tense of verb phrases, while the NLTK toolkit [22] was used to implement text chunking over the POS tags obtained as output from the above-mentioned tools. Listing 1 presents the formal grammar, based on regular expressions (regex) over part-of-speech tags, which was used for noun/verb phrase extraction. It relies upon the Universal Dependencies scheme [94], which is used by Spacy, Flair, and Stanza. Here, NP defines a noun phrase, VP a verb phrase, and PNP a proper noun phrase. As the Stanford CoreNLP and pretrained ELECTRA implementations use the Penn Treebank notation for their POS tagger output, the grammar is adjusted for their cases (Listing 2); here, additionally, ADP defines an adposition, and ANP a partial noun phrase, which is further used as a building block in NP extraction. The datasets used during the experiments were obtained after pre-processing a relatively large number of BPMN process and UML use case models, obtained from various sources. The final experimentation set of such models consisted of:
• 32 BPMN process models and 25 UML models that were freely collected from the Internet;
• a large sample of preprocessed and cleansed BPMN process models, which were selected from a large set of Signavio BPMN models provided by the BPM Academic Initiative [88].
The acquired final set of models was processed, and the names of Task elements (for BPMN process models) and Use Case elements (for UML use case models) were extracted for experimentation. It was expected that Task and Use Case elements would contain at least one verb or verb phrase, and one noun or noun phrase. The extracted elements were cleaned of semantic inconsistencies, grammatical errors, invalid names, and common modeling errors, as well as filtered to exclude the invalid practices listed in Table 2. At this stage, we also excluded entries containing multiple verb phrases in their names (e.g., conjunctive/disjunctive clauses), as the recognition of such structures was not a part of this experiment (it is addressed later in Section V). However, having a single verb phrase with multiple noun phrases in conjunctive or disjunctive form was considered processable and would result in multiple valid tuples of target transformation outputs. After performing the aforementioned steps, we obtained a dataset of 4044 valid entries that were then used to manually extract verb phrase and noun phrase pairs. The whole extraction procedure was performed by the authors of this paper.
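As an illustration of this chunking step, the following sketch shows a simplified regex-over-POS grammar applied with NLTK's RegexpParser. The rules shown are assumptions in the spirit of Listing 1 and do not reproduce the actual grammars of Listings 1 and 2, and the hand-written POS tags stand in for the output of the taggers discussed above.

```python
import nltk

# Simplified chunk grammar over Universal Dependencies POS tags (illustrative only).
GRAMMAR = r"""
  NP: {<DET>?<ADJ>*<NOUN|PROPN>+}    # e.g. "the purchase order"
  VP: {<AUX>?<VERB><PART>?}          # e.g. "send", "check out"
"""
chunker = nltk.RegexpParser(GRAMMAR)

# The POS tags would normally come from Stanza, Spacy or Flair; here they are given by hand.
tagged_label = [("register", "VERB"), ("new", "ADJ"), ("customer", "NOUN")]
tree = chunker.parse(tagged_label)

for subtree in tree.subtrees(filter=lambda t: t.label() in ("NP", "VP")):
    print(subtree.label(), " ".join(word for word, tag in subtree.leaves()))
# -> VP register
# -> NP new customer
```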
These manually extracted pairs were set as the "golden standard" used to validate the outputs acquired from the automatic extraction with the selected extractors. Hence, the final dataset included 328 instances having no verb phrases and 3716 instances containing both verb and noun phrases.

B. EVALUATION METHODOLOGY
The developed extractors were evaluated in terms of accuracy, precision, recall, and F-measure, which measured their ability to match the acquired outputs to the "golden standard" outputs. In our experiment, two different aspects were taken into consideration:
• whether the extractor successfully determined that the label contained one or more noun/verb phrases that had to be extracted (in case no particular phrase was found, the output would be empty);
• whether the extractor successfully extracted the required verb phrases or noun phrases. Note that it was required to evaluate whether both verb phrases and noun phrases were successfully extracted. In cases where multiple phrases were marked as an output, strictly all of them had to be present in the output for it to be marked as correct.
Extraction accuracy is defined as the ratio of correctly extracted verb/noun phrase instances (together with empty outputs when such instances were absent) to the total number of entries:

accuracy = (number of correctly extracted instances) / (total number of instances).

Precision is defined as the ratio of correctly extracted concepts to the number of total extracted concepts, whereas recall is the ratio of correctly extracted concepts to the number of correct concepts:

precision = (number of correctly extracted concepts) / (number of total extracted concepts),
recall = (number of correctly extracted concepts) / (number of correct concepts).

The F1-measure (also referred to as F1-score) is defined as the harmonic mean of these two measures:

F1 = 2 · precision · recall / (precision + recall).

C. EXPERIMENT RESULTS
The results of the experimental extraction of verb phrases and noun phrases from the names of activity-like elements are presented in Table 3. It depicts both the results of detecting whether a given entry had particular types of phrases and the performance of extracting these phrases from the respective entries. The obtained results indicate that the extractor based on the RNN-based Stanza tagger outperformed the CNN-based and CRF-based tools (Spacy and CoreNLP, respectively) in solving our problem. Extraction using Stanza's Bi-LSTM-based tagger showed the best performance in two tasks, while the Flair-based tagger was the second best. The extractor based on our custom BERT-BiLSTM-CRF tagger outperformed the other implementations in detecting verb phrase presence and in verb phrase extraction. Moreover, both custom taggers also showed improvements over their generic versions, i.e., ELMo-BiLSTM-CRF resulted in better performance than the original AllenNLP ELMo, and BERT-BiLSTM-CRF proved to perform better than the BERT-based POS tagger. This is quite optimistic, considering the size and specificity of the dataset. However, some caution should be taken when interpreting these results, given that our custom-trained tagger was biased towards the identification of infinitive forms of conflicting verbs. This implies that in some other cases it could fail to correctly tag words that were handled correctly by the taggers trained on conventional corpora; this was initially confirmed in our previous research applying similar principles to train custom POS taggers [6].
Therefore, more attention should be given to improving and tuning the custom taggers applied in this research, as well as to finding an optimal balance between an increase in performance for verb detection and a possible decrease in other tasks that are performed better by generic POS taggers. Nevertheless, the results of the leading extractor (based on the Stanford Stanza toolkit) are quite encouraging: the achieved F1-Score was above 0.8 in most of the performed evaluation tasks, especially given the limitations and the level of unavoidable ambiguity in the testing dataset. One of the main challenges in this particular case is the fact that the corpora currently available for training, like OntoNotes [91] or the English Web Treebank [95], are better suited to working with whole documents than to the analysis of short texts and, therefore, do not represent the specificity addressed in this paper. We tried to mitigate this issue with additional augmentations of the input text, which resulted in certain performance improvements; developing text corpora better adjusted to this specific task would certainly help to improve performance even further.

V. PARSING CONJUNCTIVE/DISJUNCTIVE STATEMENTS
While the techniques described in Section IV proved their efficiency in the extraction of verb phrases and noun phrases, the tools we experimented with in the phrase extraction task are not capable of processing the more complex examples discussed in Section III when applied directly - the conjunctive/disjunctive statements are a good example of that. The complexity can be illustrated with the following examples, which depict multiple cases of conjunctive statements (disjunctive statements may be formulated almost identically):
• check dates and suggest modifications - the statement includes the conjunction "and";
• consult project, check progress - the statement does not include a direct conjunction, but it is inferred;
• receive invoice, packing slip, and shipment from supplier - multiple nominal subjects are related to the single verb receive;
• calculate and send price offer - contains a single nominal subject that has dependencies on multiple verbs.
Obviously, the presented examples are not the most sophisticated text labels one could find in real-world models. This is not surprising due to the well-known fact that natural language is one of the most complex objects there are for automated machine processing. It is worth noting that the topic of processing conjunctive/disjunctive statements is not widely researched, although it has received some attention from researchers working on sentence simplification [96] or on detecting the boundaries of the whole conjunction span [97]. Also, many works on sentence simplification rely upon parse trees [15], [98], [99], which is in line with our research. In Section V-A, we provide an algorithm based on dependency parsing, which is used to extract pairs of noun/verb phrases from conjunctive/disjunctive statements. Section V-B describes an experimental setup using a real-world dataset consisting of conjunctive/disjunctive phrases that are then processed using the proposed solution. Finally, Section V-C provides the evaluation results, a discussion, and some ideas for our future research.

A. ALGORITHM FOR EXTRACTING NOUN/VERB PHRASES FROM CONJUNCTIVE/DISJUNCTIVE PHRASES
Further, we briefly describe a dependency parsing-based algorithm for extracting pairs of noun phrases and verb phrases from conjunctive/disjunctive phrases (see Algorithm 1).
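Before the formal description, a rough sketch of the underlying idea is given below: pair each verb with the noun phrases it governs through object-like dependencies, and propagate pairs across conjoined verbs and conjoined nouns. This is an illustrative approximation built on spaCy (it assumes the en_core_web_sm model is installed), not the paper's Algorithm 1, which is described next in terms of phrase-span indexes built on top of the chunking grammar.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def noun_phrase(tok):
    # keep simple left-hand modifiers, e.g. "packing slip", "price offer"
    mods = [c.text for c in tok.lefts if c.dep_ in ("det", "amod", "compound")]
    return " ".join(mods + [tok.text])

def objects_of(verb):
    objs = [c for c in verb.children if c.dep_ in ("dobj", "obj", "pobj", "dative")]
    i = 0
    while i < len(objs):                      # conjoined nouns share the verb
        objs += [c for c in objs[i].children if c.dep_ == "conj"]
        i += 1
    return objs

def extract_pairs(label):
    doc = nlp(label.lower())
    pairs = []
    for tok in doc:
        if tok.pos_ != "VERB":
            continue
        objs = objects_of(tok)
        if not objs:                          # conjoined verbs share the object
            partners = ([tok.head] if tok.dep_ == "conj" else []) + \
                       [c for c in tok.children if c.dep_ == "conj" and c.pos_ == "VERB"]
            for p in partners:
                objs = objects_of(p)
                if objs:
                    break
        pairs += [(tok.lemma_, noun_phrase(o)) for o in objs]
    return pairs

print(extract_pairs("check dates and suggest modifications"))
print(extract_pairs("calculate and send price offer"))
```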
The input is the parsed and tagged document D; hence, the algorithm requires a part-of-speech tagger and a dependency parser as part of its processing pipeline. We define D_TOK as the set of tokens that constitute document D, together with the parsing and tagging output. Further, this document is also processed to create noun phrase spans (denoted as S_NP) and verb phrase spans (denoted as S_VP) by using predefined grammars (such as the ones presented in Section IV-A). Later, we use correspondence indexes Ind_NP and Ind_VP to map each token in the document to a corresponding noun phrase or verb phrase. These indexes enable traversing dependency relationships at the phrase level and, at the same time, reduce the ambiguity that is observed after using different dependency parsers. We denote the head of the dependency relationship originating from token tok as Dep_Head(tok), and its end as Dep_End(tok). Finally, we denote by GET the operation which retrieves an entry from an index, given its indexing value. The syntactic dependencies are expected to be labeled using the Universal Dependencies format [94], in particular DOBJ as the direct object, OBJ as the object, POBJ as the object of a preposition, and CONJ as the conjunction. The output of this algorithm is a collection of tuples of verb phrases and noun phrases. It is expected that the input contains both nouns and verbs; otherwise, tuples with empty values instead of the verb or noun phrases can be returned as a result.

B. EXPERIMENT SETUP
To evaluate our approach, we extracted a dataset of 410 entries acquired from the same set of process models which was used in our phrase extraction experiment. The final dataset comprised only those text labels that included at least one conjunctive or disjunctive clause. Then we manually extracted all available verb/noun phrase parts to create a "gold standard" dataset to be used as a reference point for our evaluation. The algorithm presented in Section V-A was implemented as a separate module without any text normalization capability. To perform comparative testing, we implemented the module in Python using Spacy, and extended it to use Stanford Stanza, due to its flexible integration with the Spacy framework, to enable comparing the dependency parsing capabilities of these toolkits. Again, for the evaluation, we used metrics like the ones described in Section IV-B, that is, accuracy, precision, recall, and F1-Score. Here, accuracy is defined as the ratio of correctly processed entries to the total number of entries. Note that this is a very strict measure, as it considers an extraction valid only if all noun/verb phrase pairs were extracted correctly. However, the technique is capable of generating a larger or smaller number of entries compared to the actual outputs.
To address this issue and provide an evaluation of partially correct outputs, we defined two additional metrics to evaluate the performance in terms of the number of generated output instances:
• the mean deviation between the number of extracted outputs and the benchmark outputs:

MeanDiff = (1/n) Σ_{i=1..n} |#_actual^i − #_extracted^i| ;

• the mean Sørensen-Dice coefficient, which is used to evaluate the average similarity between the actual and extracted instance sets:

MeanSDC = (1/n) Σ_{i=1..n} 2 |O_actual^i ∩ O_extracted^i| / (|O_actual^i| + |O_extracted^i|).

Here, n is the total number of processed entries in the dataset; O_actual^i is the benchmark set of verb phrase/noun phrase pairs extracted for the i-th dataset entry; O_extracted^i is the set of output elements extracted for the i-th dataset entry; #_actual^i and #_extracted^i represent the number of elements in O_actual^i and O_extracted^i, respectively.

Algorithm 1. Extraction of (verb phrase, noun phrase) tuples
Input: the parsed and tagged document D
1: results ← ∅
2: for all tok ∈ D_TOK do
3:   if Dep_End(tok) ∈ (DOBJ, OBJ, POBJ) then
4:     results ← results ∪ (GET(Ind_VP, Dep_Head(tok)), GET(Ind_NP, Dep_End(tok)))
5:   else if Dep_End(tok) = CONJ then
6:     Ind_POS ← index of POS tags and tokens for the conjuncts in Dep_End(tok)
       {Assume pattern <VERB>, <VERB> and <VERB> <NOUN>}
7:     if |GET(Ind_POS, NOUN)| = 1 and |GET(Ind_POS, VERB)| > 1 then
8:       noun ← GET(Ind_POS, NOUN)
9:       for all verb ∈ GET(Ind_POS, VERB) do
10:        results ← results ∪ (GET(Ind_VP, verb), GET(Ind_NP, noun))
       {Assume pattern <NOUN> <VERB>, <VERB> and <VERB>}
11:    else if |GET(Ind_POS, VERB)| = 1 then
12:      verb ← GET(Ind_POS, VERB)
13:      for all noun ∈ GET(Ind_POS, NOUN) do
14:        results ← results ∪ (GET(Ind_VP, verb), GET(Ind_NP, noun))
15:    else if |GET(Ind_POS, NOUN)| > 1 then
16:      for all noun ∈ GET(Ind_POS, NOUN) do
17:        results ← results ∪ (GET(Ind_VP, LEFTMOSTVERB(noun)), GET(Ind_NP, noun))
Output: the set of (verb phrase, noun phrase) tuples results

C. EXPERIMENT RESULTS
The results of the experiment are presented in Table 4. They summarize the performance of both the Spacy and Stanza models. The obtained results prove the influence of the underlying dependency parser: the implementation based on the Stanza toolkit significantly outperformed the Spacy-based implementation. Unfortunately, the extraction accuracy score for both implementations was very low, showing that these implementations failed to extract all the expected verb/noun phrase pairs from each given input text; this is also reflected in the relatively high values of MeanDiff and MeanSDC. Moreover, precision, recall, and F1-Score, which are calculated at the macro level, show that the results at this level are not disappointing; yet, both implementations of the algorithm and the underlying technology could still be improved in the future. Here, the experimental implementation resulted in an F1-Score of 0.631, although we must also take into consideration the influence of sample bias. The significance of the underlying parse model was also obvious, as the Stanza-based processor significantly outperformed the implementation based on Spacy. Again, all the mandatory pipeline steps - text tagging, text chunking into noun/verb phrases, and dependency parsing - have proven to be crucial to the overall quality of phrase processing. A failure in any of these steps inevitably translates into errors in the subsequent steps of the developed pipeline. Therefore, we can safely conclude that the dependency parser plays the most important role of all.
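For completeness, the two set-level metrics defined above reduce to a few lines of code. The sketch below assumes that, for each entry, both the benchmark pairs and the extracted pairs are represented as Python sets of (verb phrase, noun phrase) tuples; the example values are purely illustrative.

```python
def mean_diff(actual, extracted):
    # mean absolute deviation between the number of benchmark and extracted pairs per entry
    return sum(abs(len(a) - len(e)) for a, e in zip(actual, extracted)) / len(actual)

def mean_sdc(actual, extracted):
    # mean Sorensen-Dice coefficient between the benchmark and extracted pair sets
    return sum(2 * len(a & e) / (len(a) + len(e)) for a, e in zip(actual, extracted)) / len(actual)

gold = [{("check", "dates")}, {("calculate", "price offer"), ("send", "price offer")}]
pred = [{("check", "dates")}, {("send", "price offer")}]
print(mean_diff(gold, pred), mean_sdc(gold, pred))   # 0.5, 0.833...
```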
The importance of the parser was clearly visible in the experimental cases where insignificant changes to the input entry (e.g., adding an adjective to one of the nouns) resulted in completely different parse trees compared to the initial ones; this complicated the analysis significantly or even resulted in cases not covered by the used formal grammar. This indicates the need for more extensive research and improvements in both the extraction and dependency parsing areas. We believe that this could be achieved by integrating and testing recent developments in dependency parsing based on neural techniques, as described in [51] and [52], among others.

VI. ACRONYM/ABBREVIATION DETECTION
Acronym/abbreviation detection is a text normalization task which deals with multiple issues and ambiguities in deciding whether a given word in the text is an abbreviation or an acronym. While many cases can be handled by simply searching for a particular candidate's expanded form in the text, or by performing a search in dictionaries and word lists, this is not trivial when it comes to widely used acronyms. The first issue is that these acronyms/abbreviations might be present in dictionaries and, at the same time, overlap with some general words (e.g., the acronym IT overlaps with the pronoun it); another common issue is the omission of the expanded form of an acronym/abbreviation due to its widespread use, which makes it almost impossible to automatically identify it as an acronym/abbreviation of some particular phrase with simple backtracking in the input (the aforementioned acronym IT can be seen as an example in this context as well). Acronym/abbreviation expansion is yet another, similar task, aiming to solve the problem when a given abbreviation or acronym should be replaced by its expanded form which is the most appropriate in the given context. This task is not a trivial one either - for instance, EM could refer to entity matching; however, it could also be expectation maximization or entity model, with all these expanded forms coming from a single computer science domain. Unfortunately, current research tends to focus on long text passages, which highly reduces its applicability in the context of our research. In model-to-model transformation, as well as in other relevant topics, the acronym/abbreviation (A/A) detection task helps one to properly match full concept names with their abbreviated forms, thus adding to greater consistency of the models being developed. The A/A detection task itself comprises two interrelated subtasks:
• A/A detection seeks to detect candidate A/A which must be expanded (what must be replaced?);
• A/A expansion is focused on finding the right expansion for the given A/A (what is the replacement?).
A/A expansion is often considered as a simple expansion of entries that are identified as A/A due to their writing style or their absence from relevant sources, like thesauri or dictionaries. While simple A/A mapping lists are generally applied for common text normalization tasks, they may not always provide the correct result, unless they are restricted to having single meanings in specific or even multiple contexts. Therefore, real-world use cases may easily complicate this seemingly uncomplicated task. The complexity of the task may also rise depending on the diversity of the corpus or data required to properly train an implementation able to resolve such cases.
The expansion problem will not be further addressed in this paper due to certain limitations of the dataset. While recent developments in acronym detection tend to apply state-of-the-art deep learning techniques (as stated in Table 1), they are not applicable in our context due to the relatively short text input. Therefore, we model this problem in a more traditional yet efficient way by applying context-based classification within a space of contextual, morphological, and linguistic features. While a similar approach was successfully tested in [56] and [100], we propose using a different set of features, preferred due to data limitations. The target variable of the classifier is simply an indicator of whether the particular word represents an acronym or abbreviation. Further in this section, we provide an empirical evaluation of A/A detection in BPMN element names. To make it consistent with the other experiments presented in this paper, we use the same initial set of BPMN process and UML use case models as in the experiment presented in Section IV. Hence, Section VI-A describes the preliminaries and setup of the experiment, while Section VI-B presents and briefly discusses the results obtained during that experiment.

A. EXPERIMENT SETUP
The initial dataset of process models was used as the source for developing the feature dataset for our A/A detection experiment. The feature dataset was created from all the available words in the extracted text by applying simple heuristic rules:
• An acronym or abbreviation must contain at most 5 characters. It can be observed that the longer a word is, the smaller the probability of it being an acronym. Therefore, words with more than the predefined number of characters are not considered to be acronyms and are excluded from further analysis.
• The word representing an acronym or abbreviation is not available in the dictionary. Since WordNet does not contain all English words and their forms, we used the Enchant library, which is generally used for grammatical error correction, to check for word existence.
The first rule helped to identify the candidate entries for the feature dataset, and the entries longer than the predefined length threshold were not considered as candidates for acronyms and abbreviations. The second rule helped to perform the primary labeling. After the automated generation of the dataset, some manual adjustments were performed: fixing automated labeling errors and ambiguities, removing redundant and duplicate entries, as well as identifying situations that were not covered by the above-listed heuristics and could not be handled automatically - all this was done to make the feature dataset more consistent and suitable for the development of our detection classifier. Examination of the feature dataset also showed that most of the acronyms were written in uppercase, which helped to simplify the semi-automated labeling task. To avoid feature leakage, we removed the uppercase-word feature, as it would otherwise serve as a proxy for the label (in practical applications, it might serve as a very strong indicator of acronym presence). To perform the POS tagging required for the POS-based feature generation, we used the Stanford Stanza tagger, which showed the best performance in our previous experiment presented in Section IV-C. After performing the feature generation procedure, a feature dataset with a total of 16579 entries was created.
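The two screening heuristics above could be implemented along the following lines; this is a minimal sketch that assumes the pyenchant package with an en_US dictionary is available, and it takes the 5-character threshold from the first rule. Note that widely used acronyms overlapping dictionary words (e.g., IT) still pass through such screening, which is precisely why the classifier described below is needed.

```python
import enchant

dictionary = enchant.Dict("en_US")
MAX_AA_LENGTH = 5   # heuristic threshold from the first rule

def is_aa_candidate(word: str) -> bool:
    """Rough screening: short alphabetic tokens that are absent from the dictionary
    are kept as acronym/abbreviation candidates for the feature dataset."""
    token = word.strip()
    if not token.isalpha() or len(token) > MAX_AA_LENGTH:
        return False
    # the lookup is case-sensitive, so check both the raw and the lowercased forms
    return not (dictionary.check(token) or dictionary.check(token.lower()))

for w in ["invoice", "CRM", "approx", "it", "IT"]:
    print(w, is_aa_candidate(w))
```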
Each entry in the dataset was a vector of 16 features extracted from the text labels of the BPMN process and UML use case models, together with the label indicating whether the word represents an acronym or an abbreviation. The full set of features is presented in Table 5. The features has.special and long.char.seq were excluded from further analysis, as the final dataset did not contain any such entries. Nonetheless, these features could be useful when performing further research with more extensive datasets and/or contexts, and thus they are included in Table 5 along with the other features as a reference for future consideration. This left us with 14 features that were further used as the inputs for the classifier. For the development of the acronym detection classifier, the following techniques were considered:
• CatBoost [101], a high-performing gradient boosting classifier. One of its most distinctive features is the ability to work directly and efficiently with categorical feature variables, which helps to improve performance when numerous categorical features are used.
• XGBoost [102], one of the best performing gradient boosting-based ensemble classifiers, widely used to solve various classification tasks.
• Random Forest [103], [104], a widely used decision tree ensemble technique based on bagging and random feature selection.
To handle the high level of class imbalance in the input dataset, weighted classification was applied to improve detection performance. Also, grid search was used to optimize the performance of CatBoost and XGBoost by selecting their optimal hyperparameters. The Random Forest classifier was run with default parameters, but using 200 estimators. All the classifiers were implemented in Python using the scikit-learn, catboost and xgboost libraries. As in the experiments presented in Section IV-B, accuracy, precision, recall, and F1-score were used as the performance measures.

B. EXPERIMENT RESULTS
Figure 1 presents the results obtained using the classifiers described in Section VI-A. They show that CatBoost significantly outperformed the Random Forest classifier and slightly outperformed the XGBoost classifier in terms of precision and F1-Score. This is not surprising, due to the design of the CatBoost tool and its ability to work directly with categorical variables. Its superiority over the XGBoost classifier was also confirmed by McNemar's test, which resulted in p < 0.05 (p = 0.029). Table 6 also provides an insight into the feature importance obtained using the CatBoost classifier. The results indicate that the morphological features of the tokens next to the target word were identified as the most important, whereas the presence of a particular word in an English dictionary or a similar referential source played a less influential role, as expected. One of the reasons for this is the fact that abbreviations are usually created by the people who create the models and write the documentation (e.g., business/system analysts). These people either create various acronyms and abbreviations themselves, or they use already established A/A to make the text more compact (compact text labels are particularly relevant in visual modeling). Contextual part-of-speech features seem to play an important role as well, because they capture acronym usage patterns in spoken or written language; this is also supported by the high importance of the features of the preceding tokens, as well as of the more distant contextual features. This suggests testing even wider context features (like prev.pos3, next.pos4, etc.); however, such features are not considered in this paper due to the limited size of the processed text phrases.
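A minimal sketch of this classification setup is shown below. The synthetic data, parameter grids and weighting scheme are illustrative assumptions only; the actual experiment used the 14 features of Table 5 (several of them categorical, which CatBoost handles natively) and its own tuned hyperparameters.

```python
from catboost import CatBoostClassifier
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, precision_score, recall_score
from sklearn.model_selection import GridSearchCV, train_test_split
from xgboost import XGBClassifier

# stand-in for the 14-feature A/A dataset (heavily imbalanced, as in the experiment)
X, y = make_classification(n_samples=5000, n_features=14, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.2, random_state=0)
neg, pos = (y_tr == 0).sum(), (y_tr == 1).sum()

models = {
    "catboost": GridSearchCV(
        CatBoostClassifier(class_weights=[1.0, neg / pos], verbose=0),
        {"depth": [4, 6], "learning_rate": [0.03, 0.1]}, scoring="f1", cv=3),
    "xgboost": GridSearchCV(
        XGBClassifier(scale_pos_weight=neg / pos, eval_metric="logloss"),
        {"max_depth": [4, 6], "learning_rate": [0.03, 0.1]}, scoring="f1", cv=3),
    "random_forest": RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name, precision_score(y_te, pred), recall_score(y_te, pred), f1_score(y_te, pred))
```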
Alternatively, one might consider sequence-tagging models (such as Markov models or recurrent neural networks) that apply such context directly, yet their training would require larger datasets and the inclusion of an even greater number of additional (lexical and morphological) features. Emerging deep learning approaches, such as [58] or similar, seem to be a viable solution as well, although their training might require a significant amount of labeled data, and their applicability to the given problem must be verified.

VII. DISCUSSION
With the experiments described in this paper, we explored the capabilities of advanced NLP tools to process the short text fragments (text labels) which must be handled to enable advanced capabilities in our model-to-model transformations. While this work was inspired by our previous research [6], [7], we believe that the presented results could be applicable in other relevant fields as well. Similar text normalization is required in practical process mining, where the names of the composing elements need to be unified across multiple data sources while reducing the number of duplicates to a minimum. It is also applicable in conversational intelligence, where intent processing is required to identify the responsive action for an inquiry. The experiments show that the recent developments in the fields of NLP and deep learning can provide the tools needed to solve such and similar problems. Overall, the experiments presented in this paper revealed several issues which should be addressed and might need to be handled separately:
• Bad modeling (in particular, element naming) practices were not considered in the extraction activities. During the initial dataset screening, we observed many such cases, which are summarized in Table 2. Detecting the most common bad modeling practices and introducing automated resolution of such cases into the developed solution could further improve the results of automated processing.
• A more thorough analysis of the outputs showed that some tagging tools, like Spacy, were quite sensitive to letter casing, which is also significant for the practical application of NLP technology in model-to-model transformations as well as in other relevant fields. While this is less relevant when processing long text passages or whole documents, its importance increases when more specific text processing is considered. This stems from the different modeling styles of practitioner modelers, who often prefer starting each word with a capital letter when naming model elements such as activities, tasks, use cases, etc. (this is confirmed by the analysis of the BPMAI dataset used in our research, as well as by our personal experience), and some tools may fail to tag such labels correctly. For example, return invoice could be tagged as <VERB><NOUN>; however, Return Invoice might as well become <NOUN><NOUN>, which would be an incorrect tagging result. In our related experiments, we converted all text labels to lowercase to mitigate this problem. Unfortunately, such normalization might remove relevant features that could otherwise be used to detect abbreviations.
• The previous issue is also relevant for other related problems.
While such cases could be normalized to lowercase, doing so increases the risk of failure in other tasks, like named entity recognition, where capital letters play a crucial role. Moreover, NLP tools may face difficulties detecting named entities within fully lowercase entries (e.g., United States was identified as LOCATION, while united states was not).
• Detection performance can be negatively affected by the presence of non-alphanumeric symbols (e.g., dashes, commas, apostrophes) within words. It is advisable to remove such symbols from the model element names wherever possible. This issue might be mitigated by using more advanced tokenizers capable of handling most of these cases, but the risk of failing to handle them properly still exists.
• Generally, using conjunctive/disjunctive clauses in activity-like element names indicates a bad modeling practice, as such instances should be refactored into two or more atomic elements. As stated previously, processing such statements appeared to be a very challenging task requiring the support of several advanced NLP techniques, such as dependency or constituency parsing. In its turn, this brings in other kinds of errors from the underlying parser model.
• In our experimentation, we observed a general ambiguity in detecting abbreviations. The A/A detection experiment confirmed the applicability of a machine learning-based approach to handling this problem. Yet, A/A expansion is a more complicated task, as the full forms of concepts designated by A/A might not be present in the models under scope, especially if those A/A are well-known and heavily used (e.g., IT, USA). External sources, such as domain vocabularies and linked data, can be applied by matching them contextually to each model instance containing acronyms and abbreviations. Again, this requires additional sources of input data, together with a more extensive dataset, and can be considered as one of the directions for our future research.

VIII. CONCLUSION
The NLP discipline has seen impressive advancements and improvements during the last several years, with the number of NLP applications increasing dramatically. Also, the progress in deep learning has resulted in a significant increase in the performance of solving different linguistic tasks. In this paper, research on applying these recent developments to processing short text phrases is discussed. While the need for this research originated from our recent work on model-to-model transformations [6], [7], we may identify several other areas that could benefit from similar text processing capabilities, such as process mining, aspect-based sentiment analysis or conversational interfaces with command-like short text processing capability. At the same time, all these areas share the same NLP-related issues that have to be dealt with to ensure satisfactory performance of the underlying NLP technology (e.g., identical representation of verbs and nouns, lack of the context required for automated processing). In this paper, we addressed the problem of extracting relation tuples from process and system requirements models containing elements expressing activity-like statements. As stated in Section III, this is not an easily solved problem, due to multiple ambiguities, applied modeling practices, and many other issues that are not addressed in common NLP processing toolkits.
Among such issues, one may emphasize the processing of disjunctive or conjunctive statements (which is considered a bad modeling practice) and the presence of shortened forms, like acronyms or abbreviations. To solve the issues addressed in this paper, we evaluated several current state-of-the-art implementations from the perspective of our research, combining them with our custom formal grammar-based extraction to derive prototype implementations. Additionally, we implemented and tested our custom tagging tools, based on input corpora augmentations and a bidirectional LSTM-CRF architecture with BERT and ELMO embeddings at the input layer. In the first experiment, the Stanza-based implementation showed the best performance in the noun/verb extraction tasks. Yet, we showed that the implementation based on our custom BERT-BiLSTM-CRF tagger helped to improve the detection of verb phrase presence and verb phrase extraction compared to the generic tagger implementations, including the generic BERT-based tagger. This was expected, as a bias towards the proper tagging of verbs could reduce the ability to correctly tag nouns in short text statements. Hence, balancing between biased and unbiased tagging still requires further research. Our second experiment, on processing disjunctive and conjunctive statements, showed this task to be more challenging than expected, due to the dependence of our implementation on the performance of the underlying dependency parser toolkits. Unfortunately, while such statements are also considered a bad modeling practice, they are widely used in real-world cases (this is also verified by the initial analysis of the BPMAI dataset) and need to be addressed carefully. This is an important topic relevant to multiple information extraction and other NLP-related areas, such as relation extraction or aspect-based sentiment analysis. It has proven to be a complicated task due to the generally unstructured nature of natural language texts. The handling of these issues is also discussed in this paper, providing additional insights for further improvements in this area. The results obtained after applying our technique described in Section V-C indicate that there is still a lot of potential for further improvements. While at this stage we did not consider training custom parsers, we hope to achieve more progress in the future after carrying out more extensive studies and taking advantage of the improvements in dependency parsing, constituency parsing, and general relation extraction algorithms. Finally, in the third experiment, we tested a machine learning-based approach to the acronym/abbreviation detection issue. While this issue is widely discussed in multiple papers (see Table 1 for more details), these works tend to focus on processing longer text statements or even whole documents, which is not suitable for our particular case. Due to the limitations discussed in the previous sections, we approached this issue by applying context-based classification using token-level and text label-level features. We found that our trained classifier was able to obtain a precision of 0.78 and an F1-Score of 0.73, which we consider to be a rather positive result given the multiple constraints and limitations. In the future, we might also test the developed solution in other settings by expanding the developed dataset to include more specific cases.
The results are expected to improve after applying the classifier to a more extensive and comprehensive dataset, which would also allow exploiting additional token-level, phrase-level, or even whole-model-level features; this is still subject to our further research. In this paper, we did not consider acronym/abbreviation expansion, due to certain limitations and requirements discussed in Section VI. Yet, it is an interesting challenge that will be addressed in our future developments. While our research presents a certain contribution to text processing for the system modeling domain, there is still a lot of space for future research. In this paper, we experimented with text labels of activity-like elements acquired from BPMN process models and UML use case models. However, other models, like UML activity models or state machines (or other kinds of statechart models), could also be successfully tested. Moreover, applying these techniques to larger and more elaborate datasets might reveal other cases that could be addressed by tuning the formal grammars or processing algorithms discussed in this paper. Additionally, one could also resort to creating specialized datasets or text corpora, which would enable the development of even more specialized extraction tools. Complementarily, several technological constraints should be addressed, particularly the optimization of the final models for deployment, due to the significant amount of resources needed to run larger deep learning models. This may require the investigation of model reduction techniques such as distillation or quantization. Finally, it is safe to state that in model-to-model transformation (as well as in other areas involving the processing of graphical models), one could also benefit from other existing NLP capabilities, such as the extraction of semantic relationships (synonymy, hyponymy, hypernymy, etc.) and the analysis and correction of grammatical errors. Indeed, fully automated processing requires significant input and capabilities from multiple fields of linguistic processing to ensure the high performance of the developed NLP applications, as discussed in Table 1. This paves the road for our next near-future developments and experimentation.
The initial physical conditions of the Orion BN/KL fingers

Orion BN/KL is an example of a poorly understood phenomenon in star forming regions involving the close encounter of young stellar objects. The explosive structure, the great variety of molecules observed, the energy involved in the event and the mass of the region suggest a contribution to the chemical diversity of the local interstellar medium. Nevertheless, the frequency and duration of other events like this have not been determined. In this paper, we explore a recent analytic model that takes into account the interaction of a clump with its molecular environment. We show that the wide spread in the kinematic ages of the Orion fingers -- 500 to 4000 years -- is a consequence of the interaction of the explosion debris with the surrounding medium. This model satisfactorily explains the age discrepancy of the Orion fingers, and infers the initial conditions together with the lifetime of the explosion. Moreover, our model can explain why some CO streamers do not have an associated H$_2$ finger.

INTRODUCTION
Orion BN/KL is a complex massive star formation region that is associated with an explosive event that occurred some 500 years ago. In particular, it contains around 200 filamentary structures seen in H$_2$ emission, known as the Orion fingers, which could have been formed by the close encounters of young stellar objects (Zapata et al. 2009; Bally et al. 2011, and references therein). The most accepted interpretation of these fingers is that they were formed by the interaction of high velocity gas clumps with the environment. We will adopt this interpretation. The age of the event has been determined by several authors using different techniques. Bally et al. (2011) analyzed the projected positions and velocities of the heads of the H$_2$ fingers. For each finger, they found an individual age between 1000 and 500 yr. This is in contradiction with the idea that Orion BN/KL was produced by a single explosive event and that the expelled clumps are in ballistic motion, so they concluded that there must be some deceleration. Zapata et al. (2009) reported the counterparts of the H$_2$ fingers observed in the CO J = 2 → 1 transition, called CO streamers. Each streamer has a radial velocity that increases linearly with the distance to a common origin and, assuming a simultaneous ejection, they determined the 3D structure and obtained a most probable age of approximately 500 yr. This is in agreement with the age estimated by Rodríguez et al. (2017), who used the proper motions and projected positions of the runaway objects I, n and BN to estimate a close encounter 544 years ago. Also, Zapata et al. (2011a) calculated the age of an expanding bubble seen in $^{13}$CO and centered on the same possible origin of the region. The radial velocity and the size of this outflow result in an age of ∼ 600 years. The momentum and kinetic energy of this outflow are at least 160 M$_\odot$ km s$^{-1}$ and 4 × 10$^{46}$ to 4 × 10$^{47}$ erg, respectively (Snell et al. 1984; Kwan & Scoville 1976). There is a chance that the fingers could have originated at different moments. Perhaps there is an unexplored mechanism able to produce such an extended structure. The machine-gun model has been mentioned as a possible explanation, but previous models (Raga & Biro 1993), even when they are not collimated, are far from being as isotropic as the Orion fingers. Then, the runaway stars (Rodríguez et al. 2017), the expansion of the molecular bubble (Zapata et al. 2011b) and the age determined by the CO streamers (Zapata et al.
2009) are strong evidence of a single and simultaneous event. The widespread ages could then be explained by a dynamical model that takes into account the deceleration of a dense clump by the surrounding environment. There have been several attempts to describe the interaction of a moving cloud with a static medium. De Young & Axford (1967) (hereafter DA) analyzed the plasmon problem, which consists of a moving cloud that adopts a particular density structure, and derived its equation of motion. Cantó et al. (1998) improved the plasmon solution by including centrifugal pressure. Also, Raga et al. (1998) proposed the equation of motion of a static spherical cloud that is accelerated by a high velocity wind due to the ram pressure. More recently, Rivera-Ortiz et al. (2019) (hereafter RO19) proposed a modification of the plasmon problem that considers the mass lost by the clump, which can modify the plasmon's dynamical history if it is embedded in a high density environment. The plasmon problem is based on the direct consideration of the balance between the ram pressure of the environment and the internal, stratified pressure of the decelerating clump. Figure 1 represents the plasmon profile adopted through this pressure balance, the post-shock region, where the material is ionized, and the inner neutral region. A similar representation has been proposed by Burton (1997). The dynamical analysis of the motion of the Orion fingers could then lead to a better understanding of the conditions that formed such a structure.

Figure 1. (a) Schematic representation of the initial clump at the ejection moment. The ejected clump takes a plasmon profile through the pressure balance between the internal pressure and the ram pressure produced by the velocity component v cos α, where α is the angle between the plasmon surface normal and the motion direction. (b) In our model (see RO19) the reverse shock deforms the initial clump, which turns into a plasmon in a negligible time. The environment has a density ρ_a; the plasmon has a velocity v and a density ρ(x) with the density structure studied in DA. The post-shock region that separates the environment from the plasmon structure has been exaggerated for clarity. An intermediate phase between these two cases was well studied by Burton (1997) and Bally et al. (2015).

Bally et al. (2015) performed numerical simulations of the fingers using observational constraints and obtained a notable resemblance to the actual fingers. Nevertheless, as they described, the interpretation of such simulations is limited since they used an adiabatic system, while, in reality, the cooling length is much shorter than the total length of the longest fingers. Therefore, more detailed numerical solutions and an adequate analytic model can be helpful to determine the physical conditions and, perhaps, the ejection mechanism of the fingers, which in turn can help to understand the relevance and duration of similar events in the star forming process. Then, adopting an age of t = 544 yr (Rodríguez et al. 2017), we propose a model to obtain the physical conditions of the ejection. The mass-loss plasmon has an implicit dependence on its own size and it can be used to find better restrictions on the ejection mechanism. In Section 2 we describe the sample of objects to be analyzed, and in Section 3 we present the estimation of the properties of the clumps before the explosive event that generated the Orion fingers in Orion BN/KL. We summarize our conclusions in Section 4.
2.1. Proper motions
From Lee & Burton (2000) and Bally et al. (2011), among others, we have obtained the proper motions of several features and the projected positions from the reported data. In the following paragraphs we describe in more detail how this was done.
• Lee & Burton (2000) analyzed the proper motions of 27 bullets, with emission in [Fe II], and 11 H$_2$ knots, using a time baseline of 4.2 yr (see Figure 2). Of these 38 objects, only 19 have proper motion vectors aligned with the position vectors with respect to IRc2, the possible origin of the explosive event. They used a distance to the Orion Nebula of d = 450 pc (Genzel & Stutzki 1989), which is larger than the currently accepted d = 414 pc (Menten et al. 2007) and leads to overestimating the projected distances and proper motions of the data. We have corrected this effect for this paper. In general, they conclude that the farther features have larger proper motions, which is consistent with, at least, some kind of impulse with an age shorter than 1000 yr. However, it is interesting to note that they reported some H$_2$ knots as almost stationary, but these are not included in the final analysis.
• The proper motions of several HH objects in the Orion nebula have also been measured. For the Orion BN/KL region, 21 HH objects were found to be moving away from IRc2. As in Lee & Burton (2000), the larger objects were found to be faster. HH 210 is also a prominent feature, with a proper motion of almost 400 km s$^{-1}$. The uncertainties lead to a fitted age of 1010 ± 140 yr. Even in this case, several objects are not in the range of 870 to 1150 yr. Also, a distance of 450 pc was used, which has been corrected in this work to 414 pc.
• Bally et al. (2011) (see also Cunningham 2006) obtained the proper motions of 173 fingers in H$_2$, but in this case there is no clear evidence for a linear dependence of the velocity on the projected distance. They only mention that the age of the event could be between 500 and 1000 yr, if the simultaneous ejection assumption is maintained. The three data sets are represented in Figure 2.
• Also, Zapata et al. (2009) analyzed the CO streamers that seem to be related to the fingers. These streamers are ∼2 times shorter and narrower than the fingers and each one follows a Hubble law. The kinematic age of each one could be related to the projection angle with respect to the plane of the sky and, assuming that the explosion was isotropic, they found that the most probable age is around 500 yr. More recent ALMA observations found additional streamers and confirmed that the streamers have an isotropic extension. This means that some of the CO streamers do not have associated fingers.

2.2. Mass, density and size
On the other hand, from Rodríguez et al. (2017) and Cunningham (2006) we have obtained the mass, density and size of several features and the projected positions from the reported data. In the following paragraphs we also describe in more detail how this was done.
• Recently, Rodríguez et al. (2017) measured, with high precision, the proper motions of the objects I, BN and n. They found that these objects had to be ejected from a common origin 544 ± 6 yr ago. This uncertainty does not take into account systematic effects, which can increase it up to ± 25 yr. In any case, 544 years is consistent with the age determined by the CO streamers of about 550 years. In this work, we assume this event to be the origin of the ejection of the material that created the fingers and the streamers.
• Cunningham (2006) measured 8 M$_\odot$ as the mass of the moving gas.
We can use this estimate to find upper limits for either the mass of an individual clump or its size. Nevertheless, due to the complexity of the region, there is an uncertainty of a factor of two in this mass estimate.
• For the mass, we assume that the observed moving gas corresponds exclusively to that of the ejected clumps. Since there are 200 fingers, the average mass of each clump is simply 8/200 = 0.04 M$_\odot$. A lower limit for the clump mass is that calculated by Allen & Burton (1993) and Burton & Allen (1994) of 10$^{-5}$ M$_\odot$, based on the [Fe II] 1.64 µm line flux and size.
• On the other hand, an upper limit for the size of the initial clump is obtained by adopting the opposite assumption, that is, that all the moving mass comes from the swept up environmental material and a negligible amount from the clumps themselves. To follow this idea we have to fix the density of the environment. Extinction observations of the region by Oh et al. (2016) indicate densities between 10$^5$ and 10$^7$ cm$^{-3}$. We adopt the latter limit, n_a = 10$^7$ cm$^{-3}$. In reality, the density is highly structured (Kong et al. 2018; Bally et al. 1987). A better approximation would be to assume cylindrical symmetry for the Integral Spine Filament with a steep density gradient orthogonal to the spine. In this paper we assume a homogeneous environment; a cylindrical density profile would require improving the presented plasmon dynamics.

3. ANALYTIC MODEL
We now model a finger as a cylinder of radius R_cl and individual length l_i. Thus, the mass swept up by all the fingers (assuming the same radius) is

M_t = π R_cl^2 µ m_H n_a Σ_i l_i,    (1)

where µ = 2 is the mean molecular mass, m_H is the mass of hydrogen and n_a is the number density of the ambient medium. Considering, as a limit, that M_t = 8 M$_\odot$ is equal to the accelerated mass, we obtain R_cl ∼ 90 au; this is therefore the upper limit for the initial size of the ejected clumps.

Figure 2 caption (data symbols): the H$_2$ fingers are taken from Cunningham (2006), the filled circles stand for the [Fe II] bullets (Lee & Burton 2000) and the crosses represent the HH objects; the lines indicate an age consistent with no deceleration, t_1 = 500 yr (dashed) and t_2 = 1000 yr (dot-dashed).

3.1. Ballistic motion
The simplest model is to suppose that every ejected clump travels with constant velocity, so that its motion is described by

r = v t.    (2)

Since the projected length, r, and the velocity, v, also in projection, are observational data, the age of each clump can be obtained straightforwardly as

t = r / v,    (3)

which is independent of projection. Therefore each clump has an individual age and, if we assume that all of them were ejected in a single event, each age should be, at least, similar. This is far from what we observe. In Figure 3 we show the result of Equation (3) applied to each data set. The error spread for the age was calculated using standard error propagation. The reported velocity errors are 10 km s$^{-1}$ for all the HH objects, 25 km s$^{-1}$ for all the H$_2$ fingers (Cunningham 2006), and, for the [Fe II] bullets, the individual values reported in Lee & Burton (2000). Figure 3 then implies that either there was no simultaneous event or the ballistic motion model is not an appropriate assumption. Deceleration is the most likely interpretation. Notice that the plasmon model assumes an early interaction of the original clump with the environment that quickly modifies its initial characteristics (shape, density stratification or sound speed) to those of a plasmon.
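As a quick numerical illustration of Equation (3) and of the standard error propagation used for the ages in Figure 3, consider the following sketch; the input values are purely illustrative and are not taken from the observed data sets.

```python
import math

AU_KM = 1.495978707e8   # 1 au in km
YR_S = 3.1557e7         # 1 Julian year in seconds

def kinematic_age(r_au, v_kms, sigma_r_au=0.0, sigma_v_kms=0.0):
    """Ballistic age t = r/v (Eq. 3) with standard first-order error propagation."""
    t_yr = r_au * AU_KM / v_kms / YR_S
    rel_err = math.sqrt((sigma_r_au / r_au) ** 2 + (sigma_v_kms / v_kms) ** 2)
    return t_yr, t_yr * rel_err

# illustrative finger head: 20000 au from the origin, moving at 200 +/- 25 km/s in projection
t, dt = kinematic_age(2.0e4, 200.0, sigma_v_kms=25.0)
print(f"t = {t:.0f} +/- {dt:.0f} yr")   # roughly 470 +/- 60 yr
```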
The ram pressure, however, prevents the plasmon's free expansion, and this effect gives shape to the material (see also Rivera-Ortiz et al. 2019, Figure 1).

3.2. Dynamic model
In order to determine the fundamental parameters that control the dynamics of a high velocity clump, such as the ejection velocity v_0, the initial size of the clump R_cl, the density of the ejected material ρ_cl and the density of the environment ρ_a, or their initial density contrast β = ρ_a/ρ_cl, we use an analysis based on the plasmon proposed by DA. Assuming a spherical clump at ejection, the initial mass can be expressed as

M_0 = (4/3) π R_cl^3 ρ_cl.    (4)

We assume that every clump was ejected with the same size (R_cl = 90 au) and that the environment density is 10$^7$ cm$^{-3}$; therefore we can estimate the ejection conditions. The plasmon density is not constant because of the enlargement of the traveled distance and the mass detachment included in the model. In this section we explore a model which takes into account the deceleration of the clump as it loses mass due to the interaction with the environment. This is the model developed in RO19. As stated in RO19, no matter the physical characteristics of the original clump (shape, size, density, velocity or temperature), the initial interaction of the clump with the surroundings will transform it into a plasmon as proposed by DA, Cantó et al. (1998) and RO19. Mass, on the other hand, is preserved. RO19 shows that the mass M, velocity v, and position R of the newly created plasmon after a time t from ejection/formation are given in parametric form by Equations (5), (6) and (7), respectively (see RO19 for their explicit form), where M_0 is the initial mass of the clump, v_0 the ejection velocity, u = v/v_0 a dimensionless velocity, α a parameter given by Equation (8), and t_0 a scale time given by Equation (9), with ξ_DA = 9.22 from the DA model, λ = 0.0615, and γ = 1.4 the adiabatic coefficient for an ideal diatomic gas. Combining Equations (8) and (9), we obtain Equation (10). The purpose of the present paper is to use Equations (4) to (10) to estimate the physical parameters, such as mass, ejection velocity and density, of each of the original clumps that produced the fingers we see today, which were formed by the interaction of the clumps with the surrounding molecular cloud. We begin by assuming that all the clumps were ejected in a single explosive event that took place 544 years ago from the place of the closest interaction that expelled the BN, n and I objects reported by Rodríguez et al. (2017). So, in Equation (6) we set t = 544 yr for all the clumps, although each clump had its own initial mass and ejection velocity. Next, for each clump we know, from observations, its distance to the origin of the explosion R and its current velocity v. Both quantities are those on the plane of the sky. However, we take them as estimates of the real values, since there is no way to de-project them without making further assumptions. Even so, we need to make one further assumption, since we have more unknowns than equations. We might, for instance, choose to assume a fixed value of β, which means the same initial density for each clump, or, perhaps, the same initial mass, or any other reasonable constraint. We choose, however, to assume a unique initial radius for all the clumps of R_cl = 90 au, based on the assumption that all the clumps were produced by the close encounter of two protostellar objects that ripped off material with the same interaction cross section.
Then we have a set of equations (Equations 5, 6 and 10) that can be solved simultaneously for $v_0$, $t_0$ and $\alpha$, and from Equation (4) we can also obtain the mass of each ejected clump. The number density of the surroundings was taken as $n_a = 10^7\,\mathrm{cm^{-3}}$. In Figure 4 we show the trajectories of clumps in the $v$-$R$ plane as calculated by our model, using Equations (5) to (10). A fixed clump radius $R_{cl} = 90$ au was assumed in all the calculations. In the upper panel, we have taken a fixed initial velocity $v_0 = 500$ km s$^{-1}$ and varied the initial mass from $2 \times 10^{-2}\,M_\odot$ (the lower dashed line) to $2 \times 10^{-1}\,M_\odot$ (the upper dashed line). The solid line marks the time $t = 500$ yr after ejection. In the bottom panel, the initial clump mass is fixed at $M_0 = 0.2\,M_\odot$ and each dashed line corresponds to a different initial velocity $v_0$, from 100 to 1100 km s$^{-1}$. The solid line, again, marks the time $t = 500$ yr after ejection. Note that these clumps stop at the same distance, in this case 75,000 au. In Figure 5, we can see that the model curves that envelop the data set do not require high-mass ($> 0.2\,M_\odot$) or high-velocity ($> 800$ km s$^{-1}$) clumps. We could expect slow clumps at distances greater than $8 \times 10^4$ au, but there is no evidence of such clumps; in this case 800 km s$^{-1}$ is the largest ejection velocity needed to reproduce the longest features. Also, a plasmon with an ejected mass of $0.2\,M_\odot$ will reach a final distance of $\sim 8 \times 10^4$ au. This means that a less massive plasmon ejected at less than 800 km s$^{-1}$ could be near the end of its lifetime, or may already have stopped. This could explain the CO streamers that are not related to any H$_2$ finger. Finally, the RO19 plasmon solution is applied to each object of the data sets of Sect. 2.1, and the initial mass, ejection velocity and lifetime are obtained and shown in Figures 6, 7 and 9, respectively. The total mass (Figure 6) is $11.93\,M_\odot$, with a mean mass of $0.06\,M_\odot$, which is close to the limit of $4 \times 10^{-2}\,M_\odot$ derived in Section 2.1. Figure 7 shows the ejection velocity distribution. It is interesting to note that there are two peaks in this distribution, around 200 and 500 km s$^{-1}$. Further analysis is required to propose an explosion mechanism that could explain this characteristic. The total kinetic energy of the model is $3 \times 10^{49}$ erg. Once the ejection parameters are obtained, we can infer the lifetime and stopping distance of each clump by setting $v = 0$ in Equations (6) and (7). In Figure 9 we show the distribution of clump lifetimes. This gives an idea of the duration of the observable phase of the explosive event: in this case, 2000 yr after the explosion only a few fingers will remain, which may be the reason why only a few cases of encounters of this kind are observed. Finally, in Figure 10 we show the time and position of each clump compared with its own lifetime and stopping distance, respectively. Again, there is a tendency for most of the clumps to be near the end of their lives. This suggests that some fingers may already have ended their lives, explaining why there are H$_2$ features with no proper motion and CO streamers with no associated H$_2$ fingers. This characteristic could be explained in terms of extinction, but the radial velocities of the H$_2$ fingers are needed in order to correctly associate them with the CO streamers. CONCLUSIONS The plasmon model is a useful tool for the analysis of the dynamics of a clump interacting with a dense environment.
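The quoted total kinetic energy can be checked at the order-of-magnitude level with representative numbers; this is a consistency sketch using assumed round values (200 clumps at the mean mass, evaluated at the two velocity peaks), not the sum over the actual per-clump results.

```python
M_SUN = 1.989e33   # g
KM_S = 1.0e5       # cm/s

# Assumed representative values: 200 clumps at the mean mass of 0.06 Msun,
# evaluated at the two peaks of the ejection-velocity distribution.
n_clumps = 200
mean_mass_g = 0.06 * M_SUN
for v0_kms in (200.0, 500.0):
    v0 = v0_kms * KM_S
    e_total = n_clumps * 0.5 * mean_mass_g * v0 ** 2
    print(f"v0 = {v0_kms:.0f} km/s -> E_kin ~ {e_total:.1e} erg")
# The 500 km/s case gives ~3e49 erg, the order of the total kinetic energy quoted above.
```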
Using the dynamic models presented in DA and RO19, we estimate the physical features (initial velocities and masses) of the components (clumps, [Fe II] bullets and HH objects) reported in Lee & Burton (2000) and Cunningham (2006). We obtain an individual maximum clump mass of $0.2\,M_\odot$, while the maximum velocity of this sample is 800 km s$^{-1}$. The total kinetic energy, in this case, is $\sim 3 \times 10^{49}$ erg, which is about $10^2$ times the energy inferred from the total luminosity of the Orion fingers region. Two other consequences of the plasmon model are that larger ejection velocities produce shorter lifetimes, and that the initial mass of a clump determines its stopping distance. The RO19 plasmon predicts that the longest fingers in Orion BN/KL have almost reached the end of their lifetimes and are not far from their final lengths, and that ejection velocities as high as 800 km s$^{-1}$ are required to reproduce the observations. This implies that the slower fingers could have lifetimes as long as 3000 yr, while the signatures of the explosion could disappear in 2000 yr. The mass-loss plasmon can explain why no longer fingers are visible: if clumps were ejected with higher speeds or lower masses, they could have faded by now. Also, the ejection velocities required for most of the longest fingers are about 500 km s$^{-1}$, which is less than twice their observed velocity. Using the RO19 model we therefore obtained the initial mass of each clump; the mass distribution shows that a large fraction of the clumps have masses in the interval $8 \times 10^{-3}$ to $2 \times 10^{-1}\,M_\odot$, and the velocity distribution shows two populations, one with a maximum at 200 km s$^{-1}$ and another at 500 km s$^{-1}$. Finally, comparing the calculated time and position of each clump with its expected lifetime and stopping distance, we see a tendency for most of the clumps to be near the end of their lives. We propose that some fingers have already ended their lives, which explains why there are H$_2$ features with no proper motion and CO streamers with no associated H$_2$ fingers.
Cofilin-1 and Other ADF/Cofilin Superfamily Members in Human Malignant Cells Identification of actin-depolymerizing factor homology (ADF-H) domains in the structures of several related proteins led first to the formation of the ADF/cofilin family, which then expanded to the ADF/cofilin superfamily. This superfamily includes the well-studied cofilin-1 (Cfl-1) and about a dozen different human proteins that interact directly or indirectly with the actin cytoskeleton, provide its remodeling, and alter cell motility. According to some data, Cfl-1 is contained in various human malignant cells (HMCs) and is involved in the formation of malignant properties, including invasiveness, metastatic potential, and resistance to chemotherapeutic drugs. The presence of other ADF/cofilin superfamily proteins in HMCs and their involvement in the regulation of cell motility were discovered with the use of various OMICS technologies. In our review, we discuss the results of the study of Cfl-1 and other ADF/cofilin superfamily proteins, which may be of interest for solving different problems of molecular oncology, as well as for the prospects of further investigations of these proteins in HMCs. Introduction The key features of malignant neoplasms include uncontrolled proliferation, as well as the ability to invade surrounding tissues (invasion) and to spread locally and regionally or even to distant parts of the body (metastasis). These features are the basis for ideas (which appeared in the 19th century) about the common origin of malignant tumors from stem cells [1,2] and for revealing typical patterns that are associated with tumor phenotypes [3], in particular, by using different OMICS technologies [4]. Nevertheless, malignant tumors vary by tissues of origin and types of differentiation. Moreover, there is a body of evidence that the majority of malignant tumors have intratumoral cell heterogeneity, i.e., are composed of multiple clonal subpopulations of tumor cells with heterogenic morphology that differ on functional properties, in particular on invasive and metastatic potential. Accordingly, malignant tumors can significantly differ by gene expression patterns, including those that are involved in the regulation of proliferation, invasion and metastasis [5][6][7]. The invasion and metastasis are considered to be caused by the dysregulation of motility of malignant cells (see, e.g., Bravo-Cordero et al. [8] and Martin et al. [9]). The accumulated data suggests that changes in cell motility can be triggered by certain actin-binding proteins (ABPs) which provide the formation, function, and restructuring of the actin cytoskeleton [10][11][12][13]. The detection of these In the 1980s, several different proteins with actin-depolymerizing activity were identified in vertebrates [22,23]. According to various authors, actin-depolymerizing proteins were characterized by molecular weight (MW) ~19 kDa [22] or ~93 kDa [23]. Almost at the same time proteins with MW ~19 kDa became known as cofilins for their ability to form cofilaments with actin [24]. A similar protein with low MW was termed destrin (destroys F-actin; Dstn), or ADF (e.g., Vartiainen et al. [25] and UniProt P60981). An actin-depolymerizing protein with MW ~93 kDa proved to be gelsolin [23,26]. Confusingly, the alternative name ADF is sometimes used for gelsolin as well as for destrin (UniProt P06396). 
Three closely related actin-depolymerizing proteins that are usually identified in most vertebrates, Cfl-1, Cfl-2, and Dstn (ADF), are often referred to as traditional cofilins. In the late 1990s, traditional cofilins and some related proteins found in different species began to be regarded as a special family, called the ADF/cofilin family [25,27,28]. At the turn of the 20th-21st centuries, Lappalainen et al. found special actin-binding modules of about 150 amino acid residues in the polypeptide chains of ADF/cofilins [29]. These modules form specific three-dimensional structures with six-stranded mixed β-sheets. The abovementioned modules were named actin-depolymerizing factor homology domains, or ADF-H domains (Figure 1). Cfl-1 is widely distributed in various tissues and is named the non-muscle isoform (UniProt P23528). Cfl-2, the muscle isoform, may exist in at least two variants due to alternative splicing of a single gene, CFL2 [43]. One of these isoforms (Cfl-2b) is present in skeletal muscle and heart, and the other (Cfl-2a) has been revealed in various tissues (see also UniProt Q9Y281). Dstn, encoded by the DSTN gene, is also widely distributed in various tissues (UniProt P60981). ADF/cofilins can bind F-actin and sever actin filaments. On the one hand, severing of the actin filament causes actin depolymerization. On the other hand, it can lead to actin polymerization directly or indirectly by producing free barbed ends [44]. Along with binding of F-actin, ADF/cofilins have the ability to bind G-actin in a 1:1 ratio [24,29]. It is currently believed that the molecules of the traditional ADF/cofilins have two distinct actin-binding sites, the G/F-site located in the C-terminus and the F-site located in the N-terminus. The F-site is involved in the binding of F-actin, and the G/F-site is required for binding to both G-actin and F-actin [45]. The functionally important amino acid residues at the N-terminal end of the human cofilins are shown in Figure 2. ADF/cofilins bind preferably to the ADP-forms of G- or F-actin and use energy from ATP hydrolysis in actin polymerization [46]. It has been demonstrated that cofilin can directly bind not only to actin, but also to phosphatidylinositol 4,5-bisphosphate (PIP2) [47] and to serine/threonine-protein kinase LIMK1 [48]. ADF/cofilins from vertebrates are found to contain nuclear localization sequences (see Figure 2 and UniProt P23528).
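If Figure 2 is not at hand, the N-terminal stretches being compared can be retrieved directly from the UniProt accessions quoted above; the sketch below assumes the public UniProt REST endpoint (rest.uniprot.org) is reachable and simply prints the first residues of each traditional human cofilin, the region that contains the regulatory Ser3 site discussed later.

```python
import urllib.request

# Accessions quoted in the text for the three traditional human cofilins.
ACCESSIONS = {"Cfl-1": "P23528", "Cfl-2": "Q9Y281", "Dstn": "P60981"}

def fetch_sequence(accession):
    # Assumes the public UniProt REST endpoint is available; adjust if the API changes.
    url = f"https://rest.uniprot.org/uniprotkb/{accession}.fasta"
    with urllib.request.urlopen(url) as response:
        lines = response.read().decode().splitlines()
    return "".join(line for line in lines if not line.startswith(">"))

for name, accession in ACCESSIONS.items():
    seq = fetch_sequence(accession)
    # Print the N-terminal 20 residues, the stretch that contains Ser3.
    print(f"{name} ({accession}): {seq[:20]}... length={len(seq)}")
```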
Figure 2. N-termini of human traditional cofilins Cfl-1, Cfl-2 and destrin (Dstn) according to UniProt (P23528, Q9Y281, P60981, respectively). Red "S" indicates serine residues which can be phosphorylated. Identical regions of amino acid sequences are framed. Nuclear localization signals are labeled in yellow. Repeating hydrophobic amino acid residues are labeled in green. Repeating positively charged amino acid residues are labeled in blue. Repeating negatively charged amino acid residues are labeled in gray. Starting parts of ADF-H domains (see below) are shown by the dotted red lines. The proteins from the second group (twinfilins) have two tandem ADF-H domains that are located near the N-terminus of the polypeptide chain and are separated by a linker area of several dozen amino acid residues. Typical twinfilins have a MW of about 40 kDa. It has been shown that at least in humans, mice, and S. cerevisiae, twinfilins are presented by two isoforms, each of which is encoded by its own gene (e.g., TWF1 and TWF2 in human, according to UniProt Q12792 and Q6IBS0). Additionally, in mice, an alternative promoter is responsible for production of two proteins: TWF-2b in striated muscles (heart and skeletal muscles) and TWF-2a mainly in non-muscle tissues and organs [49]. Twinfilins can interact with G-actin forming 1:1 complexes, and some of the twinfilins can bind F-actin as well. In mammals, the two ADF-H domains of twinfilins allow both capping of the barbed end of actin filaments and sequestering of actin monomers [50]. The third group is composed of drebrins and Abp1s, proteins with a single ADF-H domain, but with higher MW (~70 kDa) than the traditional cofilins and twinfilins. Drebrins are typical for vertebrates. Three isoforms, embryonic (E1 and E2) and adult (A), have been found to be generated by alternative splicing from a single gene, DBN1. In humans, drebrins are present in brain neurons and also in the heart, placenta, skeletal muscle, kidney, pancreas, peripheral blood
lymphocytes including T-cells (see [51] and UniProt Q16643). Abp1 proteins have a slightly lower MW than drebrins, but a similar primary structure. Abp1s have been found in mammals, including humans [37,38]. The human Abp1 protein has a MW of 48 kDa and a polypeptide chain structure very similar to that of typical drebrins. This fact has served as the basis for the recommended name of this protein, drebrin-like protein (synonyms: hematopoietic progenitor kinase 1-interacting protein of 55 kDa (HIP-55), drebrin-F) (UniProt Q9UJU6). Drebrins and Abp1s have a single ADF-H domain in their N-termini, followed by a nonconserved central region and a C-terminal region. These proteins have been shown to bind F-actin and stabilize actin filaments. Some proteins of this group (but not human drebrin) have a C-terminal Src homology 3 (SH3) domain [50]. The fourth group is represented by the GMF-family proteins. These proteins have a small MW (14-17 kDa). GMF has been found in the tissues of some vertebrates. Despite the presence of an ADF-H domain, GMF is not able to directly bind actin. GMF-B, which is present in the brain of all vertebrates, is not able to bind actin (UniProt P60983). GMF-G is present predominantly in lung, heart, and placenta (e.g., [52] and UniProt O60234). It has structural similarity to GMF-B; however, unlike GMF-B, it was found to interact with F-actin [52,53]. Goroncy et al. analyzed the structure of ADF-H domains of GMF proteins. The authors obtained recombinant mouse GMF-B and GMF-G proteins, and studied their structures using nuclear magnetic resonance spectroscopy [39]. Both GMF structures displayed two additional β-strands in one of the loops. These β-strands were not seen in the protein structures of other ADF-H classes; thus, according to Goroncy et al. [39], these β-strands may be a class-defining feature. Both GMF-B and GMF-G can interact with the actin-related protein 2/3 (Arp2/3) complex, inhibit its activity and induce actin disassembly [40]. Another member of the GMF family, GMF1, the yeast protein discovered by Nakano et al. [40], is also able to interact with the Arp2/3 protein complex and to suppress its activity. The fifth, separate, group includes coactosin from D. discoideum and coactosin-like proteins (from different species, including that of H. sapiens; UniProt Q14019). These proteins are entirely composed of a single ADF-H domain and have a MW (about 17 kDa) similar to the MW of traditional cofilins.
However, unlike ADF/cofilins, coactosin and coactosin-like proteins bind only F-actin and do not promote actin depolymerization [40,50]. Moreover, some antagonistic relations between the traditional cofilins and coactosin-like 1 protein have been reported [54]. Interestingly, coactosin from Entamoeba histolytica has been recently described as an unusual type of coactosin which binds both F-and G-actins [55]. Biological Functions Traditional cofilins, the most well-studied members of the ADF/cofilin superfamily, are known to modulate actin dynamics by catalyzing actin depolymerization or polymerization through the severing of actin filaments. The effect of cofilins on actin filaments (assembly or disassembly) depends on the concentration of active cofilins, the relative concentration of G-actin, and some protein factors. In low concentrations, ADF/cofilins sever the actin filaments and promote depolymerization. High concentration of cofilins is suggested to promote actin nucleation and polymerization [64]. Cofilins can contribute to actin polymerization producing free barbed ends and supplying actin monomers. Cfl-1 is currently understood to modulate actin nucleation and filament branching through synergy or competition with the Arp2/3 complex. The Arp2/3 protein complex is a seven-subunit complex of actin-related proteins that enables binding to actin, providing nucleation and formation of actin branches [65,66]. The formation of actin branches is one of the key events of the production of lamellipodia, which are essential for cell motility. Cfl-1 and Arp2/3 have been shown to work in synergy (i.e., with a cooperative effect) producing free barbed ends for actin polymerization [67]. In parallel, Cfl-1 can reduce the affinity of the Arp2/3 complex for filaments and promote dissociation of old actin branches [68]. Cfl-1 and Cfl-2 have also been shown to regulate the assembly of actomyosin complex blocking the binding of tropomyosin and myosin II to actin filaments [24,69]. It was found that, in vivo, cofilins participated in the reorganization of actin cytoskeleton in response to stresses and different cell stimuli [70]. Overexpression of cofilins leads to the formation of stress fibers, contractile actin bundles that have been found in non-muscle cells and shown to play an important role in cellular contractility, providing cell adhesion, migration (including assembly of lamellipodia and filopodia), and morphogenesis [71]. Due to this function, cofilins are regarded as molecular regulators of development processes. Cfl-1 and destrin are required for ureteric bud branching morphogenesis [72]. According to Sparrow et al. [73], Cfl-1 is necessary for dynamic changes in the cytoskeleton needed for axon engagement and is essential for Schwann cell myelination. Evidence for the involvement of Cfl-1 (and the Arp2/3-complex) in the regulation of axonal growth cones has been recently reviewed by Dumpich et al. [74]. Cofilin can also participate in regulation of cell proliferation in response to mechanical stresses. In mammalian epithelial cells it inhibits through the cytoskeleton remodeling activity of Yes-associated protein 1 (YAP1) and Translin-associated zinc finger protein 1 (TAZ1), mediators of Hippo signaling pathway and organ growth, thus inhibiting cell proliferation [75]. Numerous data on the participation of Cfl-1 in development are summarized in review [76]. To sum up, ADF/cofilins play an essential role in the controlling of actin dynamics. 
They have a dual effect on actin filaments and may contribute to cellular contractility through both the local actin depolymerization and the formation of stress fibers, and therefore they are important for morphogenesis and development. In addition, cofilins have also functions in cells that are not directly related to the regulation of actin dynamics. The first is that cofilins can provide transport of actin molecules (which do not contain the nuclear localization signals) to the nucleus [77]. Using immunofluorescence microscopy, Ono et al. revealed ADF (Dstn) and cofilin in nuclei of cultured myogenic cells and demonstrated the colocalization of ADF and cofilin in intranuclear actin rods [78]. G-actin which is transported to the nucleus by means of Cfl-1 may act as a key player for nuclear structure and function regulating both chromosome organization and gene activity (e.g., see [79]). Cofilin has been characterized as a connecting link between T-cell co-stimulation and actin translocation to the nucleus [80,81]. Co-stimulatory signals from ligand attachment to accessory receptors like the cluster of differentiation 2 (CD-2) are required for the production of the T-cell growth factor interleukin 2 (IL-2) and cell proliferation. In T lymphocytes, cofilin is a component of the costimulatory signaling pathways: CD-2 stimulation leads to dephosphorylation of cofilin, binding to G-actin and translocation into the nucleus [80,82]. In addition to G-actin, Cfl-1 is also able to transport to the nucleus various regulatory proteins that affect the processes of transcription (e.g., Runt-related transcription factor 2 (Runx2)) and cell differentiation [83]. The other function of cofilins that is not related to the regulation of actin dynamics is their participating in apoptosis. Cofilin oxidation and translocation to the mitochondrion has been found to induce apoptosis through the opening of the mitochondrial permeability transition pore and release of cytochrome c [84]. At last, Cfl-1 has been shown to directly activate phospholipase D1 which is important for cell migration [85,86]. The essential roles of traditional cofilins (Cfl-1, Cfl-2 and Dstn) in mammals have been proved by experiments on cofilin/ADF-knockout mouse strains [87][88][89]. In such experiments, the homozygous mice Cfl-1 โˆ’/โˆ’ were embryonic lethal while heterozygous mice Cfl-1 +/โˆ’ were viable. It was shown that Cfl-1 was not essential for the extensive morphogenetic movements during gastrulation, because the other proteins (e.g., Dstn) can provide cellular contractility instead of Cfl-1 at this stage of embryogenesis. However, the Cfl-1 knockout at later stages dramatically altered the processes of neuronal development. Although Dstn was overexpressed in mutant embryos Cfl-1 โˆ’/โˆ’ , this could not compensate for the lack of Cfl-1, suggesting that these proteins might have a different function in embryonic development. Mice lacking ADF were viable and had no alterations during embryonic development [87]. The Cfl-2 knockout led to severe protein aggregate myopathy in a mouse model [89]. The various cellular functions of traditional cofilins including those in regulation of nuclear integrity and transcriptional activity, apoptosis, nuclear actin monomer transfer, and lipid metabolism are discussed in recent review of Kanellos and Frame [90]. A schematic model summarizing the Cfl-1 functions in vertebrates is shown in Figure 3. 
Similarly to cofilins, twinfilins are also involved in the regulation of actin dynamics and can participate in formation of cellular protrusions such as lamellipodia and filopodia in collaboration with other actin binding proteins (Arp2/3, cortactin, etc.) [91]. In Drosophila twinfilin is required for cell migration and endocytosis. In mammalian cells, TWF-1 is also involved in endocytosis and migration, and participates in cell morphogenesis [50]. TWF-2a is shown to be involved in the morphogenesis of neurons. TWF-2a knockout mice developed normally without any abnormalities, due to the fact that it is typically co-expressed in the same tissues with TWF-1 and has similar function [92]. The specific role of TWF-2b, which is expressed exclusively in heart and skeletal muscles, is currently unclear. Drebrin and Abp1 have been shown to regulate actin filament organization, especially during development of neuronal cells. Drebrin E is highly abundant in the developing brain. This protein may modulate actomyosin interaction within dendritic spines and alter spine shape [93].
Similarly, drebrin (isoform E) is involved in the regulation of axonal growth through actin-myosin interactions [94]. Drebrin E regulates neuroblast migration in the postnatal mammalian brain [95]. Drebrin A predominates in neurons of the adult forebrain. Neuronal drebrin (isoform A) inhibits cofilin-induced severing of F-actin due to direct competition between these two proteins for F-actin binding [96]. Drebrin (E2 isoform) has been also found in various non-neuronal cells, including fibroblasts, stomach and kidney epithelia [97], and keratinocytes [98], where it plays a role in, for example, adhering junctions. Abp1 is shown to be implicated in endocytotic processes. It uses C-terminal SH3 domain to bind various proteins including regulators of endocytosis. Particularly, it associates with dynamin, a large GTPase essential for vesicle fission [99]. Due to its ability to interconnect the actin cytoskeleton and participate in endocytosis, Abp1 regulates lymphocyte and leukocyte responses [38,50,100]. GMF does not bind actin, but binds Arp2/3 complex and suppress its activity which results in stimulation of filament debranching and inhibition of actin nucleation [101]. Nakano et al. described the blocking of the Arp2/3 complex by GMF1 protein as a reason for the modulatory effect of GMF1 on the yeast actin cytoskeleton [40]. GMF has been shown to regulate lamellipodial protrusion dynamics and cell migration [102]. GMF-B including the human one is also not able to bind actin. To date, it has been established that GMF-B induces synthesis of some proinflammatory cytokines, as well as influences the differentiation and aging of various cells of the nervous system in normal and pathological conditions (e.g., see [103,104] and UniProt P60983). In fibroblasts, GMF-B controls branched actin content and lamellipodial dynamics [105]. The main function of GMF-G is still unclear. This protein found predominantly in lung, heart, and placenta is capable of interacting with F-actin and influencing cell motility [52,53]. Functions of coactosin and coactosin-like proteins are insufficiently understood. It has been shown that coactosin inhibits barbed end capping of actin filament and is involved in actin polymerization. The knockdown of coactosin has resulted in the disruption of actin polymerization and of neural crest cell migration [106]. In chick embryos, coactosin was expressed during morphogenetic movement and associated with actin stress fibers in cultured neural crest cells [107]. In vitro studies demonstrated that coactosin-like protein can protect F-actin from cofilin-mediated depolymerization [54]. Additionally, coactosin-like protein is known to support the activity of 5-lipoxygenase, an enzyme involved in leukotriene biosynthesis. Coactosin-like protein binds 5-lipoxygenase and translocates it from cytosol to the nucleus. In coactosin-like protein knockdown human monocytic cell line, the activity of 5-lipoxygenase is decreased, but not absent [108]. Regulaton The activity of ADF/cofilin superfamily members is regulated by various mechanisms. ADF/cofilins are shown to be regulated by pH, phosphatidylinositols, protein kinases, and phosphatases, as well as some other proteins. Moreover, their activities can depend on cellular redox status. It is well known that F-actin binding and depolymerizing activity of cofilins depends on pH. Yonezawa et al. 
reported that in vitro, in an F-actin containing model system, at pH < 7.3 the concentration of monomeric actin (G-actin) was less than 1 ยตM, even with an excess of cofilin added [109]. However, at pH > 7.3 the concentration of G-actin increased proportionally to the concentration of cofilin added, until the complete depolymerization of F-actin. The authors formed the conclusion that cofilin is capable of reversibly controlling actin polymerization and depolymerization in a pH-sensitive manner. Later, pH was demonstrated to modulate cofilin activity in vivo [110]. However, pH sensitivity is apparently not a common feature of all ADF/cofilins in all species. For example, mouse Cfl-1 unlike human has been shown to be pH-independent, as well as mouse Cfl-2 [25]. Membrane phosphoinositides, particularly PIP2, are also known to regulate ADF/cofilin activity. Cofilins can directly bind phosphatidilinositols, and PIP2-binding area on the surface of the cofilin molecule overlaps with the actin-binding site [47]. Therefore, binding to PIP2 leads to inhibition of ability to bind to actin. Changes in PIP2 density of the cellular membrane can regulate a balance between membrane-bound and free active ADF/cofilins [111]. Phosphorylation of Cfl-1 on a serine residue (Ser3) inhibits its binding to F-and G-actin [112]. Similar data were obtained for Dstn [113]. Only dephosphorylated (active) cofilin can carry out the functions associated with binding of actin and protein translocations to the nucleus and mitochondrion. In contrast, phosphorylated cofilin is required to activate phospholipase D1 [114]. The regulation of cofilins by phosphorylation/dephosphorylation is performed via signaling pathways involving kinases and phosphatases in response to extracellular signals and changes in microenvironment [14,17,115]. In mammals, Cfl-1 has been shown to be phosphorylated and inactivated by LIM-kinases (LIMK1, LIMK2) and testicular protein kinases (TESK1, TESK2). Conversely, cofilin is dephosphorylated and activated by slingshot protein phosphatases (SSH1, SSH2, SSH3), protein phosphatases 1 and 2A (PP1, PP2A), and chronophin (CIN) (for a review, see [75]). Reactions of the phosphorylation/dephosphorylation of cofilins have a significant impact on modulation of actin dynamics, thus influencing cell motility and morphogenesis in vertebrates [116,117]. For this reason, kinases and phosphatases of cofilins may play a crucial role in the development. The overexpression of LIMK1 or inactivation of SSH1 results in abnormal accumulation of F-actin and incorrect cytogenesis during mitosis [118]. Since LIMK1 inactivates cofilin, it has been thought to downregulate lamellipodium formation and inhibit cell migration [119]. However, treatment of Jurkat T cells with LIMK1 inhibitor has been shown to block stromal cell-derived factor (SDF) 1ฮฑ-induced chemotaxis of T cells [120]. It has been assumed that LIMK1-catalyzed phosphorylation of cofilin is essential for chemotactic response of T lymphocytes, but the results from Condeelis' group, who showed that non-phosphorylatable mutant cofilin provides the generation of protrusions and determines the direction of cell migration, have contradicted the fact that phosphorylation and inactivation of cofilin are crucial for cell motility [121]. Nevertheless, further experiments confirmed the positive role of LIMK1 in migration of chemokine-stimulated Jurkat T cells. 
The cell migration turned out to be suppressed by LIMK1 knockdown, whereas knockdown of SSH1 causes the formation of lamellipodia around the periphery of the cell after cell stimulation [122]. Thus, it has been proposed that LIMK is required for generation of multiple lamellipodia in the initial stages of the cell response, and SSH1 is needed to restrict lamellipodial protrusions for directional cell migration [123]. In fact, although LIMK seems to be a positive regulator of cell migration, mechanisms for this regulation are still not completely understood. Apart from the kinases and phosphatases already described, the interaction of ADF/cofilins with actin can be directly or indirectly regulated by a wide range of other proteins. The binding of cofilin to cortactin is one of the mechanisms of cofilin inactivation which is typical for podosomes and invadopodia, actin-based dynamic protrusions produced by invasive cancer cells, vascular cells, and macrophages [124,125]. Actin-interacting protein 1 (AIP1) and cyclase-associated protein 1 (CAP1) promote the disassembly of cofilin-bound actin filaments [126,127]. Coronin provides recruiting cofilin to filament sides and thus enhances actin filament severing [128]. The Rho GTPases are important regulators of actin dynamics, including stress fiber formation, and are involved in the regulation of ADF/cofilins via LIMK. RhoA activates Rho-associated coiled-coil forming kinase (ROCK) which can phosphorylate and activate LIMK. Thus, RhoA stabilizes the stress fibers and prevents depolymerization of actin filaments through the phosphorylation of cofilin, and Rho-ROCK-LIMK-cofilin pathway modulates actin assembly in various cell types in response to extracellular stimuli [76]. Epidermal growth factor (EGF) has been shown to influence cofilin through the LIMK pathway or phospholipase C-mediated hydrolysis of PIP2 and release of cofilin from membrane sequestering [129]. The mechanisms which include activation of cofilin and generation of free barbed ends for lamellipodial extension in response to EGF stimulation have been described mainly for migrating malignant cells [129,130]. However, the increase of cofilin-dependent severing activity after stimulation with EGF does not always correlate with the level of dephosphorylated cofilin [131], indicating a more complex regulatory mechanism than previously thought. The cellular redox state may play an important role in regulating ADF/cofilins. This regulation is performed by oxidative post-translational modifications of Cys residues including S-glutathionylation [132], disulfide bonds [133], and S-nitrosylation [134]. Redox-related modifications influence cofilin activity and signaling pathways with its participation. Cofilin is found to be a target of oxidation under oxidative stress in T cells. Cofilin oxidation leads to formation of intramolecular disulfide bonds and to dephosphorylation at Ser3. Although dephosphorylated oxidized cofilin is still able to bind to F-actin, it cannot perform actin depolymerizing function, and the F-actin level increases [133]. Instead, oxidized cofilin acquires the ability to translocate actin to the mitochondria, where it induces cytochrome c release by opening of the permeability transition pore. As a result, mitochondrial damage and apoptosis are induced [84]. Thus, the cellular microenvironment (namely pH, phosphoinositides and proteins including enzymes) can essentially influence cofilin functions. 
The other members of the ADF/cofilin superfamily have been shown to share some of these aspects of regulation. However, there are few available data addressing possible mechanisms of their regulation. Twinfilins have been demonstrated to promote filament severing in a pH-dependent manner. As opposed to ADF/cofilins, TWF-1 severs actin filaments in vitro at pH below 6.0 [135]. Twinfilins can bind PIP2 similarly to ADF/cofilins, and this interaction down-regulates the actin binding, filament severing, and actin monomer sequestering activities [91,92,135]. TWF-1 and TWF-2 bind to capping protein (CP), which has been shown to inhibit directly the severing activity of TWF-1 [135]. The small GTPases Ras-related C3 botulinum toxin substrate 1 (Rac1) and cell division control protein 42 homolog (Cdc42) induce the localization of TWF-1 to membrane ruffles and cell-cell contacts, but do not affect the localization of TWF-2 [91]. Drebrin phosphorylation by cyclin-dependent kinase 5 (Cdk5) regulates cytoskeletal reorganization associated with neuronal migration. Drebrin E can be phosphorylated on Ser142, and drebrin A on Ser142 or Ser342 [136]. Localization of drebrin to the distal part of axonal filopodia and branching in drebrin overexpressing neurons are negatively regulated by myosin II [137]. Likewise ADF/cofilins, GMF-family proteins have been shown to be regulated by phosphorylation. GMF-G phosphorylation at Tyr104 by Abelson tyrosine-protein kinase 1 leads to the dissociation of GMF-G from Arp2/3, reduction of actin disassembly and facilitation of smooth muscle contraction [138]. The subfamily of the Rho GTPases, Rac, is involved in regulation of coactosin activity. In response to Rac signaling, coactosin is recruited to lamellipodia and filopodia, promoting actin polymerization and neural crest cell migration [106]. As a whole, ADF/cofilin superfamily proteins play a multifaceted role in cells. Since they are involved in proliferation and migration of mammalian cells, they can also be implicated in various pathological processes, including tumor growth, invasion, and metastasis. The study of the possible contribution of these proteins to malignant phenotype of cancer cells is an important task of molecular oncology. ADF/Cofilins To our knowledge, the first report on detection of Cfl-1 protein in HMCs was published by Stierum et al. [139]. Using proteomic technologies (two-dimensional electrophoresis (2-DE) and mass-spectrometric identification) the authors revealed that Cfl-1 was involved in processes of cell differentiation in colorectal adenocarcinoma (Caco-2) cell line. Later, Cfl-1 was identified in different tumor cell lines and tissues including adenocarcinomas [15,[140][141][142][143], osteosarcoma [144], lymphoid tissue neoplasms [145], astrocytoma [146], glioma [147], and neuroblastoma [148]. Accordingly, it is possible to think that Cfl-1 is a common participant in various tumor phenotypes. In particular, the results of identification of Cfl-1 in various HMCs are presented in the multi-level information database "Proteomics of malignant cells" [21]. These results for Cfl-1 in several carcinomas and sarcomas cell lines are shown in Figure 4. Cfl-1 is present on 2-DE gels in high quantity (since it is detected by routine Comassie R-250 staining) and can be attributed to 200 of the most abundant proteins of HMCs. 
The increased mRNA and protein levels of Cfl-1 in comparison with control (nonmalignant) cells have been shown in various HMCs, including those from breast [140], lung [142], prostate [149], etc. Overexpression of Cfl-1 has been mainly associated with tumor cell proliferation, invasion, and metastasis [14,140,150,151]. It has also been suggested that dephosphorylated, active cofilin is increased in HMCs [77,151]. However, there are a few opposing reports. For example, the overexpression of Cfl-1 suppressed growth and invasion of non-small cell lung cancer [152], and the phosphorylation of cofilin was elevated in bladder cancer samples compared with the normal bladder tissues [153]. Many authors have considered Cfl-1 protein as a diagnostic/prognostic tumor biomarker [145,154,155]. Zheng et al. found a reliable increase of Cfl-1 in blood samples obtained from patients with lung adenocarcinoma compared to healthy controls [156]. Cfl-1 can be a target for chemotherapeutic treatment. It has been shown that docetaxel induces the apoptosis of prostate cancer cells via suppression of the cofilin signaling pathways [157]. The increased level of Cfl-1 in HMCs is often associated with poor prognosis, which can be related to cofilin-dependent drug resistance of cancer cells [142,150,158]. Cfl-1 has been upregulated in multidrug resistant malignant cells compared with non-drug resistant malignant cells [142]. High Cfl-1 levels have been correlated with cisplatin resistance in lung adenocarcinomas [158]. Cfl-1 may serve as a predictor of poor response to platinum-based chemotherapy in human ovarian cancer cells [143] and in astrocytoma cells [146]. The molecular mechanisms of Cfl-1 involvement in the formation of the malignant phenotype of cancer cells are still being investigated.
In tumor cells, the actin dynamics and cell motility are initiated in response to stimuli in the microenvironment. EGF, as well as transforming growth factor-ฮฑ (TGFฮฑ), stromal cell-derived factor 1 (SDF1) and heregulin have been demonstrated to be involved in stimulation of cell migration and correlated with progression of various tumors [14]. Dephosphorylation and activation of Cfl-1 upon EGF stimulation increases F-actin-severing activity of cofilin and generation of free barbed ends that are required for lamellipodial extension and chemotaxis to EGF, leading to invasion and metastasis [129,159]. Thus, excess of dephosphorylated Cfl-1 may be implicated in malignant phenotype of cells. This concept has been supported by a number of authors [151,160,161]. Particularly, Nagai et al. showed that overexpression of non-phosphorylatable cofilin mutant (cofilin-S3A) in astrocytoma cells resulted in more highly invasive phenotype than those xenographs expressing wild-type cofilin [151]. Nuclear translocation of dephosphorylated Cfl-1 can also contribute to malignant phenotype of cells. Dephosphorylated Cfl-1 provides transport of G-actin to the nucleus. Nuclear actin can be involved in chromatin remodeling, transcription, RNA processing, intranuclear transport, nuclear export, and maintenance of the nuclear architecture [162]. Correspondingly, the gene expression changes during cancer progression can be mediated by Cfl-1 through actin transport. Another mechanism contributing to malignant phenotype of cells and related with Cfl-1 dephosphorylation and nuclear translocation was described by Samstag and colleagues [80,82]. In untransformed T lymphocytes, cofilin is part of a costimulatory pathway that is important for the induction of T-cell proliferation (i.e., for production of IL-2). In response to ligand attachment to accessory receptors like CD-2, cofilin undergoes dephosphorylation and nuclear translocation. In malignant T lymphoma cells, dephosphorylation and nuclear translocation of cofilin occur spontaneously through constitutive activation of serine protein phosphatase. These events lead to T-cell proliferation and inhibition of apoptosis [80,82]. Cofilin activation/inactivation are modulated by changes in balance of kinases, phosphatases and other cofilin upstream regulatory proteins. These changes are responsible for initiation of the early steps of cancer cell motility and metastasis [119]. SSH1 is the most well-studied cofilin phosphatase which has been found to be upregulated in various invasive cancer cells. Wang et al. have revealed that overexpression of slingshot-1L (SSH1L) in pancreatic cancer contributes to tumor cell migration [163]. This enzyme is activated by F-actin which is formed in high quantity during lamellipodial assembly in malignant cells [164]. Phosphorylation and inhibition of SSH1L by protein kinase D (PDK) suppress cancer cell migration [165]. The role of protein kinase LIMK1 in tumor invasion and metastasis is still under discussion, similarly to its role in cell migration. According to various authors, LIMK1 caused either a decrease [119,159] or an increase [122,166,167] in invasion and metastasis. In metastatic rat mammary adenocarcinoma cells, the expression of the kinase domain of LIMK1, resulting in the near total phosphorylation of cofilin, completely inhibited the appearance of barbed ends and lamellipodia protrusion in response to EGF stimulation [119]. 
Overexpression of LIMK1 suppressed EGF-induced membrane protrusion and locomotion in rat mammary carcinoma cells [159]. In contrast, the increased activity of LIMK1 led to human breast cancer progression [166]. The level and activity of endogenous LIMK1 were increased in invasive breast and prostate cancer cell lines in comparison with less invasive cells [167]. The knockdown of LIMK1 has suppressed chemokine-induced lamellipodium formation and migration of Jurkat T cells [122]. These data about the positive role of LIMK1 in tumor cell migration at first seem to contradict the mechanism of tumor progression related to Cfl-1 dephosphorylation. Thus, some researchers have suggested that LIMK1 may play a role in regulating tumor progression via other mechanisms, independent of cofilin. For example, Bagheri-Yarmand et al. proposed that LIMK1 increases tumor metastasis of human breast cancer cells through stimulation of the urokinase-type plasminogen activator system and degradation of the extracellular matrix by the serine protease urokinase-type plasminogen activator [168]. However, there is a body of evidence that LIMK1 can influence the metastatic phenotype of tumor cells via regulation of cofilin activity, and the controversial effects of LIMK1 expression on migration and metastasis of cancer cells require an explanation. Wang et al. suggested that LIMK1 expression alone does not determine the motility and invasion status of carcinoma cells, and that the collective activity and the output (barbed end production) of the LIMK1/cofilin pathways should be estimated [159]. Besides that, the contradictory results from different groups may be caused by the different cell types used in these studies. It has been shown that oncoproteins and tumor suppressor proteins affect the invasive and metastatic potential of tumors through cofilin-regulating pathways. One of the best-known oncoproteins, the tyrosine-protein kinase transforming protein of Rous sarcoma virus (v-Src), can disrupt the functioning of the Rho-ROCK-LIM kinase pathway, resulting in dephosphorylation of Cfl-1 and an increased level of active Cfl-1 [169]. The tumor suppressor protein phosphoinositide phosphatase and tensin homolog (PTEN) may inactivate cofilin in cancer cells, while loss of PTEN and activation of phosphoinositide 3-kinase (PI3K) caused differential activation of the cofilin regulators LIMK1 and SSH1L, and cofilin dephosphorylation, which promotes microtentacle formation and enhances metastatic risk [170]. In addition, it was shown that the tumor suppressor Ras association domain-containing protein 1 (RASSF1A) blocks tumor growth by stimulating cofilin/PP2A-mediated dephosphorylation [161]. Thus, the role of Cfl-1 as an important participant of various signaling pathways in HMCs requires further investigation. The contributions of Cfl-1 to the malignant phenotype are schematically presented in Figure 5. Obviously, the results of its study might be interesting in designing new approaches to early diagnostics and to rational treatment.
There are few publications about the presence of Cfl-2 in HMCs and cancer tissues. The muscle isoform of Cfl-2 (Cfl-2b) is considered a biomarker of muscle differentiation [171] and has been identified in high quantity in well-differentiated leiomyosarcomas compared to undifferentiated pleomorphic sarcomas [172]. The expression level of Cfl-2 has prognostic significance in primary leiomyosarcomas independent of the histopathological type of tumor, and its expression correlates with improved disease-specific survival [172]. Cfl-2 has also been identified in HMCs of non-muscle origin [173][174][175][176]. Cfl-2 has been overexpressed in aggressive breast cancer cell lines, and its expression has been correlated with tumor grade in primary breast cancer tissue [175]. Significant upregulation of Cfl-1 and downregulation of Cfl-2 has been observed in pancreatic adenocarcinomas compared to non-cancerous tissues [173]. Dstn is the third traditional member of the ADF/cofilin family that has also been identified in HMCs, mainly in different adenocarcinomas [6,18,143,177]. Like Cfl-1, Dstn can be a potential biomarker of resistance to platinum-based agents [143]. The structural and functional similarities of traditional cofilins (in particular, the ability to undergo phosphorylation-dephosphorylation on Ser3) suggest that Cfl-1, Cfl-2 and Dstn may also be involved in the same pathways. Overexpression of LIMK1, Cfl-1, and Cfl-2 has been associated with low expression of mitogen-activated protein kinase MAPK1 (which is involved in cell growth and proliferation), and with enhanced survival of the patients with glioblastoma multiforme [174].
Dstn, like Cfl-1, promotes tumor cell migration and invasiveness, but in general the activities of Dstn and Cfl-1 are non-overlapping [6,177]. Other Actin-Depolymerizing Factor/Cofilin Superfamily Proteins Data on the expression of twinfilins in HMCs were initially obtained using transcriptomic approaches. It has been shown that twinfilin might be a key determinant of lymphoma progression through regulation of actin dynamics. Moreover, twinfilin suppressed the action of the front-line chemotherapeutic agent vincristine in Eμ-myc lymphoma cells [178]. In prostate cancer cells, the osteoblast master transcription factor Runx2 is aberrantly expressed and promotes the metastatic phenotype through up-regulation of the twinfilin gene and other genes with cancer-associated functions [179]. TWF1 has been identified as a target of microRNA-206 (miR-206); microRNAs are fundamental post-transcriptional regulators that inhibit gene expression. Blocking TWF1 by miR-206 in human xenograft models of breast cancer can suppress tumor invasion and metastasis by inhibiting actin cytoskeleton dynamics [180]. Drebrins are considered brain-specific intracellular regulators of morphogenesis [36,181]. The first report on drebrin detection in HMCs was published by Asada et al., who detected drebrin (namely, drebrin E2) in cultured neuroblastoma cells [182]. Later, data on the presence of drebrins in non-neuronal tumor tissues, especially in gliomas and malignant epithelial tumors, were published. The level of this protein in glioma cell lines varies and is equivalent to or higher than in normal cells [183]. High expression of this protein in U87 glioma cells transfected with a drebrin expression construct increases invasiveness and promotes cell motility. On the contrary, knockdown of DBN1 in glioma cells by small interfering RNA (siRNA) decreases cell migration and invasiveness [183]. It has been demonstrated that basal cell carcinomas are rich in drebrin, while keratinocytes of normal epidermis contain almost no drebrin, and that drebrin has potential value in the diagnosis of basal cell carcinomas [98]. Drebrin has also been proposed as a potential biomarker for bladder cancer [184]. This protein can be considered a prognostic marker in patients with small cell lung cancer [185]. Proteomic analysis of colorectal cancer cell lines revealed drebrin to be overexpressed during liver metastasis [186]. The exact role of drebrin in epithelial tumor growth and in the formation of an invasive and metastatic cell phenotype is still unclear. In urothelial carcinoma cell lines, drebrin has been shown to be critical for progranulin-dependent activation of the Akt and MAPK pathways and to modulate motility, invasion and anchorage-independent tumor growth [184]. The drebrin-like protein (synonyms: mAbp1, HIP-55), known as the mammalian homologue of yeast Abp1, has been poorly studied in HMCs, and the available studies provide contradictory results. It was found that mAbp1 was upregulated or downregulated in several types of tumor tissues, with the highest expression shown in lung cancer tissues. This protein increased the viability and decreased the apoptosis of lung cancer A549 cells treated with the anticancer agent etoposide [187]. It has also been reported that mAbp1 interacts with the transcription regulator FHL-2 (four and a half LIM domains protein 2) and participates in negative regulation of Rho signaling and breast cancer cell invasion [188].
The GMF-B protein was initially characterized as a protein of vertebrate neural tissue that is able to affect the growth of normal and malignant glial cells in vitro and in vivo [41,42,189]. The molecular effects of GMF-B on HMCs of neuronal origin are diverse and have yielded contradictory results. In particular, this protein was shown to stimulate DNA synthesis and proliferation of glioma cells and of hybrid cells derived from glioma and neuroblastoma (NG108-15) cells, but had no effect on neuroblastoma cells [189]. In glioma cell lines of rodent and human origin, GMF-B promoted the initial growth of the cell lines but limited proliferation by contact inhibition at later stages [42]. In rat glioma cells transfected with GMF-B, enhanced expression of neurotrophic factors, including nuclear factor-κB, was detected [190]. These results suggest a cytoprotective role for endogenous GMF in glial cells. In parallel, GMF-B was demonstrated to cause glioma progression by promoting neovascularization [191]. Finally, it was found that induced overexpression of the GMF-B protein in neuroblastoma cells caused cytotoxicity and loss of viability via activation of glycogen synthase kinase-3β and caspase-3 [192]. GMF-B has also been found in non-brain tumors. Screening using retroviral expression libraries allowed detection of the GMF-B-encoding gene among genes involved in ovarian carcinogenesis [193]. The GMF-B protein was significantly overexpressed in serous ovarian carcinoma compared to normal epithelium, benign serous adenoma and borderline serous adenoma tissues, and high expression of GMF-B was associated with poor disease-free survival and overall survival [194]. There is only one report on the identification of GMF-G in HMCs. Recently, Zuo et al. showed that high GMF-G expression correlates with poor prognosis and promotes cell migration and invasion in epithelial ovarian cancer [195]. Human coactosin-like protein (COTL-1) has not been very actively studied in HMCs. The first reports on the identification of COTL-1 in HMCs were published by Nakatsura et al. [196]. COTL-1 was detected by the serological expression cloning method (SEREX) in human pancreatic adenocarcinoma cell lines among a number of other pancreatic cancer antigens. The authors assumed that peptides from COTL-1 might be appropriate vaccine candidates for peptide-based immunotherapy of prostate cancer patients [196]. Later, proteomic analysis of the PaCa44 pancreatic adenocarcinoma cell line treated with the chemotherapeutic agent 5-aza-2′-deoxycytidine (DAC) revealed a 22-fold decrease in COTL-1 expression along with silencing of cofilin and profilin 1 [197]. Subsequently, Oh et al., using 2-DE with mass spectrometric identification, identified COTL-1 as a differentiation-related cytoskeleton protein in neuroblastoma cells [198]. Hou et al. also reported detection of COTL-1 in N1E-115 neuroblastoma cells [106]. Moreover, COTL-1 may also be present in poorly differentiated cells, for example, in highly aggressive small cell lung cancer [199]. The comparison of small cell lung cancer tissues with normal bronchial epithelium showed more than 2-fold upregulation of this protein in cancer specimens. COTL-1 was immunohistochemically detected in 93% of small cell lung cancer tissue specimens and in only 16% of non-small cell lung cancer samples. On this basis, the authors suggested that COTL-1 may be a biomarker or a therapeutic target for patients with small cell lung cancer [199].
To sum up, despite their definite role in tumors, the mechanisms involving twinfilins, drebrin and drebrin-like protein, GMFs, and coactosin-like protein in the malignant phenotype are still unclear. Consequently, new studies are needed to clarify their roles in tumors. Conclusions Cofilin-1 is found in all vertebrates and in many other organisms and plays an essential role in actin filament dynamics and reorganization through severing actin filaments. This function of Cfl-1 is regulated by several mechanisms, including phosphorylation on Ser3. Active (dephosphorylated) Cfl-1, in addition to its main function, mediates the transport of G-actin and some other proteins to the nucleus, which is accompanied by changes in gene expression. Phospho-Cfl-1, considered by many authors to be inactive, has been found to have its own function, namely, direct activation of phospholipase D1. Thus, Cfl-1 can be considered a multifunctional protein involved in several signaling pathways regulating cell motility and development. HMCs of different origins contain Cfl-1 as one of their most abundant proteins. The expression level of Cfl-1 is often increased in HMCs, which underlines its contribution to the malignant phenotype. There are several mechanisms involving Cfl-1 in tumor proliferation, invasion, and metastasis, realized mainly through changes in the balance of kinases, phosphatases, and other proteins involved in cofilin-regulating pathways. The cofilin phosphatase SSH1 has been found to be upregulated in various invasive cancer cells. The cofilin kinase LIMK1 has also been shown to play a pivotal role in cell motility. However, some studies provide contradictory data concerning the influence of the expression level of LIMK1 on cell migration, invasion, and metastasis. The characteristic structural feature of Cfl-1 is the presence of an ADF-H domain. ADF-H domains have also been identified in a number of other proteins that can directly or indirectly interact with the actin cytoskeleton and drive its remodeling. These proteins, differing in size and functionality, are currently referred to as the ADF/cofilin superfamily. Almost all of these proteins are direct or indirect regulators of cell motility. In addition, drebrin, drebrin-like protein, and glia maturation factors are characterized as regulators of cellular differentiation. Therefore, all ADF/cofilin superfamily members can contribute to the malignant phenotypes of HMCs. However, available data on the functions and presence of many ADF/cofilin superfamily proteins in HMCs are still limited and conflicting. For instance, conflicting results were obtained concerning the role of mAbp1 and GMF-B in invasion and metastasis. The controversial data on the role of dephosphorylated Cfl-1, LIMK1, mAbp1, and GMF-B in cell motility, invasion and metastasis may have several possible causes, including the different cell types used in the studies, intratumoral cell heterogeneity, distinct functions of the studied proteins at different stages of development or tumor progression, cellular background, etc. All these factors should be taken into account, and the collective activity of cofilin-regulating pathways should be estimated for evaluation of the invasive and metastatic potential of HMCs.
With due consideration of these factors, further studies of ADF/cofilin superfamily proteins in HMCs can be a very promising research direction, which may extend the understanding of the molecular basis of tumor phenotypes and provide new protein targets for molecular and clinical oncology.
Limit laws for empirical optimal solutions in random linear programs We consider a general linear program in standard form whose right-hand side constraint vector is subject to random perturbations. For the corresponding random linear program, we characterize under general assumptions the random fluctuations of the empirical optimal solutions around their population quantities after standardization by a distributional limit theorem. Our approach is geometric in nature and further relies on duality and the collection of dual feasible basic solutions. The limiting random variables are driven by the amount of degeneracy inherent in linear programming. In particular, if the corresponding dual linear program is degenerate, the asymptotic limit law might not be unique and is determined by the way the empirical optimal solution is chosen. Furthermore, we include consistency and convergence rates of the Hausdorff distance between the empirical and the true optimality sets, as well as a limit law for the empirical optimal value involving the set of all dual optimal basic solutions. Our analysis is motivated by statistical optimal transport, which is of particular interest here, and distributional limit laws for empirical optimal transport plans follow by a simple application of our general theory. The corresponding limit distribution is usually non-Gaussian, which stands in strong contrast to recent findings for empirical entropy-regularized optimal transport solutions. Introduction Linear programs arise naturally in many applications and have become ubiquitous in topics such as operations research, control theory, economics, physics, mathematics and statistics (see the textbooks by Dantzig, 1963; Bertsimas & Tsitsiklis, 1997; Luenberger & Ye, 2008; Galichon, 2018 and the references therein). Their solid mathematical foundation dates back to the mid-twentieth century, to mention the seminal works of Kantorovich (1939), Hitchcock (1941), Dantzig (1948) and Koopmans (1949), and their algorithmic computation is an active topic of research until today. A linear program in standard form writes ν(b) := min c^T x subject to Ax = b, x ≥ 0 (P_b), with (A, b, c) ∈ R^{m×d} × R^m × R^d and matrix A of full rank m ≤ d. For the purpose of the paper, the lower subscript b in (P_b) emphasizes the dependence on the vector b. Associated with the primal program (P_b) is its corresponding dual program max b^T λ subject to A^T λ ≤ c (D_b). At the heart of linear programming and fundamental to our work is the observation that if the primal program (P_b) attains a finite value, the optimum is attained at one of a finite set of candidates termed basic solutions. Each basic solution (possibly infeasible) is identified by a basis I ⊂ {1, . . . , d} indexing m linearly independent columns of the constraint matrix A. The basis I also defines a basic solution for the dual (D_b). In fact, the simplex algorithm (Dantzig, 1948) is specifically designed to move from one primal feasible basic solution to another while checking whether the corresponding basis induces a dual feasible basic solution. Shortly after the first algorithmic approaches and theoretical results became available, the need to incorporate uncertainty in the parameters became apparent (see Dantzig, 1955; Beale, 1955; Ferguson and Dantzig, 1956 for early contributions). In fact, apart from its relevance to numerical stability issues, in many applications the parameters reflect practical needs (budget, prices or capacities) but are not available exactly. This has opened a wealth of approaches to account for randomness in linear programs.
Common to all formulations is their general assumption that some parameters in (P b ) are random and follow a known probability distribution. Important contributions in this regard are chance constrained linear programs, two-and multiple-stage programming as well as the theory of stochastic linear programs (see Shapiro et al., 2021 for a general overview). Specifically relevant to this paper is the so-called distribution problem characterizing the distribution of the random variable ฮฝ(X ), where the right-hand side b (and possibly A and c) in (P b ) is replaced by a random variable X following a specific law (Tintner, 1960;Prรฉkopa, 1966;Wets, 1980). In this paper, we take a related route and focus on statistical aspects of the standard linear program (P b ) if the right-hand side b is replaced by a consistent estimator b n indexed by n โˆˆ N, e.g., based on n observations. Different to aforementioned attempts, we only assume the random quantity r n (b n โˆ’ b) to converge weakly (denoted by D โˆ’ โ†’) to some limit law G as n tends to infinity 2 . Our main goal is to characterize the asymptotic distributional limit of the empirical optimal solution x (b n ) โˆˆ arg min Ax=b n , xโ‰ฅ0 c T x (1) around its population quantities after proper standardization. For the sake of exposition, suppose that x (b) in (1) is unique 3 . The main results in Theorem 3.1 and Theorem 3.3 state that under suitable assumptions on (P b ) it holds, as n tends to infinity, that where M : R m โ†’ R d is given in Theorem 3.1. The function M in (2) is possibly random, and its explicit form is driven by the amount of degeneracy present in the primal and dual optimal solutions. The simplest case occurs if x (b) is non-degenerate. The function M is then a linear transformation depending on the corresponding unique optimal basis, so that the limit law M(G) is Gaussian if G is Gaussian. If x (b) is degenerate but all dual optimal (basic) solutions for (D b ) are non-degenerate, then M is a sum of deterministic linear transformations defined on closed and convex cones indexed by the collection of dual optimal bases. Specifically, the number of summands in M is equal to the number of dual optimal basic solutions for (D b ). A more complicated situation arises if both x (b) and some dual optimal basic solutions are degenerate. In this case, the function M is still a sum of linear transformations defined on closed and convex cones, but these transformations are potentially random and indexed by certain subsets of the set of optimal bases. The latter setting reflects the complex geometric and combinatorial nature in linear programs under degeneracy. Let us mention at once that limiting distributions for the empirical optimal solution in the form of (2) have been studied for a long time in a more general setting of (potentially) nonlinear optimisation problems; see for example Dupaฤovรก (1987), Dupaฤovรก & Wets (1988), Shapiro (1991), Shapiro (1993), Shapiro (2000), King & Rockafellar (1993). Regularity assumptions such as strong convexity of the objective function near the (unique) optimizer allow for either explicit asymptotic expansions of optimal values and optimal solutions or applications of implicit function theorems and generalizations thereof. These conditions usually do not hold for the linear programs considered in this paper. 
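The display referred to as (2) in the passage above appears to have been lost during text extraction. Based on the surrounding description (the empirical optimal solution from (1), standardization by r_n, and a possibly random map M applied to the weak limit G of r_n(b_n − b)), it presumably has the form

```latex
r_n\bigl(x^\star(b_n)-x^\star(b)\bigr)\;\xrightarrow{\;D\;}\;M(G),\qquad n\to\infty. \tag{2}
```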
To the best of our knowledge, our results are the first that cover limit laws for empirical optimal solutions to standard linear programs even beyond the non-degenerate case and without assuming uniqueness of optimizers. However, our proof technique relies on wellknown concepts from parametric optimization and sensitivity analysis for linear programs (Guddat et al., 1974;Greenberg, 1986;Ward & Wendell, 1990;Hadigheh & Terlaky, 2006). Indeed, our approach is based on a careful study of the collection of dual optimal bases. An early contribution in this regard is the basis decomposition theorem by Walkup & Wets, (1969a) analyzing the behavior of ฮฝ(b) in (P b ) as a function of b (see also Remark 3.2). Each dual feasible basis defines a so called decision region over which the optimal value ฮฝ(b) is linear. The integration over the collection of all these regions yields closed form expressions for the distribution problem (Bereanu, 1963;Ewbank et al., 1974). Further, related stability results are also found in the work by Walkup & Wets (1967), Bรถhm (1975), Bereanu (1976) and Robinson (1977). In algebraic geometry, decision regions are closely related to cone-triangulations of the primal feasible optimization region (Sturmfels & Thomas, 1997;De Loera et al., 2010). We emphasize that rather than working with decision regions directly, our analysis is tailored to cones of feasible perturbations. In particular, we are interested in regions capturing feasible directions as our problem settings is based on the random . These regions turn out to be closed, convex cones and appear as indicator functions in the (random) function M in (2). Our proof technique allows to recover some related known results for random linear programs (see Sect. 3). These include convergence of the optimality sets in Hausdorff distance (Proposition 3.7), and a limit law for the optimal value as n tends to infinity. Indeed, (3) is a simple consequence of general results in constrained optimization (Shapiro, 2000;Bonnans & Shapiro, 2000), and the optimality set convergence follows from Walkup & Wets (1969b). Our statistical analysis for random linear programs in standard form is motivated by recent findings in statistical optimal transport (OT). More precisely, while there exists a thorough theory for limit laws on empirical OT costs on discrete spaces (Sommerfeld & Munk, 2018;Tameling et al., 2019), related statements for their empirical OT solutions remain open. An exception is Klatt et al. (2020), who provide limit laws for empirical (entropy) regularized OT solutions, thus modifying the underlying linear program to be strictly convex, non-linear and most importantly non-degenerate in the sense that every regularized OT solution is strictly positive in each coordinate. Hence, an implicit function theorem approach in conjunction with a delta method allows to conclude for Gaussian limits in this case. This stands in stark contrast to the non-regularized OT considered in this paper, where the degenerate case is generic rather than the exception for most practical situations. Only if the OT solution is unique and non-degenerate, then we observe a Gaussian fluctuation on the support set, i.e., on all entries with positive values. If the OT solution is degenerate (or not unique), then the asymptotic limit law (2) is usually not Gaussian anymore. Degeneracy in OT easily occurs as soon as certain subsets of demand and supply sum up to the same quantity. 
In particular, we encounter the largest degree of degeneracy if individual demand is equal to individual supply. Additionally, we obtain necessary and sufficient conditions on the cost function in order for the dual OT to be non-degenerate. These may be of independent interest, and allow to prove almost sure uniqueness results for quite general cost functions. Our distributional results can be viewed as a basis for uncertainty quantification and other statistical inference procedures concerning solutions to linear programs. For brevity, we mention such applications in passing and do not elaborate further on them, leaving a detailed study of statistical consequences such as testing or confidence statements as an important avenue for further research. The outline of the paper is as follows. We recap basics for linear programming in Sect. 2 also introducing deterministic and stochastic assumptions for our general theory. Our main results are summarized in Sect. 3, followed by their proofs in Sect. 4. The assumptions are discussed in more detail in Sect. 5. Section 6 focuses on OT and gives limit laws for empirical OT solutions. Preliminaries and assumptions This section introduces notation and assumptions required to state the main results of the paper. Along the way, we recall basic facts of linear programming and refer to Bertsimas & Tsitsiklis, (1997) and Luenberger & Ye, (2008) for details. Linear Programs and Duality. Let the columns of a matrix A โˆˆ R mร—d be enumerated by the set [d] := {1, . . . , d}. Consider for a subset I โŠ† [d] the sub-matrix A I โˆˆ R mร—|I | formed by the corresponding columns indexed by I . Similarly, x I โˆˆ R |I | denotes the coordinates of x โˆˆ R d corresponding to I . By full rank of A in (D b ), there always exists an index set I with cardinality m such that A I โˆˆ R mร—m is one-to-one. An index set I with that property is termed basis and induces a primal and dual basic solution respectively. Herein, and in order to match dimensions (a solution for (P b ) has dimension d instead of m โ‰ค d) the linear operator Aug I : R m โ†’ R d augments zeroes in the coordinates that are not in I . If ฮป(I ) (resp. x(I , b)) is feasible for (D b ) (resp. (P b )) then it constitutes a dual (resp. primal) feasible basic solution with dual (resp. primal) feasible basis I . Moreover, ฮป(I ) (resp. x(I , b)) is termed dual (resp. primal) optimal basic solution if it is feasible and optimal for (D b ) (resp. (P b )). Indeed, as long as (D b ) admits a feasible (optimal) solution then there exists a dual feasible (optimal) basic solution and vice versa for (P b ). At the heart of linear programming is the strong duality statement. and ฮป(I ) are primal and dual optimal basic solutions, respectively. We introduce the feasibility and optimality set for the primal (P b ) by respectively. Notably, in our theory to follow A and c are generally assumed to be fixed and only the dependence of these sets with respect to parameter b is emphasized. We introduce our first assumption: The set P (b) is non-empty and bounded. In view of the strong duality statement in Fact 2.1, solving a linear program might be carried out focusing on the collection of all dual feasible bases. We partition this collection into two subsets depending on their feasibility for the primal program. Remark 2.2 (Splitting of the Bases Collection) Let I 1 , . . . 
, I N enumerate all dual feasible bases, and let 1 โ‰ค K โ‰ค N be such that Notably, by Fact 2.1 the primal basic solution x(I k , b) is optimal for all k โ‰ค K . Recall that the convex hull C (x 1 , . . . , x K ) of a collection of points {x 1 , . . . , x K } โŠ‚ R d is the set of all possible convex combinations of them. Fact 2.3 Consider the primal linear program (P b ) and assume (A1) holds. Then for any right hand sideb โˆˆ R m either one of the following statements is correct. (i) The feasible set P b is empty. (ii) The optimality set P b is non-empty, bounded and equal to the convex hull The restriction of the convex hull to basic solutions induced by primal and dual optimal bases in Fact 2.3 is well-known. A straightforward argument is based on the simplex method that if set up with appropriate pivoting rules always terminates. If (A1) holds and there exists a unique basis (K = 1) then the primal program attains a unique solution. Uniqueness of solutions to linear programs is related to degeneracy of corresponding dual solutions. (i) If (P b ) (resp. (D b )) has a non-degenerate optimal basic solution, then (D b ) (resp. (P b )) has a unique solution. (ii) If (P b ) (resp. (D b )) has a unique non-degenerate optimal basic solution, then (D b ) (resp. (P b )) has a unique non-degenerate optimal solution. (iii) If (P b ) (resp. (D b )) has a unique degenerate optimal basic solution, then (D b ) (resp. (P b )) has multiple solutions. For a proof of Fact 2.4, we refer to Gal & Greenberg (2012)[Lemma 6.2] in combination with strict complementary slackness from Goldman & Tucker (1956)[Corollary 2A] stating that for feasible primal and dual linear program there exists a pair (x, ฮป) of primal and dual optimal solution such that either x j > 0 or ฮป T A j < c j for all 1 โ‰ค j โ‰ค d. In addition to uniqueness statements, many results in linear programming simplify when degeneracy is excluded. Related to degeneracy but slightly weaker is the assumption Indeed, if P (b) is non-empty and bounded assumption (A2) characterizes non-degeneracy of all dual basic solution. Lemma 2.5 Suppose assumption (A1) holds. Then assumption (A2) is equivalent to nondegeneracy of all dual optimal basic solutions. To see that (A1) is necessary, let D m โˆˆ R mร—m with m โ‰ฅ 2 be the identity matrix. Suppose Then there are K = 2 mโˆ’1 optimal bases defining K distinct (degenerate) dual solutions, so that assumption (A2) holds but dual degeneracy fails. Note that P (b) is unbounded and contains the optimal ray (b T , b T ). Random Linear Programs. Introducing randomness in problems (P b ) and (D b ), we suppose to have incomplete knowledge of b โˆˆ R m , and replace it by a (consistent) estimator b n , e.g., based on a sample of size n independently drawn from a distribution with mean b. This defines empirical primal and dual counterparts (P b ) and (D b ), respectively. We allow the more general case that only the first m 0 โˆˆ {0, . . . , m} coordinates of b are unknown 4 and assume the existence of a sequence of random vectors n โ†’ 0 as n tends to infinity where D โˆ’ โ†’ denotes convergence is distribution. In a typical central limit theorem type scenario, r n = โˆš n and G m 0 is a centred Gaussian random vector in R m 0 , assumed to have a non-singular covariance matrix. Assumption (B1) implies that b n โ†’ b in probability. In order to avoid pathological cases, we impose the last assumption that asymptotically an optimal Further discussions on the assumptions are deferred to Sect. 5. 
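The displays defining the primal and dual basic solutions in the preliminaries above also seem to have been dropped in extraction. As a reconstruction consistent with the notation used in the text (basis I, sub-matrix A_I, augmentation operator Aug_I), the basic solutions associated with a basis I are presumably the standard expressions

```latex
x(I,b)\;=\;\mathrm{Aug}_I\!\bigl(A_I^{-1}b\bigr)\in\mathbb{R}^{d},
\qquad
\lambda(I)\;=\;\bigl(A_I^{\top}\bigr)^{-1}c_I\in\mathbb{R}^{m}.
```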
Main results According to Fact 2.3, in presence of (A1) any optimal solution where K is a non-empty subset of [N ] := {1, . . . , N } and ฮฑ K n โˆˆ R N is a random vector in the (essentially |K|-dimensional) unit simplex K := ฮฑ โˆˆ R N + | ฮฑ 1 = 1, ฮฑ k = 0 โˆ€k / โˆˆ K . The main result of the paper states the following asymptotic behaviour for the empirical optimal solution. Theorem 3.1 Suppose assumptions (A1), (B1), and (B2) hold, and let x (b n ) โˆˆ P (b n ) be any (measurable) choice of an optimal solution. Further, assume that for all non-empty K โŠ† [K ], the random vectors ฮฑ K n , G n converge jointly in distribution as n tends to infinity to (ฮฑ K , G) on |K| ร— R m . Then there exist closed convex cones H m 0 1 , . . . , where the sum runs over non-empty subsets K of [K ] and H m 0 Remark 3.2 Underlying Theorem 3.1 is the well-known approach of partitioning R m into (closed convex) cones. Indeed, the union of the closed convex cones is the feasibility set A + := {Ax : x โ‰ฅ 0} โŠ† R m and on each cone the optimal solution is an affine function of b (e.g., Walkup & Wets, 1969a;Guddat et al., 1974, ). The cones H k depend only on A and c. In contrast, our cones H m 0 k also depend on b and define directions of perturbations of b that keep ฮป(I k ) optimal for the perturbed problem for a given k โ‰ค K . Assume for simplicity that m 0 = m and write H k instead of H m 0 k . If b = 0, then K = N and cones coincide H k = H k , but otherwise H k is a strict super-set of H k as the corresponding representation (7) of H k requires non-negativity on all coordinates. This is also in line with the observation that there are fewer (K ) cones H k than there are H k , namely N , and the union of the H k 's is a space that is at least as large as As an extreme example, suppose that (P b ) has a unique non-degenerate optimal solution x(I 1 , b). Then K = 1 and H 1 = R m but the H k 's are strict subsets of R m unless N = 1. In Sect. 5, we discuss sufficient conditions for the joint distributional convergence of the random vector ฮฑ K n , G n . In short, if we use any linear program solver, such joint distributional convergence appears to be reasonable. If the optimal basis is unique (K = 1) with is non-degenerate, and the proof shows that H m 0 k = R m 0 . The distributional limit theorem then takes the simple form In general, when K > 1, the number of summands in the limiting random variable in Theorem 3.1 might grow exponentially in K . In between these two cases is the situation that assumption (A2) holds, which implies all dual optimal basic solutions for (D b ) are nondegenerate (see Lemma 2.5). The limiting random variable then simplifies, as the subsets K must be singletons. Theorem 3.3 Suppose assumptions (A1), (A2), and (B2) hold, and that r n with the closed and convex cones H m 0 k as given in Theorem 3.1. Remark 3.4 With respect to Theorem 3.1, assumption (B1) is weakened in Theorem 3.3 as absolute continuity of G (or G m 0 ) is not required. Indeed, it can be arbitrary, and Theorem 3.3 thus accommodates, e.g., Poisson limit distributions. The proof shows that if G is absolutely continuous (i.e., m 0 = m) then the indicator functions of G โˆˆ H m k \ โˆช j<k H m j simplify to G โˆˆ H m k , because intersections H m k โˆฉ H m j have Lebesgue measure zero. The distributional limit theorem then reads as If the optimal solution of the limiting problem is unique, Theorem 3.1 can be formulated in a set-wise sense. 
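Two further displays referenced around this point appear to have been lost in extraction; the following reconstructions rely only on the surrounding prose and on standard definitions, so they should be read as plausible restatements rather than verbatim quotes. First, when the optimal basis I_1 is unique and non-degenerate, x*(b) = Aug_{I_1}(A_{I_1}^{-1} b) is linear in b, so the limit theorem presumably takes the simple form below (ignoring the refinement to the partially known coordinates m_0). Second, the Hausdorff distance used in the set-wise statement that follows is the standard one.

```latex
r_n\bigl(x^\star(b_n)-x^\star(b)\bigr)\;\xrightarrow{\;D\;}\;\mathrm{Aug}_{I_1}\!\bigl(A_{I_1}^{-1}G\bigr),
\qquad
d_H(A,B)\;=\;\max\Bigl\{\sup_{a\in A}\inf_{b\in B}\|a-b\|,\;\sup_{b\in B}\inf_{a\in A}\|a-b\|\Bigr\}.
```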
The Hausdorff distance between two closed nonempty sets A, B โŠ† R d is The collection of closed subsets of R d equipped with d H is a metric space (with possibly infinite distance) and convergence in distribution is defined as usual by integrals of continuous real-valued bounded functions; see for example King (1989), where the delta method is developed in this context. Recall that C stands for convex hull. Theorem 3.5 Suppose assumptions (A1), (B1), and (B2) hold, and that where H m 0 k and H m 0 K are as defined in Theorem 3.1. We conclude this section by giving two further consequences of our proof techniques: a limit law for the objective value ฮฝ(b) for (P b ), and convergence in probability of optimality sets. Since the former is well-known and holds in more general, infinite-dimensional convex programs, we omit the proof details and instead refer to Shapiro (2000), Bonnans & Shapiro (2000) and results by Sommerfeld & Munk (2018) Another consequence of our bases driven approach underlying the proof of Theorem 3.1 is that the convergence of the Hausdorff distance A different and considerably shorter argument relies on Walkup & Wets (1969b) and proves the following result. Proposition 3.7 Suppose assumptions (A1) and (B2) hold. If b n We also refer to the work by Robinson (1977) for a similar result when the primal and dual optimality sets are both bounded. Proofs for the main results To simplify the notation, we assume that all random vectors in the paper are defined on a common generic probability space ( , F , P). This is no loss of generality by the Skorokhod representation theorem. Preliminary steps. Recall from Remark 2.2 that bases I 1 , . . . , I K are feasible for (P b ) and (D b ) and hence optimal. The bases I K +1 , . . . , I N are only feasible for (D b ) but not for (P b ). For a set K โŠ† [N ] define the events, i.e., subsets of the underlying probability space By strong duality (Fact 2.1 (ii)), the set A K n is the event that the bases indexed by K are precisely those that are optimal for (P b ) and (D b ). We have A K n โŠ† B K n , and B K n โŠ† B {k} n for all k โˆˆ K. We start with two important observations, the first stating that only subsets of [K ] asymptotically matter. (ii) If assumptions (A1) and (B2) hold, then with high probability P * (b n ) is bounded and non-empty. The same inequality holds for b n if sufficiently close to b, which happens with high probability. For (ii), non-emptiness with high probability follows from assumption (B2), so we only prove boundedness. Indeed, assumption (A1) implies that the recession cone {x โ‰ฅ 0 | Ax = 0, c T x = 0} is trivial and equals {0}. This property does not depend on b n , which yields the result. The event A โˆ… n is equivalent to (P b ) being either infeasible or unbounded, and this has probability o(1) by (B2). Combining this with the previous lemma and the sets (A K n ) K forming a partition of the probability space , we deduce where 1 A (ฯ‰) denotes the usual indicator function of the set A. Defining the random vector We next investigate the indicator functions 1 A K n (ฯ‰) appearing in (6). Omitting the dependence of b n on ฯ‰, we rewrite At the last internal intersection in the above display we can, with high probability, restrict to those i in the primal degeneracy set DP k := {i โˆˆ I k | x i (I k , b) = 0}. Indeed, for i / โˆˆ I k , the inequality reads 0 โ‰ฅ 0, whereas for i โˆˆ I k \ DP k the right-hand side goes to โˆ’โˆž and the lefthand side is bounded in probability. 
In other words P( For where the union over k > K can be neglected by Lemma 4.1. Thus we conclude that With these preliminary statements at our disposal, we are ready to prove the main results. Proof (Theorem 3.1) The goal is to replace G m 0 n by G m 0 in the indicator function in (8) at the limit as n tends to infinity. By the Portmanteau theorem ( Billingsley, 1999, Theorem 2.1) and elementary arguments 6 it suffices to show that the m 0 -dimensional boundary of each H m 0 k has Lebesgue measure zero. This is indeed the case, as they are convex sets. Define the function T K : This function is continuous for all In particular, the continuity set is of full measure with respect to (ฮฑ K , G). As there are finitely many possible subsets K denoted by K 1 , . . . , K B , the function is continuous G-almost surely. The continuous mapping theorem together with the assumed joint distributional convergence of the random vector (ฮฑ K n , G n ) yield that which completes the proof of Theorem 3.1. Proof (Theorem 3.3) With high probability (A1) and (A2) hold for b n (by Lemma 4.1 for the former and trivially for the latter), which implies that P (b n ) is a singleton (Lemma 2.5 and Fact 2.4). Hence, regardless of the choice of ฮฑ K n , it holds that 1 A K n x (b n ) = x(I min K , b n ). In particular, we may assume without loss of generality that ฮฑ K n are deterministic and do not depend on n. Thus the joint convergence in Theorem 3.1 holds, and (6) simplifies to Since ฮป(I k ) and ฮป(I j ) are dual feasible, they must be optimal with respect to b + ฮทv. Thus it holds By (A2) the vector ฮป(I k ) โˆ’ ฮป(I j ) is nonzero and hence v is contained in its orthogonal complement, which indeed has Lebesgue measure zero. C(x(I K ), v) is Lipschitz since without loss of generality K = โˆ… and It follows that is a measurable random subset of R d . According to Fact 2.3 in presence of (A1) and the preceding computations by the continuous mapping theorem. Proof (Proposition 3.7) Let K = R d + and define the linear map ฯ„ : R d โ†’ R m+1 by ฯ„ (x) = (Ax, c t x). For each b such that the linear program is feasible, let v b โˆˆ R be the optimal objective value. If ฯ„ is injective, then the optimality sets are singletons and the result holds trivially. We thus assume that ฯ„ is not injective, and observe that Since K is a polyhedron and ฯ„ is neither identically zero (A has full rank) nor injective, we can apply the main theorem of Walkup & Wets (1969b). We obtain because the optimal values satisfy v b โˆ’ v b n = O P (r โˆ’1 n ) by Proposition 3.6. (iii) if the dual set N is non-empty and bounded then The following discussion on the assumptions is a consequence of Lemma 5.1. We first collect sufficient conditions for assumption (A1). Corollary 5.2 (Sufficiency for (A1)) The following statements hold. (i) If N is non-empty and P(b) is bounded for some b โˆˆ M then assumption (A1) holds for all b โˆˆ M. (ii) If N is non-empty, bounded and P (b) is bounded for some b โˆˆ R m then assumption (A1) holds for all b โˆˆ R m . Certainly, if P (b) = โˆ… then (A1) is equivalent to P (b) being bounded. The latter property is independent on b and equivalent to the set x โˆˆ R d | Ax = 0, x โ‰ฅ 0, c t x = 0 being empty. A sufficient condition for that is boundedness of P(b) that can be easily checked in certain settings. Lemma 5.3 (Sufficiency for P(b) bounded) Suppose that A has non-negative entries and no column of A equals 0 โˆˆ R m . Then P(b) is bounded (possibly infeasible) for all b โˆˆ R m . 
It is noteworthy that if the dual feasible set N is non-empty and bounded, then P (b) = โˆ… for all b โˆˆ R m , but P(b) is necessarily unbounded (Clark, 1961). Thus, N is unbounded under the conditions of Lemma 5.3. We emphasize that assumption (A2) is neither easy to verify nor expected to hold for most structured linear programs. Indeed, under (A1) assumption (A2) is equivalent to all dual basic solutions being non-degenerate (Lemma 2.5). However, degeneracy in linear programs is often the case rather the exception (Bertsimas & Tsitsiklis, 1997). Notably, if (A2) and (A1) are satisfied the set P (b) is singleton. The assumption (B1) has to be checked for each particular case and can usually be verified by an application of the central limit theorem (for a particular example see Sect. 6). Assumption (B2) is obviously necessary for the limiting distribution to exist. If the dual feasible set N is non-empty and bounded and (B1) holds then (B2) is always satisfied. A more refined statement is the following. In particular, if the dual feasible set N is non-empty and (B1) holds then both conditions (i) and (ii) are sufficient for (B2). Joint convergence. Our goal here is to state useful conditions such that the random vector ฮฑ K n , G n jointly converges 9 in distribution to some limit random variable ฮฑ K , G on the space |K| ร— R m . By assumption (B1), G n โ†’ G in distribution, and a necessary condition for the joint distributional convergence of (ฮฑ K n , G n ) is that ฮฑ K n has a distributional limit ฮฑ K . There is no reason to expect ฮฑ K n and G n to be independent, as discussed at the end of this section. We give a weaker condition than independence that is formulated in terms of the 8 The feasible set P(b 0 ) contains a positive element x โˆˆ (0, โˆž) d . 9 Recall that the ฮฑ K n represent random weights (summing up to one) for each optimal basis I k , k โˆˆ K for the case that A K n occurs, i.e., that several bases yield primal optimal solutions and hence any convex combination is also optimal. conditional distribution of ฮฑ K n given G n (or, equivalently, given b n = b + G n /r n ). These conditions are natural in the sense that if b n = g, then the choice of solution x (g), as encapsulated by the ฮฑ K n 's, is determined by the specific linear program solver in use. Treating conditional distributions rigorously requires some care and machinery. Let Z = Z K = |K| ร— R m and for ฯ• : We say that ฯ• is bounded Lipschitz if it belongs to BL(Z) = {ฯ• : Z โ†’ R | ฯ• BL โ‰ค 1}. The bounded Lipschitz metric is well-known to metrize convergence in distribution of (probability) measures on Z Dudley (1966) [Theorems 6 and 8]. According to the disintegration theorem (see Kallenberg, 1997[Theorem 5.4], Dudley, 2002[Section 10.2] or Chang & Pollard, 1997 for details), we may write the joint distribution of ฮฑ K n , b n as an integral of conditional distributions ฮผ K n,g that represent the distribution of ฮฑ K n given that b n = g. More precisely, g โ†’ ฮผ K n,g is measurable from R m to the metric space of probability measures on |K| with the bounded Lipschitz metric, so that for any ฯ• โˆˆ BL(Z) it holds that where ฯˆ n : R m โ†’ R is a measurable function. The joint distribution of (ฮฑ K n , G n ) is determined by the collection of expectations Our sufficient condition for joint convergence is given by the following lemma. 
It is noteworthy that the spaces R m and |K| can be replaced with arbitrary Polish spaces, and even more general spaces, as long as the disintegration theorem is valid. Lemma 5.5 Let {ฮผ K g } gโˆˆR m be a collection of probability measures on |K| such that the map g โ†’ ฮผ K g is continuous at G-almost any g, and suppose that ฮผ K n,g โ†’ ฮผ K g uniformly with respect to the bounded Lipschitz metric BL. Then (ฮฑ K n , G n ) converges in distribution to a random vector (ฮฑ K , G) satisfying for any continuous bounded function ฯ• โˆˆ BL(Z) (this determines the distribution of the random vector (ฮฑ K , G) completely). Moreover, if L denotes the distribution of a random vector, then the rate of convergence can be quantified as where L := sup g 1 =g 2 BL(ฮผ K g 1 , ฮผ K g 2 )/ (g 1 โˆ’ g 2 ) โˆˆ [0, โˆž]. The supremum with respect to g can be replaced by an essential supremum. The conditions of Lemma 5.5 (and hence the joint convergence in Theorem 3.1) will be satisfied in many practical situations. For example, given b n and an initial basis for the simplex method, its output is determined by the pivoting rule (for a general overview see Terlaky & Zhang, 1993 and references therein). Deterministic pivoting rules lead to degenerate conditional distributions of ฮฑ K n given b n = g, whereas random pivoting rules may lead to non-degenerate conditional distributions. In both cases these conditional distributions do not depend on n at all, but only on the input vector g. In particular, the uniform convergence in Lemma 5.5 is trivially fulfilled (the supremum is equal to zero). It is reasonable to assume that these conditional distributions depend continuously on g except for some boundary values that are contained in a lower-dimensional space (which will have measure zero under the absolutely continuous random vector G). On a finite space X = {x 1 , . . . , x N } equipped with some underlying cost c : X ร— X โ†’ R OT between two probability measures r , s โˆˆ N := {r โˆˆ R N | 1 T N r = 1, r i โ‰ฅ 0} is equal to the linear program where c i j = c(x i , x j ) and the set (r , s) denotes all non-negative matrices with row and column sum equal to r and s, respectively. OT comprises the challenge to find an optimal solution termed OT coupling ฯ€ (r , s) between r and s such that the integrated cost is minimal among all possible couplings. We denote by (r , s) the set of all OT couplings. The dual problem is In our context reflecting many practical situations (Tameling et al., 2021), the measures r and s are unknown and need to be estimated from data. To this end, we assume to have access to independent and identically distributed (i.i.d.) X -valued random variables X 1 , . . . , X n โˆผ r , where a reasonable proxy for the measure r is its empirical versionr n := 1 n n i=1 ฮด X i . As an illustration of our general theory, we focus on limit theorems that asymptotically (n โ†’ โˆž) characterize the fluctuations of an estimated coupling ฯ€ (r n , s) around ฯ€ (r , s). For the sake of readability, we focus primarily on the one sample case, where only r is replaced byr n but include a short account on the case that both measures are estimated. A few words regarding the assumptions from Sect. 2 in the OT context are in order. Assumption (A1) always holds, since (r , s) โŠ† [0, 1] N 2 is bounded and contains the independence coupling rs T . Assumption (A2) that according to Lemma 2.5 is equivalent to all dual feasible basic solutions for (DOT) being non-degenerate, however, does not always hold. 
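To make the discrete OT linear program described above concrete, here is a minimal numerical sketch (not code from the paper): it assembles the marginal constraints of (OT) for an illustrative three-point ground space, solves the program with scipy.optimize.linprog, and then re-solves it with a multinomial empirical marginal, i.e., the one-sample setting whose standardized fluctuations Theorem 6.1 characterizes. The support points, cost, marginals and sample size are arbitrary choices for illustration, and the solver's selection among possibly non-unique optimal vertices plays the role of the random weights discussed earlier.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

N = 3
x = np.array([0.0, 1.0, 2.5])                 # illustrative support points on the real line
C = np.abs(x[:, None] - x[None, :]) ** 2      # cost matrix c(x_i, x_j) = |x_i - x_j|^2
r = np.array([0.2, 0.5, 0.3])                 # source marginal
s = np.array([0.4, 0.4, 0.2])                 # target marginal


def ot_coupling(r, s, C):
    """Solve the primal OT program: minimize <C, pi> over couplings with marginals r and s."""
    N = len(r)
    row_sum = np.kron(np.eye(N), np.ones((1, N)))   # sum_j pi_ij = r_i
    col_sum = np.kron(np.ones((1, N)), np.eye(N))   # sum_i pi_ij = s_j
    A_eq = np.vstack([row_sum, col_sum])[:-1]       # drop one redundant constraint (rank 2N - 1)
    b_eq = np.concatenate([r, s])[:-1]
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.x.reshape(N, N)


pi_star = ot_coupling(r, s, C)        # a population coupling pi*(r, s); it need not be unique

n = 2000                              # one-sample setting: r replaced by an empirical measure
r_hat = rng.multinomial(n, r) / n
pi_hat = ot_coupling(r_hat, s, C)     # empirical coupling pi*(r_hat, s)

# sqrt(n)-standardized fluctuation, whose limit law Theorem 6.1 describes.
print(np.sqrt(n) * (pi_hat - pi_star))
```

Dropping one of the 2N marginal constraints in the sketch mirrors the rank-(2N − 1) reduction used in the text to obtain an absolutely continuous limit for the perturbed right-hand side.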
Sufficient conditions for (A2) to hold in OT are given in Subsect. 6.1. Concerning the probabilistic assumptions, we notice that (B2) always holds as for any (possibly random) pair of measures (r n , s) the set (r n , s) is non-empty and bounded. Assumption (B1) is easily verified by an application of the multivariate central limit theorem. Indeed, the multinomial process of empirical frequencies โˆš n(r โˆ’r n ) converges weakly to a centered Gaussian random vector G(r ) โˆผ N (0, (r )) with covariance Notably, (r ) is singular and G(r ) fails to be absolutely continuous with respect to Lebesgue measure. A slight modification allows to circumvent this issue. The constraint matrix in OT, has rank 2N โˆ’ 1. Letting r โ€  = r [N โˆ’1] โˆˆ R N โˆ’1 denote the first N โˆ’ 1 coordinates of r โˆˆ R N and A โ€  โˆˆ R (2N โˆ’1)ร—N 2 denote A with its N -th row removed, it holds that The limiting random variable for โˆš n(r โ€  โˆ’r โ€  n ), as n tends to infinity, is equal to G(r โ€  ) following an absolutely continuous distribution if and only if r โ€  > 0 and r โ€  1 < 1. Equivalently, r is in the relative interior of N (denoted ri( N )), i.e., 0 < r โˆˆ N . Under this condition (A1), (B1) and (B2) hold and from the main result in Theorem 3.1 we immediately deduce the limiting distribution of optimal OT couplings. Remark 6.2 The two sample case presents an additional challenge. By the multivariate central limit theorem we have for min(m, n) โ†’ โˆž and m n+m โ†’ ฮป โˆˆ (0, 1) that nm n + m with G 1 (r โ€  ) and G 2 (s) independent and the compound limit law following a centered Gaussian distribution with block diagonal covariance matrix, where the two blocks are given by (10), respectively. However, the limit law fails to be absolutely continuous. Nevertheless, the distributional limit theorem for OT couplings remains valid in this case and there exists a sequence ฯ€ n,m (r , s) โˆˆ (r , s) such that nm n + m ฯ€ (r n ,ล m ) โˆ’ ฯ€ n,m (r , s) We provide further details in Appendix 1. We emphasize that once a limit law for the OT coupling is available, one can derive limit laws for sufficiently smooth functionals thereof. As examples let us mention the OT curve (Klatt et al., 2020) andOT geodesics (McCann, 1997). The details are omitted for brevity and instead, we provide an illustration of the distributional limit theorem (Theorem 6.1). Example 6.3 We consider a ground space X = {x 1 < x 2 < x 3 } โŠ‚ R consisting of N = 3 points with cost c = (0, |x 1 โˆ’ x 2 |, |x 1 โˆ’ x 3 |, |x 2 โˆ’ x 1 |, 0, |x 2 โˆ’ x 3 |, |x 3 โˆ’ x 1 |, |x 3 โˆ’ x 2 |, 0) โˆˆ R 9 for which OT then reads as min ฯ€ โˆˆR 9 c T ฯ€ s.t. A โ€  ฯ€ = r โ€  s , ฯ€ โ‰ฅ 0 with constraint matrix A โ€  โˆˆ R 5ร—9 . A basis I is a subset of cardinality five out of the column index set {1, . . . , 9} such that (A โ€  ) I is of full rank. For OT it is convenient to think of a feasible solution in terms of a transport matrix ฯ€ โˆˆ R 3ร—3 with ฯ€ i j encoding mass transport from source i to destination j. For instance, the basis I = {1, 2, 3, 5, 9} corresponds to the transport scheme where each possible non-zero entry is marked by a star and specific values depend on the measures r and s. In particular, to basis I corresponds the (possibly infeasible) basic solution ฯ€(I , (r โ€  , s)) = (A โ€  ) โˆ’1 I (r โ€  , s) that we illustrate in terms of its transport scheme by where r = (r โ€  , 1 โˆ’ r โ€  1 ) โˆˆ R 3 and the second equality employs that r and s sum up to one. 
Obviously, ฯ€(I , (r โ€  , s)) is feasible if and only if s 2 โ‰ฅ r 2 and s 3 โ‰ฅ r 3 . Suppose that the Then all dual basic solutions are non-degenerate. In particular, (A2) holds and the optimal OT coupling is unique for any pair of measures r, s โˆˆ N . We are unaware of an explicit reference for condition (15) that is reminiscent to the wellknown cyclic monotonicity property (Rรผschendorf, 1996). Further, (15) can be thought of as dual to the condition of Klee & Witzgall (1968) that guarantees every primal basic solution to be non-degenerate. Notably, (15) is satisfied for OT on the real line with cost c(x, y) = |x โˆ’ y| p and measures with at least N = 3 support points if and only if p > 0 and p = 1. If the underlying space involves too many symmetries, such as a regular grid with cost defined by the underlying grid structure, it typically fails to hold. An alternative condition that ensures (A2) is the strict Monge condition that the cost c satisfies possibly after relabelling the indices (Dubuc et al., 1999). This translates to easily interpretable statements on the real line. Then assumption (A2) holds. The first statement follows by employing the Monge condition (see also McCann, 1999[Proposition A2] for an alternative approach). The second case is more delicate, and indeed, the description of the unique optimal solution is more complicated (see Appendix 1). In fact, in both cases the unique transport coupling can be computed by the Northwest corner algorithm (Hoffman, 1963). Typical costs covered by Lemma 6.5 are c(x, y) = |x โˆ’ y| p for any p > 0 with p = 1. Indeed, for p = 0 or p = 1, uniqueness often fails (see Remark 6.7). In a general linear program (P b ), the set of costs c for which (A2) fails to hold has Lebesgue measure zero (e.g., Bertsimas & Tsitsiklis, 1997, ). Here we provide a result in the same flavour for OT. Proposition 6.6 Let ฮผ and ฮฝ be absolutely continuous on R D , with D โ‰ฅ 2, and let c(x, y) = x โˆ’ y p q , where p โˆˆ R \ {0} and q โˆˆ (0, โˆž] are such that if p = 1 then q / โˆˆ {1, โˆž}. For probability vectors r , s โˆˆ N define the probability measures r (X) = N k=1 r k ฮด X k and s(Y) = N k=1 s k ฮด Y k with two independent collections of i.i.d. R D -valued random variables X 1 , . . . , X N โˆผ ฮผ and Y 1 , . . . , Y N โˆผ ฮฝ. Then (15) holds almost surely for the optimal transport (OT). In particular, with probability one for any r, s โˆˆ N and pair of marginals r (X) and s(Y), the corresponding optimal transport coupling is unique. A Omitted proofs Proof (Lemma 2.5) Dual non-degeneracy obviously implies (A2), so we only show the converse (in presence of (A1)). Suppose that ฮป(I j ) is degenerate 1 โ‰ค j โ‰ค K . Then the index set L of active constraints in the dual, i.e., the set of indices such that [ฮป T (I j )A] l = c l , is such that I j โŠ‚ L. Let Pos j โŠ† I j be the set of positive entries of the optimal primal basic solution x(I j , b). Then Since the columns of A I j form a basis of R m , each other column a z writes a z = iโˆˆPos y z i a i + sโˆˆI j \Pos y z s a s , z โˆˆ L \ I j . Suppose there exists some index z โˆˆ L \ I j and s โˆˆ I j \ Pos such that y z s = 0. Then we can define a new basis I := I j \ {s} โˆช {z} such that ฮป( I ) = ฮป(I j ) and as Pos j โŠ† I we conclude that x( I , b) = x(I j , b). This contradicts (A2). Hence y z s = 0 for all s โˆˆ I j \ Pos j in (18). Now suppose that y z i > 0 for some i โˆˆ Pos j , so that i 0 โˆˆ arg min i|y z i >0 x i y z i is well defined and the minimum is strictly positive. 
Expressing a i 0 as a linear combination of A {z}โˆชPos j \{i 0 } , we find that for some proper choice of x i . By definition of i 0 we find that x i are non-negative, so that I is a primal and dual optimal basis. Moreover, ฮป( I ) = ฮป(I j ) that again contradicts (A2). We deduce for the representation in (18) By definition w โ‰ฅ 0, Aw = 0. and c T w = 0, so that w = 0 is a primal optimal ray, in contradiction of (A1). In total we see that if any basis I j for 1 โ‰ค j โ‰ค K yields a degenerate dual basic solution we can modify basis I j to some I i with i = j and 1 โ‰ค i โ‰ค K such that ฮป(I j ) = ฮป(I j ). It is in principle possible that ฮป(I l ) is dual optimal K + 1 โ‰ค l โ‰ค N but x(I l , b) is not primal optimal. Let us show that this cannot happen under assumption (A2). Consider any optimal primal basic solution x(I j , b) for 1 โ‰ค j โ‰ค K and denote by Pos j its positivity set. Optimality of ฮป(I l ) implies that its active set L contains I l โˆช Pos j . As I l is not a primal optimal basis, it holds that Pos j I l , so that |L| > m and ฮป(I l ) is degenerate. But then we can modify basis I l to some primal and dual optimal basis I i for 1 โ‰ค i โ‰ค K such that ฮป(I i ) = ฮป(I l ) is degenerate, in contradiction with (A2). Hence, any optimal dual basic solution is non-degenerate and induced by some primal and dual optimal basis I . Proof (Lemma 5.5) We need to show that for any ฯ• โˆˆ BL(Z) we have that vanishes as n โ†’ โˆž. To bound the first term notice that for any fixed g it holds that Hence, we find E|ฯˆ n (G n ) โˆ’ ฯˆ(G n )| โ‰ค sup g BL(ฮผ K n,g , ฮผ K g ) that tends to zero by assumption. Notice that the supremum can be an essential supremum, i.e., taken on set of full measure with respect to both (G n ) and (G) instead of the whole of R m . For the second term observe that ฯˆ โˆž โ‰ค ฯ• โˆž and that Hence, we conclude that Dividing ฯˆ by its bounded Lipschitz norm, we find This completes the proof for the quantitative statement. Joint convergence still follows if g โ†’ ฮผ K g is only continuous G-almost surely (but not Lipschitz). In fact, ฯˆ is still continuous and bounded G-almost surely so that Eฯˆ(G n ) โ†’ Eฯˆ(G). Therefore, Eฯ•(ฮฑ K n , G n ) โ†’ Eฯ•(ฮฑ K , G) for all ฯ• โˆˆ BL(Z), which implies that (ฮฑ K n , G n ) โ†’ (ฮฑ K , G) in distribution. B Optimal transport Proof (Theorem 6.1, two-sample) The only part where absolute continuity of G = (G 1 (r โ€  ), G 2 (s)) was required is when showing that the boundaries of the cones defined in (7) have zero probability with respect to G. We shall show that this is still the case, despite the singularity of G 2 (s). The cones under consideration take the form is viewed as an N 2 -dimensional vector), and their boundaries satisfy It suffices to show that for any pair of sets R โŠ† {1, . . . , N โˆ’ 1} and S โŠ‚ {1, . . . , N } that are not both empty, the support could contain countably many intervals as long as there is "clear" starting point a 0 below; but M could be infinite. Proof There is nothing to prove if ฮผ = ฮฝ = 0, so we assume ฮผ = ฮฝ. It follows from the assumptions that there exists a finite sequence of M + 1 โ‰ฅ 3 real numbers โˆ’โˆž โ‰ค a 0 < a 1 < a 2 < a 3 < ยท ยท ยท < a M โ‰ค โˆž We now claim that in any optimal coupling ฯ€ between ฮผ and ฮฝ, the ฮผ-mass of [a 0 , a 1 ] must go to [a 1 , a ]. Indeed, suppose that a positive ฮผ-mass from [a 0 , a 1 ] goes strictly beyond a . Then some mass from the support of ฮผ but not in [a 0 , a 1 ] has to go to [a 1 , a ]. 
Such a coupling gives positive measure to the set for some > 0. Strict monotonicity of the cost function makes this sub-optimal, since this coupling entails sending mass from x 1 to y 1 and from x 2 to y 2 with x 1 < y 2 < min(x 2 , y 1 ), (see Gangbo & McCann, 1996[Theorem 2.3] for a rigorous proof). Hence the claim is proved. Let ฮผ 1 be the restriction of ฮผ to [a 0 , a 1 ] and ฮฝ 1 be the restriction of ฮฝ to [a 1 , a ] with mass m 0 , namely ฮฝ 1 (B) = ฮฝ(B) if B โŠ† [a 1 , a ), ฮฝ 1 ({a }) = m 0 โˆ’ ฮฝ ([a 1 , a )) and ฮฝ(B) = 0 if B โˆฉ [a 1 , a ] = โˆ…. By definition of a , ฮฝ 1 is a measure (i.e., ฮฝ 1 ({a }) โ‰ฅ 0) and ฮฝ 1 and ฮผ 1 have the same total mass m 0 . Each of these measures is supported on an interval and these intervals are (almost) disjoint. Strict concavity of the cost function entails that any optimal coupling between ฮผ 1 and ฮฝ 1 must be non-increasing (in a set-valued sense). Since there is only one such coupling, the coupling is unique. By the preceding paragraph and the above claim, we know that ฯ€ must be non-increasing from [a 0 , a 1 ] to [a 1 , a ], which determines ฯ€ uniquely on that part. After this transport is carried out, we are left with the measures ฮฝ โˆ’ ฮฝ 1 and ฮผ โˆ’ ฮผ 1 , where the latter is supported on one less interval, namely the interval [a 0 , a 1 ] disappears. If instead ฮผ 0 ([a 0 , a 1 ]) > ฮฝ([a 1 , a 2 ]), we can use the same construction with 1 , a 2 ]) then both the intervals [a 0 , a 1 ] and [a 1 , a 2 ] disappear when considering ฮผ โˆ’ ฮผ 1 and ฮฝ โˆ’ ฮฝ 1 . In all three cases we can continue inductively and construct ฯ€ in a unique way. Since there are finitely many intervals, the procedure is guaranteed to terminate. Thus ฯ€ is unique. on which f is defined can be partitioned into finitely many (less than 6 n D ) open connected components U 1 , . . . , U L according to the signs of x k โˆ’ y k , e i and x k โˆ’ y kโˆ’1 , e i . On each such component f |U i is analytic. It follows that P (X, Y) โˆˆ f โˆ’1 |U l ({0}) = 0 unless f |U l is identically zero Dang, (2015)[Lemma 1.2]. To exclude the latter possibility, consider for any point (x, y) โˆˆ U l and โˆˆ R the function f |U l ( ) = f |U l (x 1 + e i , x 2 , . . . , x n , y 1 , . . . , y n ) with derivative at = 0 given by where x i j denotes the jth entry of the ith vector. If this derivative is nonzero, then clearly f is not identically zero. If the derivative is zero then we shall show that there exists another point in U l for which this derivative is nonzero. Since U l is open, we can add ฮดe j to y n for small ฮด and any 1 โ‰ค j โ‰ค D. If p = q then, taking j = i (which is possible because D โ‰ฅ 2) only modifies the term x 1 โˆ’ y n in (20), and for small ฮด the derivative will not be zero. If p = q = 1 then the norms do not appear in (20) and taking j = i would yield a nonzero derivative. Hence, if p and q are not both equal to one, f is not identically zero on each piece U l , which is what we needed to prove. A similar idea works in case q = โˆž and p = 1. The argument only depends on the positions of the random support points of the probability measures r = n k=1 r k ฮด X k and s = n k=1 s k ฮด Y k and hence is uniform in their probability weights. Recall further Proposition 2.4 that if the dual problem admits a non-degenerate optimal solution the primal optimal solution is unique. We conclude that almost surely the optimal transport coupling is unique.
Remediation Technologies of Ash and Soil Contaminated by Dioxins and Relating Hazardous Compounds Dioxins are the common term describing polychlorinated dibenzo-p-dioxins (PCDDs) and polychlorinated dibenzofurans (PCDFs). Other important dioxin-related compounds are polychlorinated biphenyls (PCBs). These compounds are classified as the highly toxic polychlorinated aromatic compounds (PCAs) group. PCDD/Fs and PCBs have a number of particular individual members known as congeners. The PCDDs and PCDFs have 75 and 135 congeners, respectively. Only 7 of 75 PCDDs and 10 of 135 PCDFs congeners are regarded as toxic congeners. They have chlorine substitutions in at least 2, 3, 7 and 8 positions. The most toxic congener in PCDD/Fs is 2,3,7,8-tetrachlorodibenzo-p-dioxin (2,3,7,8-TCDD). For PCBs, 13 of the 209 congeners are also regarded to have dioxin-like toxicity. They are PCBs which have four or more chlorines with just one or no substitution in the ortho positions. These compounds show a flat configuration with two rings in the same plane and are called โ€œco-planar PCBsโ€. The properties of these compounds are semi-volatile, non-polar, chemically stable, lipophilic (lipid soluble) and hydrophobic (poorly water soluble). For PCBs, the properties also include non-flammability, resistance to oxidation and low electrical conductivity. With these properties, PCBs have been used in many applications such as dielectric fluid in power transformers and various lubricating fluids. It can be also pointed out that PCBs are generally contains PCDD/Fs as trace impurities. The structural formula and typical properties of the compounds are shown in Fig. 1 and Table 1. PCDD/Fs existed in the environment are originated from various โ€œprimaryโ€ and โ€œsecondaryโ€ sources (see Table 2). โ€œPrimaryโ€ source represents PCDD/Fs directly formed within the process. โ€œSecondaryโ€ source represents PCDD/Fs remobilized or recycled from the primary sources. The origin of secondary source of PCDD/Fs may be from single or multiple sources. By considering these sources, the following pathways of PCDD/Fs contamination into soil can be pointed out: (i) leakage from industrial waste ISIJ International, Vol. 40 (2000), No. 3, pp. 266โ€“274 (ii) distribution of chemicals and fertilizers containing PCDD/Fs as impurities into agricultural soil (iii) atmospheric deposition of small particles and aerosols containing PCDD/Fs mainly emitted from combustion/incineration processes Except for an accident, contamination from wastes of chemical industry using chlorine is unlikely to occur, because of regulations that control their waste disposal. 3) Contamination in agricultural soil may be caused mainly by pesticides containing a trace amount of PCDD/Fs, such as 2-4-5-trichlorophenoxyacetic acid or 2-4-5-T. 4) Such pesticides are also suspected to form PCDD/Fs due to photo-decomposition or microorganism. Because of its harmful effect, however, this type of pesticides has already been prohibited for all use. Besides that, sewage sludge which has been often used as fertilizer also contains PCDD/Fs. 5) Actually, it is pointed out that sewage sludge is a secondary source of PCDD/Fs to the environment. Its origins of the PCDD/Fs may be attributed to several different sources such as atmospheric deposition, transformation of chlorinated organic precursors, like chlorophenols, during treatment of wastewater and industrial dumps from textile, pulp and paper industries. 
In combustion/incineration processes, PCDD/Fs are thought to form at temperatures between 250-650ยฐC, mainly in the post combustion zone, except for the case of significantly incomplete combustion. The mechanisms of PCDD/Fs formation are usually classified into the following two main pathways [6][7][8][9] : (a) formation through small organic molecules (precursors), such as propene, benzene, phenol, chlorophenols, chlorobenzenes and PCBs during incomplete combustion with heterogeneous reactions and (b) direct formation from macromolecular carbon (called de novo synthesis), such as activated carbon, charcoal and residual carbon with the presence of organic or inorganic chlorine in the fly ash matrix. Deposition of PCDD/Fs onto soil occurs under both of wet and dry depositions. 10,11) In the wet deposition, PCDD/Fs associated with sub-micron particles and in vapor phase are removed by rain. In the dry deposition, however, PCDD/Fs associated with sub-micron particles deposit by gravitational settling. These atmospheric depositions contribute to the widespread contamination rather than only in industrial area. Historical accumulation of PCDD/Fs has been studied by using agricultural soils sampled before 1900, which had never been treated by pesticides. 12) This study represented a model of soil contamination in industrialized countries where the inputs are only from atmospheric deposition. PCDD/Fs concentration in soil from 1893 to 1986 increases about 3 times (31 to 92 ng/kg-soil). Some regulations have been established for waste incinerators in several countries, such as Germany, Sweden, Netherlands and Japan, to reduce PCDD/Fs emission below 0.1 ng-TEQ/m 3 (s.t.p.) (TEQ: Toxicity Equivalent Quantity). 13,14) However, it appears that PCDD/Fs contamination and accumulation through atmospheric deposition and other possible sources are so persistent. Therefore, it is urgently required to develop an efficient remediation technology, e.g., removal, neutralization and/or detoxification of the compound, to clean up the soil. The present paper gives an overview of the remediation technologies for ash and soil contaminated by PCDD/Fs and relating hazardous compounds. Remediation Technologies To clean up the ash and soil from PCDD/Fs and PCBs contamination, several technologies have been proposed. Some of them utilize separation or extraction operation followed by a specific treatment of the contaminants. The others utilize destruction or decomposition reaction of the contaminants. According to the dominant mechanisms in separation/extraction and destruction/decomposition processes, they are classified into biological, physical/chemical and thermal remediation. Biological Remediation (Bioremediation) Bioremediation utilizes microorganisms, e.g., bacteria, as well as fungi, e.g., white-rot fungi, to decompose the contaminant. The microorganisms or fungi grow in the contaminated area and utilize the contaminant as an energy source and food. Compared to the other remediation technologies, bioremediation has some inherent complexities. The combination control of oxygen, nutrient, soil condition and temperature is a key to enhance the decomposition. Microbial Remediation Historically, bioremediation using microorganism has been successfully used to remediate soils, sludges, and ground water contaminated by petroleum hydrocarbons, solvents, pesticides and other organic chemicals. 
For the remediation of PCDD/Fs and PCBs contaminated soil, however, there are many possible candidates of microorganisms, which are still in the experimental stage and needed to further study for the applications to actual conditions. The bioremediation technologies may be classified into two processes, i.e., biostimulation and bioaugmentation. Biostimulation utilizes microorganism, which originally inhabits the contaminated soil. The process stimulates the growth and metabolic activity of microorganism to degrade and transform the contaminant. It is conducted by injection of key nutrients, oxygen, vitamins, enzyme preparations and other chemicals into the soil. On the other hand, bioaugmentation is the process that introduces mixture of foreign microorganisms into soil at that site (in situ) or in a bioreactor (ex situ) to initiate and sustain their decomposition activities. Several bacteria have been intensively investigated for their degradation ability of PCBs such as Pseudomonas, Alcaligenes and Rhodococcus species. 15) These bacteria require oxygen environment (aerobic) to decompose PCBs. Another study for the bioremediation of PCBs contaminated soil has been undertaken by Fava et al. 16,17) Their study utilized native bacteria which inhabited in the PCBs contaminated waste site. The result showed that the remediation of PCBs contaminated soil was achieved in a bioreactor by the stimulation of native bacteria through addition of oxygen and biphenyls. Inoculation of the other bacteria, such as Pseudomonas and Alcaligenes species, together with native bacteria also enhances the bioremediation of PCBs. 16) Besides that, addition of cyclodextrin, i.e., chemical substances which increase water solubility of organic compounds, can also enhance aerobic biodegradation of PCBs by native bacteria. 17) Both studies give only qualitative results of depletion percentage of gas chromatogram peaks, as evidences for decomposition of PCBs. An integrated chemical-biological remediation process has been developed to degrade PCBs in soil and sediment system. 18) This treatment consists of the chemical pretreatment of soil using reagents (1 vol% H 2 O 2 and 1 mol/m 3 FeSO 4 ) followed by inoculation of Pseudomonas and Alcaligenes species. The degradation ratio of PCBs in the soil was 39 % after treatment for 500 h. Fungal Remediation White-rot fungi are organisms which can degrade lignin. Lignin is a very complex structure polymer which is found in woody plants. This complexity and irregularity of lignin make it resistant to absorption and degradation by intracellular enzymes. Because of low levels of key sources of carbon, nitrogen or sulfur nutrients, white-rot fungi produce enzyme into extra-cellular environment to degrade lignin. That is called lignin peroxidase. This mechanism also gives the fungi the ability to degrade various environmental contaminants. 19,20) In laboratory scale, Takada et al. successfully investigated white-rot fungi for remediation of PCDD/Fs. 21) Two species of white-rot fungi, i.e., Phanerochaete sordida YK-624 and Phanerochaete chrysosporium, were compared under stationary low-nitrogen medium to improve their growth. The degradation by Phanerochaet sordida YK-624 ranged from 40 to 76 % for PCDDs and from 45 to 70 % for PCDFs. Physical/chemical Remediation In the physical/chemical remediation technologies, treatment of contaminated ash and soil is undertaken by separation and/or decomposition through extraction and/or chemical reactions with respect to PCDD/Fs and PCBs. 
Supercritical fluid extraction and solvent extraction utilize a separation mechanism to remove contaminant. However, others directly decompose PCDD/Fs and PCBs through chemical reactions, such as supercritical water oxidation and base catalyzed decomposition. Supercritical Fluid Extraction (SFE) and Su- percritical Water Oxidation (SWO) Supercritical fluids show characteristics between liquid and gas phases. Their attractive feature is an ability to extract organic compounds with heavy molecular weights. 22) The schematic phase diagram of supercritical fluid is shown in Fig. 2. 23) Above the critical temperature and in a specific pressure, the fluid becomes a supercritical phase, which can be applied to the extraction of contaminants. By reducing the pressure from critical condition, the contaminant can be precipitated from the fluids. CO 2 is often preferred to be utilized in SFE, since it is non-toxic, non-flammable, relatively inexpensive and that its critical temperature is relatively low. Pure CO 2 becomes a supercritical phase at 31ยฐC and 7.50 MPa. PCBs in soil have been tried to remove by using SFE-CO 2 . 24) Supercritical fluid of CO 2 flows through the fixed bed of the soil at 40ยฐC and 10.1 MPa. In this condition, over 90 % of PCBs in soil have been removed. In addition, the result showed that the removal rate in dry soil is higher than the wet soil. The effect of water content in soil has been comprehensively investigated by Chen et al. 25) Their results show that the application of the drying process of soil prior to the SFE-CO 2 remediation was recommended. It was reported that the existence of organic matters in soil decreases the extractability of SFE-CO 2 . However, they also reported that the addition of about 5 mol% methanol as cosolvent enhanced the extraction of PCBs. Another study of SFE-CO 2 with acetone as a co-solvent has been carried out to remediate PCBs contaminated soil. 26) Although spiked soil was used, the study gave a promising result in terms of fractional extraction of PCBs, more than 95 %. In the SWO, oxidizers are applied to oxidize the organic compounds in supercritical water. The critical temperature and pressure of water are 374ยฐC and 22.1 MPa, respectively. The oxidizers are usually air, pure oxygen and hydrogen peroxide. In a laboratory-scale test, SWO has successfully decomposed PCDD/Fs from fly ash through oxidation. 27) The temperature, pressure and time applied are 400ยฐC, 30 MPa and 30 min, respectively. The addition of hydrogen peroxide up to 2.0 mass% increases the maximum decom- position ratio up to 99.7 %. They also concluded that the effect of oxidizers becomes stronger in the order: air, pure oxygen and hydrogen peroxide. Solvent Extraction The pilot-scale test of solvent extraction to the PCBs contaminated soil in a hazardous waste site has been reported. 28) In the process, liquefied propane is used to dissolve contaminants over ranges of temperature from 49 to 60ยฐC and pressure from 1.32 to 2.84 MPa, respectively. Figure 3 describes the schematic flow diagram of the process. The process consists of three basic operation: extraction, solidliquid separation and solvent recovery. After sieving and removing oversize materials (ฯพ6.4 mm), the soil and liquid propane are then mixed in an extractor at the optimum mass ratio, 1 : 1.5. The extraction equipment has a pressure vessel with a high-speed rotary mixer. At the end of extraction process, solid-liquid separation is performed by static settling. 
This extraction/separation cycle is repeated by adding the same amount of liquid propane to the extractor. When the last extraction cycle is finished, water of one or two times of the soil mass is added to make residual propane float up in the extractor. Liquid propane containing extracted organic compounds is withdrawn from the pressurized vessel. Solid-water slurry is then filtered by vacuum filtration. Recovery of liquid propane is proceeded by transferring the propane-contaminant mixture to the expansion tank. In the tank, propane is vaporized in a gas-liquid system under a reduced pressure. The remained fluid, which contains contaminants and water, is drained from the tank. The purified propane is reused after re-liquification in a compressor. The result of the test showed that removal efficiency of PCBs in soil reached about 91.4 to 99.4 %. These values were obtained when applying three extraction/separation cycles. Base Catalyzed Decomposition (BCD) BCD remediation is also the technology to decompose and degrade PCDD/Fs and/or PCBs in soils. [29][30][31] The decomposition and degradation reactions are promoted by the presence of hydrogen donors at 300 to 350ยฐC. NaHCO 3 is used and decomposed into CO 2 , H 2 O and Na 2 CO 3 in the process. During heating, hydrogen donors generated from organic compounds in soil play a role in dechlorination reaction. 29) Figure 4 illustrates a schematic flow chart of the process. After sieving and weighing, soil was mixed with 3 % NaHCO 3 , for PCDD/Fs remediation. The mixture is heated up to 350 to 400ยฐC in nitrogen atmosphere. After heating for about one hour, the fractional removal of PCDFs was more than 99 % and PCDDs were not detected in the treated soil. For remediation of PCBs contaminated soil, a pilotscale test has been conducted. 31) The optimum reaction temperature and concentration of NaHCO 3 were 330ยฐC and 3 %, respectively. The fractional removal of PCBs was about 99.9 % under this condition. Thermal Remediation Thermal remediation technologies utilize heat to enhance vaporization of the contaminants and their decomposition. Vaporization of PCBs is used in thermal blanket and microwave remediation technologies. PCDD/Fs and PCBs decompose in the incineration and ash-melting processes at relatively high temperatures. Other technologies use heat to promote decomposition reactions, such as thermal dechlorination process and microwave process with an alumina bomb. Rotary Kiln Incinerator A type of thermal remediation technology is the socalled Hybrid Thermal Treatment System (HTTS). 32) Figure 5 illustrates the flow diagram of the pilot-scale facilities of the HTTS, which is applied to hazardous waste sites. 32,33) This is based on the direct combustion mechanism using natural gas as fuel. A transportable module with capacity of 10 to 15 Mg/h consists of incinerator, air pollution control system, such as wet, dry or dry & wet system, and other site-specific auxiliary system. The incinerator is divided into two sections, i.e., a rotary kiln furnace and secondary combustion chamber for off-gases from the rotary kiln. The soil is charged in the rotary kiln furnace. Operation temperature of the kiln is about 620ยฐC. The treated-soil is then quenched, while the flue gas is transported to the secondary combustion chamber after cool down and separation from the ash. In the secondary chamber, flue gas is heated at a temperature higher than 800ยฐC and for a residence time more than 2 s. 
Water quench operation and alkali scrubber are applied to avoid further PCDD/Fs formation through de novo synthesis. Concerning the air pollution control, wet system is the most effective to reduce PCDD/Fs. The removal efficiency of trace organic compounds such as PCDD/Fs and PCBs is enhanced with turbulent gas flow applied in the secondary combustion chamber. For PCBs contaminated soil, the decomposition ratio reaches more than 99.99 % with the emission of PCDD/Fs lower than the required standard criteria. Thermal-blanket System Another thermal remediation for surface of soil contaminated by PCBs is known as the thermal-blanket system. 34) The pilot-test flow diagram is shown in Fig. 6. The technique utilizes heating element assembly covered by a ceramic insulation and an impermeable silicone fiberglass blanket. Contaminated soil is covered by a thermal-blanket. Operation temperature is set for heating the soil surface ranging from 815 to 925ยฐC. The rate of decomposition of PCBs is dependent on the temperature and soil depth. In the test, target of the soil-depth for removal of PCBs was 15 cm. Temperature of soil surface reached over 800ยฐC with temperature gradient around 32ยฐC/cm. After heating for 24 h, fractional removal of PCBs reached more than 98 %. The downward migration of PCBs through soil was not detected. Vaporization plays an important role in the removal of PCBs during the process. PCBs traveled upward in the soil were removed at the surface by applying air flow. Off-gas passes through a thermal oxidizer to decompose PCBs and other hydrocarbon compounds. This flameless oxidizer operates at temperatures between 875 and 925ยฐC with residence time about 0.5 s. In addition, granular activated carbon is used as an adsorbent to back-up the functions of the oxidizer. Application of Microwave Energy The application of microwave energy to soil remediation has been attempted. 35,36) In a pilot-scale facility (see Fig. 7), cylindrical container with the capacity of 1 Mg soil was used to remove trichloroethylene (TCE) as a model contaminant. 35) Principally the microwave energy (100 MHz to 300 GHz) emitted from a generator penetrates the soil and vaporizes water and contaminants. In the container, microwave is generated from an antenna inside the perforated Fig. 6. Schematic diagram of a pilot-scale facility of the thermal-blanket system. 34) polytetrafluorethylene (PTFE) tube placed vertically at the center of the soil layer. Vapors formed from the soil were sucked through the perforated PTFE tube by a vacuum pump. More than 99 % of TCE were removed from the soil by 75 h irradiation. In a laboratory-scale experiment (see Fig. 8), Abramovitch et al. also have attempted to decompose PCBs in soil by using microwave energy. 36) Cu 2 O or aluminum powder and 10 kmol/m 3 NaOH as reductive dechlorination reagents are mixed with soil sample. The mixture is then placed in a sealed quartz container tube, protected by machineable ceramic alumina, which is called alumina bomb. Then alumina bomb is placed inside a household type microwave oven. The reaction in the alumina bomb is promoted by heat generated by microwave energy. The operation temperature can reach 1 200 to 1 300ยฐC. The decomposition ratios of PCBs were 97.3 % for the case using aluminum powder/10 kmol/m 3 NaOH and 95.3 % for the case using Cu 2 O/10 kmol/m 3 NaOH. Melting Treatment The melting treatment with high temperature is generally applied to decomposition of PCDD/Fs in fly ash from municipial waste incinerators. 
There are several types of furnaces to treat fly ash, such as plasma-melting furnace, 37) rotating type melting furnace 38) and swirling-flow furnace. 39) Each furnace has characteristic that makes very competitive with each other. In the plasma-melting furnace, ash is melted by plasmatorch of a Cu-electrode non-transfer type. Furnace temperature is more than 1 500ยฐC and the decomposition ratio of PCDD/Fs exceeds 99 %. This furnace can treat ash at a capacity of around 2 Mg/h under a standard condition. Figure 9 shows the flow diagram of a plasma melting process. The rotating type surface melting furnace has a double cylindrical structure (see Fig. 10). The burning and melting of fly ash occur in the upper space of furnace, which is called the primary combustion chamber. The outer cylinder and furnace bed rotate at a rate of one revolution per hour and its shape is determined by the angle of repose of the feed material. The ash melts from the surface of the bed and then falls to the secondary combustion chamber. The temperature in the secondary chamber furnace reaches 1 300ยฐC and the decomposition ratio of PCDD/Fs is more than 99 %. This furnace fueled by a mixture of air and kerosene. Its process capacity is 600 to 670 kg/h. Figure 11 describes the process flow diagram of the swirling flow furnace and relating facilities which can treat fly ash at a capacity of 1.2 Mg/h. The mixture of butane and air is used as fuel. During operation, temperature in the furnace achieves 1 300ยฐC. The result showed that about 98.4 % of PCCD/Fs are decomposed. All of these melting furnaces are equipped with an air pollution control system to avoid further formation of PCDD/Fs and other gaseous pollutants in the post-combustion zone. The plasma melting furnace employs calcium hydrate injection and catalytic denitrification for the flue gas treatment. Gas cooler, scrubber and electrostatic precipitator are used in the surface rotating furnace. For the swirling furnace, the flue gas is treated in secondary combustion ยฉ 2000 ISIJ Fig. 11. Flow diagram of swirling-flow furnace and relating facilities. 39) chamber before transported to the cooling chamber and dust separator. Thermal Dechlorination A full-scale facility to remove PCDD/Fs in fly ash has been developed using thermal dechlorination reaction at a low temperature. 40) Figure 12 shows the schematic diagram of the process flow. Important process parameters are temperature and retention time of fly ash. They should be 350 to 400ยฐC and 1 to 3 h, respectively. In order to prevent further formation of PCDD/Fs discharging temperature should be set below 60ยฐC. The main equipment which has a capacity of 500 kg/h consists of a reactor and a cooler. Fly ash from municipal waste incinerator is transported into the reactor and heated up by electric heaters. It is then cooled down in a water cooler and discharged to a cement solidification process. A low oxygen condition is maintained by supplying N 2 gas. With the heating temperature at 350ยฐC and residence time in 1 h, the decomposition ratio of PCDD/Fs reached 99.7 %. Summary Tables 3 and 4 summarize the development states and Table 3. Remediation technologies of ash and soil contaminated by PCDD/Fs, PCBs and related compounds. Table 4. Limitation and advantages of remediation technologies. Fig. 12. Flow diagram of the low temperature thermal dechlorination process. 40) limitations/advantages of the remediation technologies. 
Most of the thermal remediation technologies appear to be at a more practical stage than the others, and they offer many options to choose from. Physical/chemical remediation technologies still need further efforts to scale up, since several limitations remain to be overcome, such as limited capacity, corrosion problems and, in particular, relatively high cost. The development of bioremediation is still in progress, aimed at identifying the most suitable bacterial strains and appropriate values of other process parameters. Although bioremediation seems to produce no harmful waste, the time required for remediation is longer than for the other technologies. Table 4 also shows some requirements for further development of the remediation technologies, e.g., faster processes with larger capacity and lower cost without additional emissions of pollutants. These requirements will be substantial criteria for developing and/or selecting remediation technologies.
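As a small computational aside to the emission limits quoted earlier (e.g., 0.1 ng-TEQ/m3 for flue gas), the Toxicity Equivalent Quantity of a sample is obtained by weighting each toxic congener's concentration by its Toxic Equivalency Factor (TEF) and summing. The sketch below is only illustrative: the congener list and TEF values in the dictionary are placeholders (2,3,7,8-TCDD is conventionally assigned TEF = 1; the remaining values must be taken from an official I-TEF or WHO-TEF table, not from this example), and the measurement values are invented.

```python
# Hedged illustration of a TEQ (Toxicity Equivalent Quantity) calculation.
# TEF values are placeholders except for 2,3,7,8-TCDD (reference congener, TEF = 1);
# real assessments must use an official I-TEF or WHO-TEF scheme.

ILLUSTRATIVE_TEF = {
    "2,3,7,8-TCDD": 1.0,      # reference congener
    "1,2,3,7,8-PeCDD": 0.5,   # placeholder value
    "2,3,7,8-TCDF": 0.1,      # placeholder value
    "OCDD": 0.001,            # placeholder value
}

def teq(concentrations_ng_per_m3, tef=ILLUSTRATIVE_TEF):
    """Return the TEQ (ng-TEQ/m3) as the TEF-weighted sum of congener concentrations."""
    return sum(c * tef.get(name, 0.0) for name, c in concentrations_ng_per_m3.items())

# Hypothetical flue-gas measurement compared against a 0.1 ng-TEQ/m3 limit.
sample = {"2,3,7,8-TCDD": 0.02, "2,3,7,8-TCDF": 0.5, "OCDD": 10.0}
value = teq(sample)
print(f"TEQ = {value:.3f} ng-TEQ/m3, within limit: {value <= 0.1}")
```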
Agro-meteorological indices, phenological stages and productivity of durum wheat ( Triticum durum Desf.) influenced by seed soaking and foliar spray of stress mitigating bio-regulators under conserved moisture condition An experiment was conducted at Agricultural Research Station, Anand Agricultural University, Dhandhuka during the rabi seasons of the years 2017-18 and 2018-19 to determine the response of meteorological indices on seed soaking and foliar sprays of stress mitigating bio-regulators under conserved soil moisture conditions in durum wheat. The experiment comprised of four treatments of seed soaking viz ; No seed soaking (S 0 ), Seed soaking with water (S 1 ), Seed soaking with 100 ppm Salicylic acid (S 2 ) and Seed soaking with 500 ppm Thiourea (S 3 ) and four treatments of foliar spray viz ; No spray(C 0 ), Water spray (C 1 ), 100 ppm Salicylic acid spray (C 2 ) and1000 ppm Thiourea spray (C 3 ) of stress mitigating bioโ€“regulators on durum wheat var. GADW 3 was conducted in Randomized Block Design (Factorial). The results showed significant response for growing degree days (GDD), Helio thermal units (HTU), Helio thermal use efficiency (HTUE) to attain different phenological stages (emergence, CRI, tillering, flag leaf, heading, milking, dough and maturity). On the pooled basis, significantly higher heat use efficiency of grain yield recorded under seed soaking with salicylic acid (100 ppm) (S 2 ) and foliar spray of salicylic acid (100 ppm) (C 2 ) was 1.31 & 1.28 (kg/ha/ 0 C day) respectively, over no seed soaking (S 0 ) and no foliar spray (C 0 ). However, it was found at par with seed soaking with 500 ppm thiourea (S 3 ) as well as 1000 ppm foliar spray of thiourea spray (C 3 ). Introduction Durum wheat (Triticum durum Desf.) is the second most important species globally as well as nationally grown after bread wheat (Triticum aestivum L.). In fact durum wheat was the predominantly grown in Central India, particularly in the Malwa region in Madhya Pradesh, Bhal and coastal agro-climatic zone in Gujarat, Southern Rajasthan and Bundel khand region of Uttar Pradesh. Total durum wheat production in India is sharing about 4 to 10 percent of total wheat production (Anon., 2017) [5] . Besides moisture stress, heat stress is also a very important factor that affects the agricultural production worldwide due to climate change (Hall, 2011) [9] . A recent simulation study has shown that a rise in temperature by 1 0 C can lead to a decline in wheat production by 250 kg/ha in Rajasthan and by 400 kg/ha in Haryana (Kalra et al., 2008) [11] . The duration of each phenol-phases determines the dry matter accumulation and its partitioning into different organs. Anand Kumar et al. (2017) [4] reported that the duration of growth stage of any particular species was directly related to temperature and it could be predicted using the sum of daily air temperature. The day and night temperature played vital role for the completion of the primary requirement of degree day. Quite high value of minimum temperature was promoted the higher respiration and ultimately increases the water requirement and lowers the assimilation rate. Changes in climatic variables like rise in temperature and decline in rainfall may be more frequent in future as suggested by the Intergovernmental Panel on Climate Change (IPCC, 2007) [10] . Temperature is an important environmental factor that influences the growth and development, phenology and yield of crops (Bishnoi et al., 1995) [6] . 
Pre-anthesis and post-anthesis high temperature and heat may have huge impacts upon wheat growth, and the stress reduced the photosynthetic efficiency of crop (Wang et al., 2011) [19] . Soaking of seeds and /or spraying the crop with water and / or some bio-regulators like salicylic acid and thiourea are one of the proven technologies to mitigate the ill-effects of moisture and heat stress on productivity of durum wheat by enhancing proper germination and developing heat stress tolerance in the crop. Bio-regulator is a group of naturally occurring growth promoting phyto-hormones, which regulates several physiological processes like cell division, cell elongation, and synthesis of nucleic acid and proteins (Mandava, 1988) [13] . Such studies are done little for durum wheat; therefore, an experiment was conducted to determine the response of different meteorological parameters to seed soaking and foliar application of bio-regulators in durum wheat. Materials and Methods A field experiment was conducted at Agricultural Research Station, Anand Agricultural University, Dhandhuka, Bhal and Coastal Zone of Gujarat in Ahmedabad district during rabi season 2017-18 and 2018-19. The climate of this region is semi-arid and sub-tropical. Monsoon commences by the second week of June and retreats by middle of September with an average rainfall of 625.5 mm received entirely from the south-west monsoon currents. The rains are sporadic in this region. The maximum temperature ranged between 27.1 to 42.5 ยบC and minimum temperature ranged between 8.0to 22.2 ยบC during the crop season of the year of 2017-18, while in the year 2018-19, maximum temperature ranged between 27.9 to 40.8 ยบC and minimum temperature ranged between 7.4 to 19.4 ยบC. The other parameters viz., relative humidity and bright sun shine hours/day were normal during both the years. During both the years of experimentations, there was no rainfall recorded ( Fig.1 and 2). In general, weather condition was found congenial during crop season of both the years. Crop was sown under conserved soil moisture condition which is received during rainy season. A total of sixteen treatments comprised of four treatments of seed soaking viz; No seed soaking (S0), Seed soaking with water (S 1 ), Seed soaking with 100 ppm Salicylic acid (S 2 ) and Seed soaking with 500 ppm Thiourea (S 3 ) and four treatments of foliar spray viz; No spray(C 0 ), Water spray (C 1 ), 100 ppm Salicylic acid spray (C 2 ) and 1000 ppm Thiourea spray (C 3 ) of stress mitigating bio-regulators for durum wheat var. GADW 3 were tested in Randomized Block Design (Factorial) with four replications. The line-to-line spacing was kept as 30 cm with a seed rate of 60 kg/ha. Seed were soaked for one hour and then dried in a shadow for 2 to 3 hours before sowing, while foliar spray were applied twice at tillering and ear emergence growth stages. The number of days to attain various phenolphases was observed from randomly selected five plants in all the plots visually by the number of days taken from the sowing date to attain respective phenol-phases up to maturity. Maximum and minimum temperatures used for study were taken from agro-meteorological observatory that is near from the experimental site. To describe the relationship of meteorological indices on grain yield (y) as a function of the simple effect of seed soaking and foliar spray of bioregulators on correlation and regression study was under taken. Different meteorological indices were calculated as per formula given below. 
Growing degree days (GDD) The GDD concept assumes that the amount of heat would be more or less same for a crop to reach a particular phenological stage or maturity. The GDD were calculated as the difference between the daily mean temperature and growth base temperature (Nuttonson, 1955) [15] . Base temperature of 5 0 C was considered for wheat crop Where T max = Daily maximum temperature (ยฐC), T min = Daily minimum temperature (ยฐC) T base = Minimum threshold/base temperature (ยฐC) Helio thermal units (HTU) Helio thermal units (HTU) were computed by following methods given by Chakavarty and Sastry, 1985. The product of the growing degree days and the maximum bright sunshine hours accumulated over a given period is the helio thermal units (HTU) and expressed as O C days -1 hours. Helio thermal use efficiency (HTUE) Helio thermal use efficiency was calculated by dividing the total dry matter recorded at respective days by the accumulated helio thermal units and expressed as g O C days -1 hrs -1 . Helio thermal use efficiency was calculated as (Chakavarty and Sastry, 1985). Heat use efficiency (HUE) Heat use efficiency for grain and biological yields was calculated as (Pandey et al., 2010) [16] . Results and Discussion (A) Effect of Seed soaking Days to attain different phenological stages It is clearly evident from data presented in Table 1 that different seed soaking treatments were found to vary in number of days for achieving phenological stages. Treatment S 2 (Salicylic acid @ 100 ppm), being at par with S 3 (Thiourea@500 ppm), found to be superior for days to emergence (6.99), CRI (26.50), tillering (41.10), flag leaf (69.30), heading (74.53), milking (95.71), dough (111.30) and maturity (d 117.00) stages on pooled analysis over no seed soaking (S 0 ) and seed soaking with water (S 1 ). On pooled basis, the increase due to seed soaking with salicylic acid @100 ppm in number of days taken to attain emergence stage was to the tune of 0.82 and 0.64 days, for CRI stage 2.41 and 1.97 days, for tillering 2.66 and 2.00 days, for flag leaf 5.11 and 3.25 days, for heading 5.21 and 3.24 days, for milking 6.26 and 3.99 days, for dough 5.71 and 4.43 days, and for maturity 7.75 and 6.26 days, over no spray and water spray, respectively. Amrawat et al. (2014) [3] observed that an increase in mean temperature during reproductive phase by 1 ~ 1140 ~ The Pharma Innovation Journal http://www.thepharmajournal.com 0 C reduces the reproductive phase by 5 days. The crop duration was drastically increased on account of longer vegetative and reproductive phases. Seed soaking with salicylic acid @ 100 ppm took maximum number of days to attain maturity. This might provide more opportunity time to the crop for more photosynthetic activity, which might in turn ensued higher yield. These results were in close conformity of Solanki and Muhal, 2015 [17] . Growing degree days (GDD) at different phonological stages The data pertaining to growing degree days (GDD) at different phenological stages ( Table 2) indicated that GDD varied to attain various phenological stages with different seed soaking treatments and seed soaking with salicylic acid @ 100 ppm (S2) treatment observed higher accumulated GDD at emergence (156.69 0 C day), CRI (560.99 0 C day), tillering (840.60 0 C day), flag leaf (1284.64 0 C day), heading (1358.14 0 C day), milking (1680.17 0 C day), dough (1929.28 0 C day) and maturity (2045.81 0 C day) stage which was significantly superior over no seed soaking and seed soaking with water on pooled basis. 
However, it remained at par with seed soaking with thiourea @ 500 ppm (S 3 ) at all the stage of GDD. It is an established fact that crop phenology are largely dependent on genetic and environmental factors viz. temperature, relative humidity, sun shine hours, rainfall etc. (Venkataraman and Krishnan, 1992) [18] . The heat unit or GDD concept was proposed to explain the relationship between growth duration and temperature. This concept assumes a direct and linear relationship between growth and temperature (Nuttonson. 1955) [15] . Helio thermal units (HTU) It is clearly evident from data furnished in Table 3 that phenol-phase wise Helio thermal units (HTU) varied significantly under different seed soaking treatments in pooled analysis. On pooled basis, seed soaking with salicylic acid @ 100 ppm(S2) reported maximum HTU to attain emergence (1253.50 0 C day-hr), CRI (4487.95 0 C day-hr), tillering (6724.80 0 C day-hr), flag leaf (10277.10 0 C day-hr), heading (9506.96 0 C day-hr), milking (15121.52 0 C day-hr), dough (17363.48 0 C day-hr) and maturity (18413.46 0 C day-hr) stage which was significantly superior over no seed (S 0 ) soaking and seed soaking with water (S 1 ), however it remained at par with seed soaking with thiourea @ 500 ppm (S 3 ) during all the phenol-phases. The cumulative value of HTU in wheat differed among the salicylic acid. Salicylic acid @ 100 ppm had increased the helio thermal unit's consumption at all the phenol-phases. Application of salicylic acid @ 100 ppm registered significantly increase in heat use efficiency over no spray and spray with water. Amrawat et al. (2013) [3] also reported the similar results. Helio thermal use efficiency (HTUE) It is clearly evident from the data presented in Table 4 that seed soaking had marked influence on HTUE. Seed soaking with salicylic acid @ 100 ppm (S 2 ) recorded significantly maximum value of HTUE (0.0146kg/ha/ 0 C day) as compared to no seed soaking (S 0 ) and seed soaking with water (S 1 ) in pooled analysis. However, it remained at par with seed soaking with thiourea @ 500 ppm (S 3 ). Heat use efficiency (HUE) Data presented in Table 5 further shown that seed soaking treatments had noticeable influence on HTU on grain and biological yield basis during both the years and in pooled data. Seed soaking with salicylic acid @ 100 ppm (S 2 ) recorded significantly maximum value of HUE on grain (1.31 kg/ha/ 0 C day) and biological yield basis (3.19 kg/ha/ 0 C day) as compared to no seed soaking (S 0 ) and seed soaking with water (S 1 ) in pooled analysis, though it remained at par with seed soaking with thiourea @ 500 ppm (S 3 ). These results were in close conformity of Solanki and Muhal (2015) [17] who reported that the number of days taken to attain physiological maturity was significantly higher under 100 ppm salicylic acid spray compared to water spray and it registered significantly higher GDD and higher HUE which had proportionate impact on seed yield. This could be ascribed to significantly higher grain and biological yields of wheat under salicylic acid @ 100 ppm over no spray and water spray. Similar results in wheat were also reported by Khichar and Niwas (2007) [12] . 
Yield Similar trend was observed for grain and straw yield of wheat (Table 6), wherein, seed soaking with salicylic acid @ 100 ppm (S 2 ), produced significantly higher grain yield (2008 kg/ha) and straw yield (2895 kg/ha) over no seed soaking (S 0 ) and seed soaking with water (S 1 ) in pooled analysis, which recorded an increment to the tune of 27.74 and 23.27 per cent for grain and 31.83 and 30.00 percent for straw yield over no seed soaking and seed soaking with water, respectively. Seed soaking with 500 ppm thiourea (S 3 ) being at par with salicylic acid @ 100 ppm (S 2 ), also produced significantly higher grain (1936 kg/ha) and straw (2786 kg/ha) yield over no seed soaking (S 0 ) and seed soaking with water (S 1 ) on the pooled mean basis. An increment of 23.16 and 18.85 per cent for grain yield and 26.86 and 25.10 per cent for straw yield was recorded with seed soaking with 500 ppm thiourea over no soaking and water soaking, respectively. On understanding the diverse effect of salicylic acid on crop growth and development, it may be inferred that increase in yield obtained with salicylic acid @ 100 ppm seed soaking was most probably due to increased crop photosynthesis favoured by both improved photosynthetic efficiency and source to sink relationship. This may be attributed due to the proportionate distribution of dry matter at later stage of crop growth (Ahmad et al., 2018). Higher days to attain different phonol stages might improve the GDD, HTU, HTUE and HUE which might in turn improve the photosynthetic activity in the plant. Giauaint (1976) reported that the bio-regulatory effect of salicylic acid was chiefly through mobilization of dry matter and translocation of photosynthates to sink which ultimately significantly improved the grain yield. Thus, it is highly probable that in the present investigation, salicylic acid with its sulphydryl group not only favoured the green photosynthetic surface, but might have also improved the activity of starch synthetase and hence, the effective filling of seeds. (B) Effect of Foliar spray Days to attain different phenological stages A reference to data from Table1 indicated that different sprays of stress mitigating bio-regulators had their significant effect on number of days taken to reach to different phenological stages during both the years and in pooled results. The foliar spray of salicylic acid spray @ 100 ppm (C2), being at par with thiourea spray @ 1000 ppm (C 3 ), Growing degree days (GDD) at different phonological stages Similarly Data also exhibited that crop sprayed with salicylic acid @ 100 ppm (C2) accumulated higher GDD at emergence (156.89 0 C day), CRI (575.59 0 C day), tillering (847.04 0 C day), flag leaf (1281.41 0 C day), heading (1366.60 0 C day), milking (1690.28 0 C day), dough (1929.36 0 C day) and maturity (2061.78 0 C day)stage which was found significantly superior over no crop sprayed (C 0 ) and water spray (C 1 ), but remained at par with foliar spray with thiourea @ 1000 ppm (C 3 ) on pooled basis. Nandini and Sridhara (2019 also recorded significantly higher GDD, to attain different phenological stages viz., germination, tillering, 50 per cent panicle initiation, 50% flowering, grain formation and physiological maturity. Solanki and Muhal (2015) [17] reported that the number of days taken to attain physiological maturity was significantly higher under 100 ppm salicylic acid sprays compared to water spray and it registered significantly higher GDD. 
Helio thermal use efficiency (HTUE) A reference to data (Table 4) indicated that different foliar sprays had their significant impact on HTUE. The crop sprayed with salicylic acid @ 100 ppm (S 2 ), being at par with foliar spray with thiourea @ 1000 ppm (C 3 ), recorded significantly higher HTUE (0.0142 kg/ha/ 0 C day) proved superior over no spray (C 0 ) and water spray (C 1 ) in pooled mean, respectively. Heat use efficiency (HUE) Data (Table 4) indicated that different foliar spray had their significant effect on HUE during both the years and in pooled basis. The crop sprayed with salicylic acid @ 100 ppm (C 2 ) recorded significantly higher HUE on grain (1.28 kg/ha/ 0 C day) and biological yield basis (3.13 kg/ha/ 0 C day) and being at par with foliar spray with thiourea @ 1000 ppm (C 3 ) proved superior over no spray (C 0 ) and water spray (C 1 ) in pooled analysis. The phenological studies of wheat revealed that the increase in salicylic acid from control to foliar spray with thiourea @ 1000 ppm had increased significantly the number of day's required for different phenol-phases. Foliar spray with salicylic acid @ 100 ppm, the crop duration was reduced on account of shorter vegetative and reproductive phase. The cumulative value of HTU (4.13) in wheat differed among the salicylic acid. Increase in salicylic acid @ 100 ppm had increased the helio thermal unit's consumption at all the phenol-phases and in both crop seasons. Application of salicylic acid @ 100 ppm registered significantly increase in heat use efficiency over no spray and spray with water. These results were in close conformity of Solanki and Muhal (2015) [17] who reported that the number of days taken to attain physiological maturity was significantly higher under 100 ppm salicylic acid spray compared to water spray and it registered significantly higher GDD and higher HUE which had proportionate impact on seed yield. This could be ascribed to significantly higher grain and biological yields of wheat under salicylic acid @ 100 ppm over no spray and water spray. Similar results in wheat were also reported by Khichar and Niwas (2007) [12] and Amrawat et al. (2013) [3] . Yield A close perusal of the data ( Table 5) pointed out that foliar spray with 100 ppm salicylic acid spray (C 2 ), being at par with foliar spray with 1000 ppm thiourea (C 3 ), generated considerably higher grain (1984 kg/ha) and straw (2859 kg/ha) yield over no spray (C 0 ) and water spray (C 1 ), on pooled basis. The pooled gain for grain yield obtained under salicylic acid spray were recorded to the tune of 23.69 and 20.90 per cent for grain yield and 28.26 and 26.17 per cent for straw yield over no spray (C 0 ) and water spray (C 1 ), respectively. While for thiourea spray the increase was recorded to the tune of 28.26 and 26.17 per cent for grain yield and 19.51 and 16.82 per cent for straw yield over no spray (C 0 ) and water spray (C 1 ), respectively. An increase in yield attributes and yield obtained with salicylic acid was most probably due to augmented crop photosynthesis favoured by both enhanced photosynthetic efficiency and source to sink relationship resulting due to higher days to attain different phenol stages and thereby, improving GDD, HTU, HTUE and HUE. Effect of salicylic acid was chiefly through mobilization of dry matter and translocation of photosynthates to sink which ultimately significant improved the seed yield. 
The straw yield enhancement might be attributed to the higher nutrient uptake throughout the crop growth period due to the increased number of days to attain the different phenological stages, which in turn increased the plant height, dry matter production and number of tillers/m², resulting in higher straw yield (Amin et al., 2008) [2]. Relationship between yield attributes, GDD and HTU (X) and durum wheat seed yield (Y) Simple correlation coefficients (r) were computed to study the relationship between durum wheat grain yield and the straw and biological yields of wheat as well as the meteorological indices, namely GDD and HTU. It is obvious from the data that seed yield of durum wheat was significantly and positively correlated with all these yield attributes and meteorological indices (Table 6). As such, an increase or decrease in these characters was found to be associated with a similar increase or decrease in seed yield of wheat. Pooled results showed that every unit increase in straw yield (r=0.998**), biological yield (r=0.999**), GDD at maturity (r=0.964**) and HTU at maturity (r=0.964**) increased the seed yield of wheat by 0.59, 0.37, 2.26 and 0.25 kg/ha, respectively, in pooled analysis. The regression equations also showed that a unit change in the meteorological indices brought a similar change in seed yield on a pooled basis.
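To make the index definitions used in this study explicit, the following sketch computes accumulated GDD, HTU, HUE and HTUE from daily weather records, following the formulas described in the Materials and Methods (base temperature 5 °C for wheat; HTU as daily GDD weighted by bright sunshine hours; HUE and HTUE as yield or dry matter divided by accumulated GDD or HTU). The daily values and the yield in the example are invented placeholders, and the function names are mine, not the authors'.

```python
import numpy as np

T_BASE = 5.0  # base temperature for wheat (deg C), as used in the study

def gdd(tmax, tmin, t_base=T_BASE):
    """Daily growing degree days: daily mean temperature minus base temperature
    (negative values floored at zero, a common convention)."""
    return np.maximum((np.asarray(tmax) + np.asarray(tmin)) / 2.0 - t_base, 0.0)

def htu(tmax, tmin, sunshine_hours, t_base=T_BASE):
    """Daily helio thermal units: GDD multiplied by bright sunshine hours (deg C day-hr)."""
    return gdd(tmax, tmin, t_base) * np.asarray(sunshine_hours)

def heat_use_efficiency(yield_kg_ha, accumulated_gdd):
    """HUE (kg/ha per deg C day): yield divided by accumulated GDD up to maturity."""
    return yield_kg_ha / accumulated_gdd

def helio_thermal_use_efficiency(dry_matter_kg_ha, accumulated_htu):
    """HTUE: dry matter (or yield) divided by accumulated HTU (per deg C day-hr)."""
    return dry_matter_kg_ha / accumulated_htu

# Placeholder season of daily records (sowing to maturity) and a placeholder grain yield.
tmax = np.full(120, 30.0)          # deg C
tmin = np.full(120, 12.0)          # deg C
bsh = np.full(120, 8.0)            # bright sunshine hours per day
grain_yield = 2000.0               # kg/ha

acc_gdd = gdd(tmax, tmin).sum()    # accumulated GDD at maturity (deg C day)
acc_htu = htu(tmax, tmin, bsh).sum()
print(f"GDD = {acc_gdd:.0f} deg C day, HTU = {acc_htu:.0f} deg C day-hr")
print(f"HUE = {heat_use_efficiency(grain_yield, acc_gdd):.2f} kg/ha per deg C day")
print(f"HTUE = {helio_thermal_use_efficiency(grain_yield, acc_htu):.4f} kg/ha per deg C day-hr")
```

Simple correlations and regressions of the kind reported in Table 6 can then be obtained from the per-treatment index totals and yields, e.g. with np.corrcoef or an ordinary least-squares fit.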
Epigenetic Programming Effects of Early Life Stress: A Dual-Activation Hypothesis Epigenetic processes during early brain development can function as โ€˜developmental switchesโ€™ that contribute to the stability of long-term effects of early environmental influences by programming central feedback mechanisms of the HPA axis and other neural networks. In this thematic review, we summarize accumulated evidence for a dual-activation of stress-related and sensory networks underlying the epigenetic programming effects of early life stress. We discuss findings indicating epigenetic programming of stress-related genes with impact on HPA axis function, the interaction of epigenetic mechanisms with neural activity in stress-related neural networks, epigenetic effects of glucocorticoid exposure, and the impact of stress on sensory development. Based on these findings, we propose that the combined activation of stress-related neural networks and stressor-specific sensory networks leads to both neural and hormonal priming of the epigenetic machinery, which sensitizes these networks for developmental programming effects. This allows stressor-specific adaptations later in life, but may also lead to functional mal-adaptations, depending on timing and intensity of the stressor. Finally, we discuss methodological and clinical implications of the dual-activation hypothesis. We emphasize that, in addition to modifications in stress-related networks, we need to account for functional modifications in sensory networks and their epigenetic underpinnings to elucidate the long-term effects of early life stress. Epigenetic modifications in neural networks likely contribute to the life-long consequences of early life stress on mental and physical health [13,33]. Epigenetic mechanisms are able to form relatively stable molecular adaptations of the chromatin. They are sensitive to environmental factors including psychosocial stress and affect gene transcription. This makes them the ideal candidates for the observed programming effects. However, it is still unclear how stress exposure early in life interacts with the epigenetic machinery in neural networks to program developmental trajectories. The existing reviews discussing epigenetic programming effects of early life stress clearly point towards the heterogeneity of findings, which complicates identification of molecular mechanisms [33,35,36]. Differences across studies have been explained by differences in stressor, time-point of exposure, the intensity of exposure and species specificity [33,37]. These reviews focus on the mechanism through which ELS impacts the stress system and how this leads to epigenetic programming effects at stress-related genes. Here, we ask whether the observed vulnerabilities, especially those related to cognitive function and psychopathology, solely result from modifications in stress-related networks, or whether parallel epigenetic modifications in sensory networks may also contribute to the long-term effects of ELS. Sensory systems function as primary mediating systems between the perception of a stressor and the stress response. They also interact with the functional organization of neural networks underlying the development of higher cognitive functions [38]. Emerging evidence suggests that the effects of ELS on brain development extend to sensory systems [39]. Moreover, the stress system participates in early finetuning processes of sensory networks necessary for sensory and cognitive development [40,41]. 
We propose that the underlying neuroepigenetic pathways established in early sensory development contribute to the long-term effects of ELS. Our 'dual-activation hypothesis' states that concerted activity in both central regulatory networks of the HPA axis and developing sensory networks leads to the establishment and consolidation of epigenetic modifications which underlie the long-lasting programming effects of environmental stressors. NEUROEPIGENETICS: DEFINITION AND FUNC-TIONAL CONTEXTS Molecular epigenetics is defined as "the study of mitotically and/or meiotically heritable changes in gene function that cannot be explained by changes in DNA sequence" [42]. However, the heritability criterion is not useful for neuroepigenetic studies concerned with epigenetic modifications in nondividing neurons [43]. Accordingly, Day & Sweatt define neuroepigenetics as a potential subfield of epigenetics that deals with the unique mechanisms and processes allowing dynamic experience-dependent regulation of the epigenome in nondividing cells of the nervous system, along with the traditionally described developmental epigenetic processes involved in neuronal differentiation and cell-fate determination [43][44][45]. At the same time, Iles warns that a broad understanding of epigenetic mechanisms can lead to an unrealistic 'hype' of epigenetics in the study of brain development, development of behavior, and psychopathologies [46]. A broad definition would confuse transient changes in gene expression related to neural activity with long-term epigenetic modifications. To avoid confusion, we distinguish epigenetic mechanisms in neuronal cells according to their functional context. We differentiate structural (on the maintenance of genomic struc-ture oriented) epigenetic mechanisms, synaptic epigenetic mechanisms, and developmental epigenetic mechanisms [38] (see Box 1). The distinction between transient mechanisms involved in short-term synaptic adaptations and long-term epigenetic programming mechanisms with developmental impact is indispensable in the context of ELS. To identify developmental epigenetic mechanisms in contrast to synaptic epigenetic mechanisms, we need to demonstrate the relative stability of epigenetic modifications and impact on developmental trajectories. This requires longitudinal studies and experimental settings, which allow measuring epigenetic modifications at a minimum of two time-points. In addition, identified epigenetic mechanisms must be linked to structural and/or functional changes in neural networks. Here, focusing on critical/sensitive periods of neural network formation or reconsolidation is the most fruitful starting point to link epigenetic modifications with developmental trajectories. Finally, developmental epigenetic mechanisms may be the result of "Systems Heritability" (Day & Sweatt [43]), that means transient epigenetic mechanisms in one brain area may induce long-lasting epigenetic modifications in another brain area via neural activity. For the study of developmental epigenetic mechanisms, as in the case of ELS programming effects, we, therefore, need to analyze not only modifications in primary target systems, e.g. the stress system but also in interconnected neural networks, e.g. the sensory systems as well as those for cognitive and emotional processing. regulation of adult hormonal responses and behavior. 
A variety of animal studies demonstrated that differences in stress responsivity and behavior following ELS are associated with epigenetic differences at stress-related genes including those coding for the glucocorticoid receptor (Gr), the neuropeptide arginine vasopressin (Avp), the corticotropin-releasing factor (Crf), and the FK506 binding protein 5 (Fkbp5) [19,[47][48][49]. For some of the findings, corresponding results could be found in human tissue (in brain tissue: [50,51]; in blood: [52][53][54]; in saliva [55]). Taken together, results point towards a combination of common, species-specific, sex-specific, and stressor-specific regulatory mechanisms, which seem to be sensitive to timing, quality, and intensity of the stressor [33,13]. In the following, we exemplarily discuss findings, which indicate epigenetic programming effects of stressrelated genes Gr, Avp, Crf, and Fkbp5 in the hippocampus and the hypothalamus, two brain regions functionally involved in the central regulation of the HPA axis. These findings indicate that the epigenetic regulation of the HPA axis contributes to the long-term programming effects. However, the findings also show that epigenetic regulation of the HPA axis likely results via several parallel pathways. DNA Methylation Changes at the Gr in Hippocampus and PVN Several studies found differences in DNA methylation in the promoter region of the Gr following ELS. Pioneer studies by Meaney and colleagues in Long-Evans rats discovered an epigenetic programming effect of ELS at the Gr in the hippocampus stably affecting HPA axis function and behavior [26,[56][57][58]. Low levels of maternal licking and grooming (LG) during early life increased HPA axis responsivity to stress. This was associated with persistent DNA hypermethylation at specific CpG dinucleotides within the hippocampal Gr exon 1 7 promotor and increased histone acetylation facilitating binding of the transcription factor nerve growth factor-inducible protein A (Ngfia), which increased Gr expression [56]. In two human post-mortem studies, the same group found increased DNA methylation at the promoter of the Gr in hippocampus tissue of suicide completers with a history of childhood maltreatment compared to suicide completers without such a history [50,51]. This included the exon 1 F , the human orthologue of the exon 1 7 in rats. In contrast, a study using a different maternal stress paradigm and a different species of rats did not find changes in the DNA methylation of the Gr exon 1 7 promoter region in the hippocampus [27]. Of note, in a later analysis, Meaney and colleagues identified that the level of 5-hydroxymethylcytosine (5hmc) of the Gr exon 1 7 promotor was three times higher in the hippocampus of low compared with high LG offspring [59]. The bisulfite sequencing method used in the study by Weaver et al. [56] to detect DNA methylation does not differentiate between 5hmc and 5-methylcytosine (5mc) [49]. While 5mc repeatedly associated with long-term repression of gene expression [60], 5hmc is the first oxidative product in active demethylation of 5mc by the ten-eleven translocation family of proteins [61]. In the CNS, the functional role of 5hmc is still unclear. First studies link 5hmC to DNA de-methylation in memory formation [62] and to the regulation of neuroplas-ticity genes in the hippocampus in response to acute stress [63]. Bockmรผhl et al. 
investigated in mice whether ELS programs DNA methylation and gene expression of the Gr in the paraventricular nucleus (PVN) of the hypothalamus [64]. Although they found no differences in the proximal Gr promoter, including the mouse orthologue of the rat exon 1₇, they found hypermethylation at CpG sites in the shore region of a more distal CpG-dense island in the Gr promoter. At one of these CpG sites, hypermethylation was robustly maintained over three months. In addition, they report an overall increase in DNA methylation and age-related increases in Gr mRNA transcripts in the PVN of ELS mice, indicating a functional role of this more distal shore region of the Gr promoter in Gr regulation across the lifespan. In contrast, an earlier study stressing pregnant mice found increased DNA methylation at the mouse orthologue of the rat exon 1₇ in the hypothalamus of the adult male offspring, associated with a heightened HPA axis response to acute stress [65]. These exemplary results support the notion that ELS has a long-lasting impact on HPA axis responsivity via DNA methylation of the promoter region of the Gr in brain regions involved in stress regulation, especially the hippocampus and the hypothalamic PVN. The loci of modified DNA methylation at the Gr and the effects on expression vary across species and brain regions. Some of the studies also found that ELS affects the overall level of DNA methylation [50,51,64], which represents a broad and unspecific epigenetic modification. Such general epigenetic modifications may be detectable in peripheral tissue and are thus candidates for potential biomarkers. For example, Radtke et al. found that the interaction between childhood maltreatment and Gr methylation in lymphocytes strongly correlated with an increased vulnerability to psychopathology [66]. However, as Palma-Gudiel et al. point out, the heterogeneity in stressors and targets in DNA methylation analyses of the Gr makes it difficult to integrate the existing findings into a coherent functional model [37]. DNA Methylation Changes at Avp and Crf in the Hypothalamus Further genes involved in the regulation of the stress system showing epigenetic modifications due to ELS include Avp and Crf in the hypothalamic PVN. In mice, maternal separation induced sustained, life-long upregulation of hypothalamic Avp expression in the parvocellular subpopulation of neurons in the PVN due to reduced levels of DNA methylation at CpG sites in the enhancer region of Avp. This was associated with increased corticosterone secretion, heightened endocrine responsiveness to stress, and altered feedback inhibition of the HPA axis [67]. In subsequent studies, Murgatroyd et al. identified that neural activation due to ELS led to phosphorylation of methyl-CpG binding protein 2 (Mecp2), and that this resulted in a reduced ability of Mecp2 to bind to the enhancer of Avp and recruit DNA methyltransferases. This led to DNA hypomethylation at this particular genomic site, which further inhibited transcriptional repression and gene silencing [68]. Overall, the mechanism described by Murgatroyd et al. represents one example of synaptic activation of epigenetic programming effects following ELS. A study stressing pregnant mice, which found increased DNA methylation at the Gr promoter in the hypothalamus of male offspring, also found hypomethylation at CpG sites of the corticotropin-releasing factor (Crf) promoter region in the hypothalamus [65]. Rice et al.
observed decreased mRNA expression of Crf in chronically stressed immature mice [69]. McIlwrick et al. compared mouse lines selected for HPA axis reactivity and report decreased mRNA expression of Crf in the PVN and increased Crf expression in the dorsal hippocampus of mice with high HPA axis reactivity exposed to ELS [70]. McIlwrick et al. conclude that during the hypo-responsive period of the HPA axis early in development, the HPA axis is not able to downregulate the abnormally high levels of corticosterone induced by ELS in HPA axis hyper-responsive individuals. The abnormally high levels of corticosterone then set off an epigenetic programming cascade that modulates lifetime HPA axis sensitivity [70]. In contrast, Bockmühl et al. found no differences in hypothalamic Crf expression between controls and either ELS mice or ELS mice additionally exposed to chronic unpredictable stress [64]. In chicken, postnatal heat conditioning led to a resilient or sensitized response of the thermoregulation system, depending on the ambient temperature during conditioning, and this was associated with changes in the expression level of Crf mRNA in the PVN after a subsequent heat challenge one week later [41]. These heterogeneous findings show that, in addition to species-, stressor-, and timing-specific variation, epigenetic modifications targeting Crf expression may not only affect the stress response in general but also function as a stressor-specific fine-tuning mechanism. In the case of heat conditioning in chicken, the stressor-specific modifications also contain information about the stressor quality (e.g. high or low ambient temperature), allowing for a highly adaptive response later in life. Here, ELS is not only associated with a general adaptation of the stress response but also with stressor-specific fine-tuning mechanisms reflecting the physical properties of the stressor. DNA Methylation Changes at Fkbp5 in Blood, Hippocampus, and Hypothalamus Another target of interest in the study of epigenetic programming effects following ELS is Fkbp5. Fkbp5 impacts the stress response via an indirect regulation of Gr sensitivity [71,72]. Genetic variation in Fkbp5 as well as changes in Fkbp5 mRNA expression and DNA methylation have been associated with extreme trauma [73], vulnerability to PTSD [54], and chronic stress [74]. Yehuda et al. even reported tentative evidence of transgenerational effects of differential DNA methylation at Fkbp5 intron 7 in blood cells of Holocaust survivors and their offspring compared to controls, possibly indicating a long-term programming effect across generations [73]. Also in blood cells, Klengel et al. found reduced DNA methylation at CpGs associated with glucocorticoid response elements in intron 7 of Fkbp5 in Fkbp5 rs1360780 AG/AA allele carriers with a history of child abuse compared to controls [54]. They validated this finding in hippocampal progenitor cells, in which the same CpGs in intron 7 of Fkbp5 showed the strongest demethylation after treatment with dexamethasone [54]. In contrast, McIlwrick et al. observed reduced baseline mRNA expression levels of Fkbp5 in the PVN of adult HPA hyper-responsive mice that were exposed to ELS compared to controls, but no differences in other brain areas [70]. In sum, individuals exposed to ELS show diverse patterns of DNA methylation at stress-related genes in stress-regulating brain areas (hippocampus and hypothalamus) associated with heightened HPA axis reactivity later in life.
This indicates an epigenetic programming mechanism in stress-related neural networks during a critical period of stress response development (developmental window) [75,76]. The lifetime stability of the epigenetic changes, the type of stress response and behavioral patterns, together with the time-dependence of the induction early in life, indicate an evolutionary and developmental function of the underlying mechanisms. Programming effects seem to result from several parallel pathways, including synaptic activation (see below) and hormonal activation via the global effects of glucocorticoids on the epigenetic machinery (see below). Accordingly, the epigenetic programming effects seem to result from a combination of general epigenetic modifications, such as changes in overall DNA methylation, and stressor-, species-, and tissue-specific modifications. Modifications may even partly contain information about the stressor's physical properties [41], pointing towards a role as a stressor-specific fine-tuning mechanism. The pathways through which ELS affects epigenetic mechanisms in neural networks are still being investigated. With our dual-activation hypothesis, we propose that such highly adaptive, stressor-specific modifications likely depend on additional activation mechanisms based on the sensorial quality of the stressor and that these interact with the stress system. ACTIVITY-INDUCED EPIGENETIC MODIFICATIONS IN NEURAL NETWORKS In neurons, epigenetic mechanisms regulate the differentiation of neuronal stem cells into neurons, astrocytes, and oligodendrocytes [77][78][79]. However, some of the enzymatic processes which regulate epigenetic mechanisms of neuron development are interrelated with enzymatic mechanisms that establish and maintain synaptic function [80,81]. This indicates a possible epigenetic fine-tuning mechanism in neurogenesis that is sensitive to neuronal activity. In addition, neurons undergo significant epigenetic modifications during postnatal brain development [82], and some of these can be induced by synaptic activity [83][84][85][86]. Chromatin modifications are involved in the regulation of axon and dendrite growth [87], and DNA demethylation is discussed as an activity-dependent mechanism of adult neurogenesis [80]. Together, these findings indicate that epigenetic mechanisms function as mediators which coordinate neural and genetic activity in the developing brain by modifying the spatial and biochemical structure of DNA binding sites and their reactivity to transcription factors. Epigenetic modifications also contribute to the molecular underpinnings of neural plasticity and neural network formation [88,89]. For example, neuronal diversity, resulting from activity-dependent spatiotemporal differentiation, is associated with epigenomic differences [90]. DNA methylation and histone modifications facilitate and maintain synaptic plasticity and function [88,91], e.g. via differential methylation of Bdnf promoter regions [92]. Initial studies point to an additional role for RNA interference, the interaction of microRNA molecules with DNA, messenger RNA, or enzymes regulating protein synthesis, in synaptic plasticity [93]. Furthermore, epigenetic mechanisms sensitive to synaptic signals, and especially the interplay of histone modifications and DNA de-/methylation, seem to function as potential molecular underpinnings of memory formation [45,[94][95][96].
Initial studies revealed a role of DNA methylation and demethylation as well as histone modifications in memory formation by affecting several neuroplasticity genes [97][98][99][100][101][102][103]. Thus, synaptic activity is one pathway through which epigenetic modifications are induced in neural networks. In the context of ELS, the best-described mechanism for a synaptically induced epigenetic modification is the programming of stress-related genes and HPA axis function via the Mecp2 pathway, most importantly of Avp in the hypothalamic PVN and Crf in the hippocampus as well as the proopiomelanocortin (Pomc) gene in the pituitary gland [67,68,104]. Neural activity following ELS induces Mecp2 (S241) phosphorylation [68]. This specific type of phosphorylation is associated with reduced binding of Mecp2 to the DNA at the enhancer region of Avp in the PVN. The lack of Mecp2 facilitates DNA demethylation due to decreased recruitment of DNA methyltransferases (Dnmts) during development and subsequently increases the transcriptional activity of Avp in the long term. Zimmermann et al. report a similar mechanism of reduced Mecp2 binding at the promoter region of Crf in the hippocampus [104]. Remarkably, the effect was highly specific in timing (early in life), neural circuit, and gene loci; for example, neither Crf in the PVN nor Avp in the hippocampus were similarly affected [104]. This points towards a highly specific coordination of neural activity and expression of stress-related genes at the epigenetic level. The example of the Mecp2 pathway indicates that such coordination, when taking place during critical periods of neural network formation and methylome reconfiguration, can result in long-term epigenetic programming. The stress sensitivity of Mecp2 even seems to have transgenerational effects. Franklin et al. report, among other epigenetic modifications, an increase of Mecp2 DNA methylation and a decrease of mRNA expression in the cortex of the offspring of mice exposed to a maternal separation and maternal stress paradigm [28]. Mecp2 is critically involved in activity-dependent neuronal plasticity and transcription in the developing brain [60], and the loss of its ability to recognize DNA methylation and repress transcription in Mecp2 mutations has been identified as a cause of Rett syndrome [105]. A role of Mecp2 phosphorylation in learning processes is also discussed [104], with initial results pointing in this direction [106]. Notably, different forms of Mecp2 phosphorylation have been reported to be associated not only with demethylation and enhanced transcription but also with increased binding of Mecp2 to the DNA, increased methylation, and enduring transcriptional repression [104]. Consequently, Zimmermann et al. discuss Mecp2 as a general epigenetic programming protein linking neural activity with DNA de-/methylation and changes in gene transcription [104]. Another pathway could involve the epigenetic regulation of brain-derived neurotrophic factor (Bdnf), since Bdnf gene expression seems to be affected by ELS. Rats reared in a hostile postnatal environment showed hypermethylation and reduced expression of Bdnf in the prefrontal cortex [107]. In a prenatal stress paradigm, the offspring of stressed rat dams showed decreased Bdnf expression in the amygdala and hippocampus and increased DNA methylation at Bdnf exon IV [108]. This finding was also observed in an ELS paradigm using caregiver maltreatment during infancy.
Female but not male rats showed increased levels of DNA methylation at the Bdnf promoter exon IV in the ventral hippocampus and amygdala [109]. A study in chicken showed that dynamic changes of DNA methylation are involved in the regulation of Bdnf expression during postnatal thermotolerance acquisition [110,111]. In a follow-up study, Kisliouk and Meiri found dynamic changes in H3K27 di-methylation (me2) levels in the hypothalamus, including at the promoter of Bdnf, following postnatal heat conditioning [112]. Also, chickens exposed to fasting stress at 3 days of age showed modified histone methylation (di- and tri-methylation) at lysine 27 of histone 3 (H3K27) in a promoter region of Bdnf, and these changes were associated with corresponding changes in Bdnf expression levels [113]. In general, epigenetic regulation of Bdnf has been linked to neuroplasticity and function [98] and has been reported to interact with neural activity [114]. In a mouse model, over-expression of Bdnf prevented stress-induced reductions of dendritic branching in the hippocampus [115]. In addition, Bdnf seems to reverse the reduced excitability of hippocampal neurons induced by stress levels of corticosterone [116,117], and additional activation of neuronal activity in a stressful situation increases Bdnf synthesis, probably to buffer Bdnf degradation caused by stress [118]. In a rat model of depression, Mecp2 was shown to control Bdnf expression in the hippocampus via interactions with microRNA-132 [121]. According to our dual-activation hypothesis, these synaptically induced epigenetic modifications in stress-related neural networks contribute to the long-lasting effects of ELS. However, we suggest that the observed effects result from the interaction of synaptically induced modifications with epigenetic programming effects due to glucocorticoid exposure. EPIGENETIC EFFECTS OF GLUCOCORTICOID EXPOSURE In the case of ELS, the existing evidence indicates that activity-induced modifications act in concert with hormonal epigenetic programming. Together they function as a 'developmental switch' by establishing long-lasting epigenetic modifications during critical developmental periods. For example, sex differences support a hormonal influence on the epigenetic programming effects of ELS. They indicate that sex-specific hormonal signatures in the brain modulate the hormonal effect of glucocorticoids. Mueller and Bale observed in mice that prenatal stress early in gestation increased long-term HPA axis sensitivity in male but not female offspring, and this was accompanied by decreased DNA methylation in the promoter regions of Gr and Crf as well as higher Gr and Crf expression in the hypothalamus [65]. Doherty et al. reported an increase in global DNA methylation and a decrease in genome-wide hydroxymethylation in the dorsal hippocampus and amygdala of adolescent male, but not female, rats exposed to repeated caregiver maltreatment [109]. Furthermore, the critical periods during which the stress response has been shown to be most sensitive to epigenetic programming effects, namely prenatal development, early childhood, and adolescence, are characterized by developmentally relevant hormonal changes [36]. Glucocorticoids have a major effect on organ maturation during embryonic development. Especially their ability to accelerate lung development is well studied, and glucocorticoids are used as ante- as well as postnatal treatment in cases of preterm labor [122].
Other organ systems, including the liver, pancreas, kidney, and heart, show similar effects under glucocorticoid treatment [123]. These maturation processes mainly result in developmental changes in gene transcription and protein synthesis, e.g. of the synthesis of surfactant proteins in the lung. Initial studies show that this is accompanied by functionally relevant modifications of DNA methylation [124,125]. Thus, glucocorticoids seem to have an organizational role during organ development, probably mediated through developmental epigenetic mechanisms. The impact of ELS on epigenetic mechanisms could in part mimic such an effect of glucocorticoids during organ maturation. Either a global effect of glucocorticoids with a subsequent indirect impact on the expression of stress-related (Crf, Avp, Fkbp5) and other genes, or a target-specific interaction is possible. For both mechanisms, initial evidence is accumulating. Genome-wide Effects of Glucocorticoids Some of the findings in ELS studies already point towards a global effect of glucocorticoids on epigenetic mechanisms, reporting genome-wide differential DNA hyper- and hypomethylation in human blood and brain tissue of individuals with a history of ELS [51,126]. Mychasiuk et al. found global DNA hypomethylation in the hippocampus and frontal cortex of Long-Evans rats exposed to strong prenatal stress and DNA hypermethylation in the hippocampus of rats exposed to mild prenatal stress [127]. They also found sex differences in DNA methylation in the frontal cortex for the mild prenatal stress group, with males showing DNA hypermethylation and females showing no effect compared to controls, indicating hormonal buffering of this effect [127]. Mechanistically, Dnmt1, a methyltransferase enzyme involved in maintaining DNA methylation, could mediate a global impact of glucocorticoids. Yang et al. observed a dose-dependent decrease of Dnmt1 expression in pituitary adenoma cells (cell line AtT-20) following dexamethasone treatment as well as a similar decrease in the hippocampus of corticosterone-treated mice [128]. Furthermore, ELS seems to affect histone modifiers actively involved in histone modifications. In a mouse model of ELS (maternal separation), Pusalkar et al. found a significant decrease in the expression of histone acetyltransferases (HAT), histone lysine methyltransferases, and histone deacetylases (Hdacs) in the medial prefrontal cortex [129]. Some of these modifications persisted over 15 months, indicating a relatively stable mechanism. Target-specific Effects of Glucocorticoids A considerable number of genes are directly affected by glucocorticoids. In the context of ELS, Bockmühl et al. report that corticosterone injections in ELS mice led to increased transcription of GC-responsive genes, such as Fkbp5, Dusp1, and Sgk1 [64]. Especially Fkbp5 seems to be involved in the differential regulation of Gr expression following ELS as well as chronic stress. Accordingly, Lee et al. showed that glucocorticoid administration persistently decreased DNA methylation and increased transcription of Fkbp5 in brain and blood cells of mice [130,131]. Yang et al. observed decreased DNA methylation of the intronic enhancer region of Fkbp5 in the dentate gyrus compared to whole hippocampal tissue in mice treated with glucocorticoids [128]. Posttranslational interactions of glucocorticoids may also contribute to the long-term effects of ELS.
For example, glucocorticoids interact in several ways with Bdnf, with interactions depending on the brain area and the presence of other hormones or neurotransmitters. In cortical neurons, glucocorticoids interact with Bdnf through the tyrosine kinase receptor TrkB. Binding of glucocorticoids to Grs downregulates phospholipase Cγ-dependent pathways and the Bdnf-mediated release of glutamate [132]. Furthermore, Jeanneteau et al. found that impairment of Gr function in the PVN resulted in enhanced Crf expression, up-regulated hypothalamic levels of Bdnf, and disinhibition of the HPA axis [133]. Their findings indicate that Bdnf induces Crf expression via the TrkB-Creb signaling pathway. The authors also demonstrated that the differential regulation of Crf in the PVN depends on the cAMP response-element binding protein coactivator Crtc2, which interacts with Bdnf and glucocorticoids to regulate Crf [133]. These findings provide initial evidence of genome-wide effects of glucocorticoids on DNA methylation and histone modifications in different brain areas and of additional target-specific effects within the central regulation of the stress response. Together, they demonstrate the high potential of glucocorticoids to induce epigenetic modifications as well as to modulate posttranslational mechanisms affecting the stress system. In combination with the accelerating effect of glucocorticoids on organ maturation, this indicates a functionally relevant effect of glucocorticoids on developmental epigenetic mechanisms as well as a heightened glucocorticoid sensitivity of these mechanisms during critical periods of brain maturation. This hormonal impact on the epigenetic machinery probably represents another pathway which contributes to the long-lasting effects of ELS. We argue that in critical periods of brain development this hormonal activation interacts with the observed synaptic activation of epigenetic programming effects. These parallel pathways probably prime the epigenetic machinery for long-lasting modifications, which then function as developmental epigenetic mechanisms contributing to divergent developmental pathways. The combined synaptic and hormonal activation may contribute to programming effects not only in the stress system but also in other neural networks. Likely candidates are the developing sensory networks. STRESS AND SENSORY DEVELOPMENT The developing brain is highly sensitive to sensory input. During critical periods, sensory input is necessary for the development of sensory systems and the underlying neural networks (for example, for visual development [134,135]; for auditory development [136,137]; for tactile development [138]). On a neural and physiological level, some clinical and animal studies indicate a functional role of the stress system in experience-dependent sensory network formation across different sensorial modalities. For example, in their study of somatosensory development in preterm infants, Maitre et al. observed that the neural response to touch depended on brain maturation and the sensorial quality of the stimulus, with supportive touch inducing a stronger response compared to painful touch [139]. A possible explanation could be that the stress system buffers the neural response to negative sensations, especially pain. Moreover, this indicates that the stress response could function as a supporting structure for the long-term integration of sensory experiences and their qualitative nature during early neural network formation. Bock et al.
found that, in male White-Wistar rats, repeated maternal separation induced decreased dendritic spine density in the anterior cingulate cortex when the ELS treatment took place prior to the hyporesponsive period of the HPA axis. In contrast, dendritic spine density increased in pups exposed to maternal separation during the hyporesponsive period of the HPA axis. In addition, spine density increased in the somatosensory cortex independent of the time point of exposure to maternal separation [30]. These findings indicate that, during the somatosensory integration of the stressor, the emotional evaluation and valence acquisition depend on the developmental period of the stress system. Several studies by Teicher and colleagues showed stressor-specific structural and functional effects of ELS (childhood maltreatment) on developing neural networks underlying sensory processing [39]. For example, parental verbal abuse was associated with increased grey-matter density in the primary auditory cortex within the left superior temporal gyrus [140] and with alterations in the fiber integrity of language-processing pathways [141]. In contrast, visually witnessing repeated interparental domestic violence during childhood was associated with reduced grey-matter volume in the visual cortex [142] and decreased integrity of the left inferior longitudinal fasciculus, a visual-limbic pathway [143]. In a sample of adult women including individuals with and without a history of child abuse, Heim et al. found that childhood sexual abuse was associated with cortical thinning of the primary somatosensory cortex, specifically in areas of genital representation [144]. In contrast, emotional abuse during childhood was associated with cortical thinning of the precuneus and left cingulate cortex, regions associated with self-awareness and self-evaluation [144]. In addition, Zimmermann et al. point out that the period during which they observed long-term effects of ELS on stress response and behavior in mice is a period of sensory-driven cortical network formation, and that this is compatible with a role of Mecp2 in the modulation of synaptic function in these networks [104]. This points towards a bidirectional interaction, with stress participating in sensory development and sensory input participating in the development of the stress response, although the molecular mechanisms remain to be elucidated. For some sensory networks, a bidirectional interaction with the stress system is well established. For example, the temperature regulation system is strongly interconnected with the stress system [145]. In chicken, Tzschentke et al. demonstrated that mild thermo-stimulation during the last four days prior to hatching, a critical period of thermo-regulation development, improved physical performance and induced long-lasting changes in the thermo-sensitivity of hypothalamic neurons [146][147][148]. During this critical period, established feedback mechanisms are optimized and adapted to environmental conditions [148]. This species-specific sensorial fine-tuning mechanism also interacts with the stress response. Yahav et al. found that 3-day-old chickens exposed to mild thermal stimulation during late embryogenesis (E16-18) exhibited significantly lower plasma corticosterone levels than controls when exposed to a thermal challenge [149].
Together with the observed dynamic changes of DNA methylation and histone modifications at the promoter of Bdnf during postnatal thermotolerance acquisition in chicken [110-112; see above], it is very likely that epigenetic modifications underlie these sensory and stress-related fine-tuning mechanisms. Here, Bdnf seems to contribute to the coordination of critical periods in neural networks. In a mouse model, Bdnf levels have been shown to regulate the critical period during which visual cortex plasticity is sensitive to deprivation [150,151]. Changing Bdnf levels have also been associated with the onset of other critical periods, including the period of sensitivity to variations in early maternal care [152,153]. Initial evidence also indicates that the interaction between stress-related and sensory networks during critical periods affects basic cognitive functions. Sui et al. observed improved performance in a passive avoidance test in 1-day-old chicks exposed to a 12/12 h light cycle during the last days prior to hatching (E19-21) compared to chicks raised in complete darkness [154]. In a second study, they showed that chicks raised in darkness exhibited a similar improvement in test performance when they received corticosterone injections during the same developmental period [40]. The authors conclude that the effects of light exposure on memory performance are mediated by HPA axis activity. In a final follow-up study, they demonstrated that exposure (at E20) to a steroid receptor antagonist (RU486) or a protein synthesis inhibitor (anisomycin) reverses the effects of light exposure or corticosterone injections on memory performance [155]. Again, the effect falls into a critical period of sensorial and cognitive development. Johnston et al. demonstrated that light exposure three days prior to hatching (at E19) affects the development of lateral specialization and cognitive performance, including imprinting memory formation [156]. The long-term stability of these effects as well as their molecular, probably epigenetic, underpinnings remain to be determined. However, the contribution of the stress response to the sensor-specific fine-tuning of sensory networks very likely constitutes one pathway underlying the epigenetic programming effects of ELS. In rare, species-specific cases of environmental adaptation, such an integration via epigenetic programming seems to take place even in adult individuals. Dias and Ressler report a case of epigenetic programming of an olfactory receptor in mice using a fear conditioning paradigm [157]. Fear conditioning with acetophenone resulted in CpG hypomethylation at the Olfr151 gene compared to controls and an enhanced neuroanatomical representation of the Olfr151 pathway in the olfactory epithelium [157]. Changes in DNA methylation at Olfr151 were also found in the sperm of the F0 males exposed to acetophenone and in the sperm of their naive F1 male offspring, indicating an adaptive function across generations [157]. DUAL-ACTIVATION OF STRESS-RELATED AND SENSORY NETWORKS: AN INTEGRATIVE PERSPECTIVE ELS afflicts the organism during a critical developmental period early in life, when the stress system undergoes adaptive changes according to the available environmental input. Epigenetic mechanisms in neural networks have the potential to establish long-lasting adaptations of these networks to environmental signals. The existing evidence, mainly from rodent models, supports the notion that such a programming mechanism contributes to the long-term effects of ELS.
Among the most likely candidates for epigenetic programming of the stress response are hippocampal Gr and Fkbp5 as well as hypothalamic Avp and Crf. Nevertheless, causal pathways linking epigenetic modifications of the stress response following ELS to physiology and behavior later in life remain to be established. One difficulty in establishing definite pathways is the heterogeneity of findings. Epigenetic modifications following ELS vary across species and tissue as well as stressor type, intensity, time-point, and duration [13,33,37]. Several etiological models have been proposed to integrate the existing data. For example, Chattarji et al. emphasize that stress effects on neural activity and function differ vastly across the hippocampus, prefrontal cortex, and amygdala [34]. Although they are concerned with chronic stress, their argument that the behavioral correlates likely result from a combination of area-specific effects also holds for ELS. Specifically for ELS, Bock et al. emphasize that the impact of a stressor and the molecular pathways through which the stress signal is integrated into the epigenome depend on the respective period of species-specific neural development [33]. According to their model, the most sensitive phase is the neonatal and juvenile phase of neural differentiation and synaptogenesis. Furthermore, they point out that hormonally induced and activity-dependent epigenetic modifications interact in differentiating neurons, while in neuronal precursor cells, epigenetic modifications occur only via hormonal influences independent of synaptic activity [33]. Zannas and Chrousos argue for cumulative effects of stress-induced epigenetic modifications over the lifespan [36]. Different stressful experiences during critical and sensitive periods result in genome-wide or target-specific epigenetic modifications, which add to the overall lifetime vulnerability for stress-related diseases [36]. The broad impact of ELS on epigenetic mechanisms affecting brain development and neural network formation, together with the diversity of developmental outcomes including cognitive impairment and affective disorders, suggests that we must consider heterogeneous mechanisms at the molecular level. The occurrence of genome-wide and target-specific epigenetic modifications and the specificity of modifications not only with regard to brain areas but also to genetic targets clearly indicate an interplay of multiple pathways at work. Some of the epigenetic modifications induced by ELS even integrate information about the intensity (strong vs. mild, [109]) and the physical quality of the stressor (high vs. low ambient temperature, [41]). In addition, structural and functional alterations in stressor-specific sensorial networks depend on the sensorial quality of the stressor [39,144], while alterations in stress- and emotion-processing-related networks seem to depend on the developmental period of the stress response. We can integrate these findings and existing models if we assume a dual activation of the epigenetic machinery underlying the observed programming effects after ELS (Fig. 1). According to this dual-activation hypothesis, epigenetic programming is induced via combined synaptic and hormonal activation: in critical periods of neural network formation, stimulus-induced neural activation initiates modifications in the underlying epigenetic regulatory systems.
Parallel induction of acute HPA axis activity leads to hormonal priming of the epigenetic machinery for long-term programming effects via the release of glucocorticoids. Thus, during critical developmental periods, hormonally and synaptically induced epigenetic modifications act in concert to establish the observed programming effects with developmental impact. The summarized evidence suggests that ELS induces these programming effects in stress-related networks, resulting in the diverse epigenetic modifications at stress-related genes observed in ELS studies. However, it is very likely that additional programming effects occur in connected networks and that these contribute to the life-long effects of ELS. Here, sensory networks are the most likely candidates. Not only are they activated during stressor perception, they also partly share a critical developmental period with the stress system early in life. Furthermore, the role of glucocorticoids in organ maturation and in experience-dependent sensory development clearly indicates a distinct influence of the stress system on sensorial networks. In addition, the findings of stimulus-specific epigenetic programming effects [41,109] and stimulus-specific variations in neural network functionality [39,144] also point towards the participation of sensory networks. Therefore, we assume that synaptic and hormonal activation of the epigenetic machinery together result in stressor-specific epigenetic programming within the activated stress-related and sensory neural networks. Alterations may not only occur in the stress response but also in sensorial and related cognitive and emotional processing (Fig. 2). This allows stressor-specific adaptations but may also lead to functional mal-adaptations, depending on the timing and intensity of the stressor. Findings in clinical populations support stressor-specific outcomes and a role of sensory systems in symptomatology and functional impairment.
Fig. (1). During critical periods of neuronal development, early life stress leads to synaptic and hormonal activation of epigenetic programming mechanisms in stress-related and sensory networks. Dual-activation sensitizes the epigenetic machinery for long-term modifications and potentially allows for stressor-specific adaptations. Two exemplary pathways within stress-related networks are shown in detail: (1) Neural activity in stress-related networks induces functionally specific phosphorylation of Mecp2. This leads to reduced recruitment of Dnmts and Hdac and subsequently reduced DNA methylation at the specific gene site during development [68]. (2) Increased glucocorticoid exposure in critical periods leads to Gr-induced target-specific or global hypo- or hypermethylation (only global hypomethylation depicted, [51,126,127]). The Gr potentially interacts with epigenetic mechanisms via several pathways; the exact mechanisms are still unknown.
Fig. (2). In critical periods of neural network formation, early life stress activates the HPA axis via stressor-specific sensory networks. The induced synaptic activation initiates modifications in the underlying epigenetic regulatory systems of stress-related and stressor-specific sensory networks. In addition, acute HPA axis activity induces further epigenetic modifications via the release of glucocorticoids.
Neural activity and hormonal background act in concert to maintain stimulus-specific functional modifications in stress-related and sensory neural networks. Due to the bidirectional interaction of stress-related and sensory networks, long-term alterations not only occur in the stress response but also in sensorial and related cognitive and emotional processing.
For example, patients suffering from mental illness who have a history of ELS differ in symptomatology and diagnosis according to the type of ELS [7]. In addition, differences in sensory processing are associated with symptom severity, treatment outcome, and functional impairment across a wide range of affective and psychotic disorders [158,159]. Although it is not clear whether these sensory dysfunctions are a cause or an effect of the disorder, their contribution to symptomatology and the course of illness indicates a potential vulnerability in the underlying neural networks. These likely do not differ much between specific diagnostic categories but instead underlie a symptomatic spectrum [160]. Further work is needed to elucidate how these differences in sensory processing relate to higher cognitive functions and emotional processing. Some indications for a bidirectional relationship can be drawn from clinical intervention studies. For example, Dale et al. showed that auditory training reduced some of the cognitive dysfunction in schizophrenia patients and that this was driven by plasticity in auditory cortical areas [161]. With our dual-activation hypothesis, we assume that the interplay between epigenetic programming effects in stress-related and sensory networks during critical developmental periods of these networks contributes to the diverse symptomatology following ELS and should be included in bottom-up characterizations of clinical diagnoses. The dual-activation hypothesis has clinical implications for the treatment of mental disorders following ELS and of stress-related sensory dysfunction, and for the treatment of preterm neonates. Assuming an interplay between ELS and the sensorial quality of the stressor, the treatment of mental disorders following a history of ELS should not focus on the stress system alone. Instead, stressor-specific treatment could enhance therapeutic outcomes. Of note, Weaver et al. reported that cross-fostering to high LG rat dams reversed the epigenetic effects of ELS in the offspring of low LG dams [56]. Although in humans the sensorial quality of ELS is likely more complex, clinicians should assess cognitive and sensory dysfunctions and, where appropriate, address them in therapy. The feasibility and relevance of such an approach have already been demonstrated for deficits in auditory perception in schizophrenia patients [162]. Cognitive training targeting auditory and verbal learning improved sensory responses in the auditory cortex and the engagement of prefrontal regions, and the improvement was associated with better executive functions [161]. Furthermore, linking stress-related and sensorial networks during early brain development can lead to the discovery of pathways and mechanisms underlying stress-induced sensorial dysfunction, such as tinnitus, hearing loss, loss of sight, and psychosomatic pain. A growing body of evidence shows that chronic pain is a common symptomatology in individuals with a history of early life stress [163]. Finally, the dual-activation hypothesis supports therapeutic approaches in the treatment of preterm neonates which include positive sensorial stimulation, such as skin contact, as part of the postnatal parent-child interaction.
Experimental approaches further elucidating the mechanisms underlying the dual-activation hypothesis need to combine animal studies, longitudinal studies following children exposed to early life stress, and clinical intervention studies. Longitudinal studies of children exposed to early life stress which register sensory deficits in addition to cognitive and behavioral development, emotional processing, and potential psychiatric symptoms later in life would be highly informative. Also, intervention studies addressing potential deficits in sensory perception and processing in clinical populations would further elucidate the role of sensory systems in mental illness. One example is the auditory training in schizophrenia patients mentioned above. Both types of human studies could include neuroimaging to address functional differences in ELS populations [39]. However, the identification of the underlying epigenetic mechanisms depends on appropriate animal studies. Such animal studies would need to differentiate between the different sensorial qualities of potential stressors. The currently most frequently used stress paradigm in rodent models, maternal separation with or without additional maternal stress, represents a multi-sensory stressor including at least temperature changes, lack of tactile stimulation, and stress vocalization. In contrast, studies should also focus on stressors with only one sensory modality, e.g. temperature, (predator) odor, acoustic stressors (stress vocalizations by other individuals or predator sounds), etc. Differences in behavioral outcomes would indicate stressor-specific developmental pathways, and thus modulation via sensory networks. Furthermore, potential epigenetic modifications need to be analyzed not only in stress-related networks but also in the respective sensory networks. It might even be more likely to identify adaptive changes in sensory networks, as their adaptation is specific to the stressor. For example, Dias & Ressler [157] demonstrated such an epigenetic modification in the olfactory system using a fear conditioning paradigm, which is known for its interaction with the HPA axis [164]. In addition, studies that contrast stress exposure during critical periods of the stress system and of the targeted sensory systems, as well as during an overlapping developmental window, would be of strong interest. This is in accordance with the suggestion of Bock et al. that the pathways through which epigenetic modifications are induced depend on the developmental period [33]. CONCLUSION Overall, the observed epigenetic programming effects of ELS are very likely not limited to the modulation of the stress response. Parallel programming effects probably occur in sensory networks activated by the stressor, depending on its sensorial quality. In addition, sensory and stress-related networks both participate in cognitive and emotional processing and in memory formation. Neural projections from sensory and stress-related networks to prefrontal and limbic areas may initiate epigenetic programming in the respective systems, which aligns with the notion of 'systems heritability' (see above) and likely adds to the long-term effects of ELS. Future research needs to further elucidate these potentially bidirectional epigenetic programming effects in stress-related and other neural networks using experimental models able to differentiate between stressors of different sensorial modalities.
CONFLICT OF INTEREST The author declares no conflict of interest, financial or otherwise.
Parkinson Network Eastern Saxony (PANOS): Reaching Consensus for a Regional Intersectoral Integrated Care Concept for Patients with Parkinson's Disease in the Region of Eastern Saxony, Germany As integrated care is recognized as crucial to meet the challenges of chronic conditions such as Parkinson's disease (PD), integrated care networks have emerged internationally and throughout Germany. One of these networks is the Parkinson Network Eastern Saxony (PANOS). PANOS aims to deliver timely and equal care to PD patients with a collaborative intersectoral structured care pathway. Additional components encompass personalized case management, an electronic health record, and communicative and educative measures. To reach an intersectoral consensus on the future collaboration in PANOS, a structured consensus process was conducted in three sequential workshops. Community-based physicians, PD specialists, therapists, scientists and representatives of regulatory authorities and statutory health insurances were asked to rate core pathway elements and supporting technological, personal and communicative measures. For the majority of core elements/planned measures, a consensus was reached, defined as an agreement by >75% of participants. Additionally, six representatives from all partners involved in the network design independently assessed PANOS based on the Development Model for Integrated Care (DMIC), a validated model addressing the comprehensiveness and maturity of integrated care concepts. The results show that PANOS is currently in an early maturation state but has the potential to comprehensively represent the DMIC if all planned activities are implemented successfully. Despite the favorably high level of consensus regarding the PANOS concept and despite its potential to become a balanced integrated care concept according to the DMIC, its full implementation remains a considerable challenge. Introduction There is an increasing awareness of the importance of integrated care concepts (ICCs) for vulnerable populations of elderly patients with chronic diseases [1]. Patients with Parkinson's disease (PD) are a prime example of such a vulnerable patient population. The disease often has a decade-long course, along which patients experience increasing and evolving symptoms with varying responsiveness to available therapeutic options, often accompanied by the need for complex therapies such as deep brain stimulation (DBS) or continuous infusion therapies [2]. In addition, PD is the second most common neurodegenerative disease, and patient numbers are expected to double within 25 years [3,4]. Due to this, there is an increasing number of national and international ICC initiatives for PD patients [5]. PD care in Germany involves a large number of different healthcare providers and is hindered by fragmentation and a lack of communication and coordination [6]. Although defined as a standard of care, access to specialized treatment is highly limited, aggravating imbalances in care access and hampering the efficient delivery of individualized, multiprofessional care [7]. Eastern Saxony is a German region with approximately 50% of its population of 1.9 million living in rural areas and only one major city with a university hospital (Dresden, 530,000 inhabitants) [8,9]. In addition, the region is in a demographic transformation process and has the oldest population of all German regions (mean age 46.2 years) [9].
Depending on the district, up to 40% of PD patients do not have regular access to neurologists or PD specialists, and up to 56% of all PD patients admitted to Dresden University Hospital are emergency cases (own calculations based on hospital admission information and on secondary health data from the biggest regional statutory health insurer, AOKPLUS). This has led to a situation where core objectives in PD care are not met, such as timely and equal access to PD specialists and neurologists or the avoidance of disease-related complications and emergency admissions. Since 2017, a multiprofessional team consisting of community-based physicians, PD specialists, medical therapists, patients, experts in the design and evaluation of ICCs, representatives of statutory health insurances (SHI) and local medical authorities has collaboratively developed a concept for a multimodal intersectoral ICC for PD patients, named Parkinson Netzwerk Ostsachsen (Parkinson Network Eastern Saxony; PANOS). To locate the region and its participating centers, please refer to Figure 1. Theoretical frameworks on ICCs and practical experiences stress the importance of a multidimensional strategy that addresses various aspects ranging from the definition of care delivery and quality standards, roles and tasks, to education and engagement measures for patients and healthcare providers [10,11]. The PANOS concept incorporates several of these multidimensional aspects and follows a sequential implementation strategy adapted to the regional healthcare context. It starts with the implementation of an intersectoral care pathway as the core and basis for standardized healthcare delivery (for details on the care pathway, see Table 1 and Figure 2). This care pathway is to organize and standardize the collaborative work of physicians at three specialized hospital-affiliated outpatient centers located in Dresden, Meißen and Hetzdorf and of community-based neurologists and general practitioners (GPs). Figure 1. Characteristics of the intervention region Eastern Saxony. Germany is shown in light blue, the state Saxony in dark blue, and the intervention region Eastern Saxony in red. Within the intervention region, the three specialized hospital-affiliated outpatient centers are shown that will serve as the structural backbone of Parkinson Network Eastern Saxony (PANOS). The table on the right side gives population characteristics of the six districts within the intervention region. Eastern Saxony has a population of 1.9 million people, of which approximately 15,000 have Parkinson's disease (PD). * General population numbers were taken from public statistical resources (https://www.statistik.sachsen.de/html/bevoelkerungsstand-einwohner.html). PD cases were calculated based on secondary health data from the biggest local statutory health insurer (AOKPLUS). Criteria were: International Classification of Diseases 10th revision (ICD-10) G20.x and prescription of dopaminergic medication as a validation criterion. The resulting prevalence of 786.69/100,000 matches that of another recent Germany-wide epidemiologic study based on secondary health data [12,13] and was the basis for the § calculation of the number of patients per general practitioner (GP)/community-based neurologist. # Numbers of general practitioners (GPs) and of neurologists were provided by the Association of SHI Physicians. Row "Total" gives the summed numbers for all six districts in the intervention region.
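The arithmetic described in the figure legend can be illustrated with a minimal sketch; only the prevalence of 786.69/100,000 and the regional total of roughly 1.9 million inhabitants (yielding about 15,000 patients) are taken from the text, while the district populations and provider counts below are hypothetical placeholders, so the printed ratios will not exactly reproduce the published per-district figures.

```python
# Minimal sketch of the calculation in the Figure 1 legend:
# PD cases = population * prevalence, then cases per GP / per neurologist.
# Prevalence is taken from the text; populations and provider counts are
# illustrative placeholders, not the published district data.
PREVALENCE_PER_100K = 786.69

districts = {
    # name: (population, number_of_GPs, number_of_neurologists)
    "Dresden (illustrative)":       (556_000, 420, 35),
    "Mittelsachsen (illustrative)": (305_000, 230,  7),
}

for name, (population, gps, neurologists) in districts.items():
    cases = population * PREVALENCE_PER_100K / 100_000
    print(f"{name}: ~{cases:.0f} PD patients, "
          f"~{cases / gps:.0f} per GP, ~{cases / neurologists:.0f} per neurologist")

# Regional total quoted in the text: 1.9 million inhabitants -> ~15,000 patients
print(f"Region: ~{1_900_000 * PREVALENCE_PER_100K / 100_000:.0f} PD patients")
```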
Whereas the average number of PD patients per GP varies little between urban and rural districts (14 patients/GP), there was a huge variation in the average number of PD patients per neurologist between rural areas (up to 360 patients/neurologist in Mittelsachsen) and the city of Dresden (126 patients/neurologist). The implementation process will be accompanied by several supportive technical, professional and communicative measures (Table 1 and Figure 2). In order to reduce implementation complexity, the ICC will be restricted to the region of Eastern Saxony during the first phase (Figure 1) and will focus on the coordination of care-providing physicians and case managers. The structured integration of further involved healthcare providers (e.g., physiotherapists) will be addressed after a successful implementation of the core elements described here. PANOS will focus on the most vulnerable PD subpopulations, defined as: 1. patients in the transition phase from the early, uncomplicated disease stage (honeymoon) to advanced disease stages 2. patients in advanced disease stages 3. patients with an unsecured diagnosis and in need of specialist involvement in diagnosis finding, irrespective of the disease stage.
Table 1. Care pathway elements and supportive measures:
1. Registration: To assure timely and equal access for all eligible patients, registration has to be low-threshold, with easy-to-understand clinical registration criteria and registration rights for all community-based neurologists, GPs and patients themselves.
2. Pre-consultation patient self-monitoring: Prior to a consultation at one of the specialized centers ("Parkinson center"), patients will receive standardized self-monitoring packages to ensure the availability of relevant patient information.
3. Triage: Based on the pre-consultation self-monitoring, patients will be triaged according to the criteria of urgency and expected therapeutic complexity.
4. Specialist consultation: Tasks and responsibilities will be clearly defined and assigned among center staff to allow physicians to focus on the medical core aspects.
5. Individualized ongoing intersectoral care plan: Following a consultation, specialists are to plan the relative contribution of the Parkinson center and the treating community-based physician on an individual patient-to-patient basis.
6. Repetitive patient self-monitoring: All patients will receive self-assessment monitoring packages at quarterly intervals to allow for timely detection of changes in condition.
7. Consultation with community-based physician: As planned by the individualized ongoing intersectoral care plan, patients are seen by their community-based physician, whose responsibilities are defined by the care pathway. If indicated, the physician can prompt changes in the treatment plan, e.g., by asking for an intensified contribution of the Parkinson center.
Supportive personal, technical and communicative measures:
1. Electronic health record (EHR): All patient-related information will be recorded in a collaborative electronic patient management/documentation platform that all involved healthcare providers have access to.
2. Intersectoral specialized case management: A team of case managers who specialize in PD care will be the personal backbone of the Parkinson Network Eastern Saxony (PANOS). They will serve as an individual patient's care coordinator and as the first contact person for the patient and all involved healthcare providers. In PANOS, they will additionally be responsible for network management activities, for carrying out the structured patient school, and for supporting physicians in Parkinson centers and private practices.
3. Active network management: Ongoing mobilizing initiatives to promote the motivation of community-based physicians to become active collaborators.
Concept Development Overview The PANOS integrated care concept was developed in a sequential interprofessional collaborative effort. This was realized by a series of six workshops (WS) in addition to numerous smaller workgroup meetings. The first three workshops were conducted as semistructured organized conferences to develop the core concept. The subsequent three workshops were executed as a structured consensus process in compliance with recommended scientific methodology [14]. Structured Consensus Process Three iterative structured workshop meetings were conducted between January and June 2020, of which two were face-to-face and one was an online meeting due to the SARS-CoV-2 lockdown restrictions. Meetings were free of charge, and travel and other expenses were covered by project funds.
To ensure an equal level of information, participants were provided in advance with content material about PANOS in the form of a condensed illustrative summary that had been developed in the three former semistructured workshops. Preceding online surveys were carried out to retrieve participants' broader perceptions and expectations regarding the status quo, challenges and potential benefits of an ICC. Workshops were conducted following a three-stage procedure: 1. An initial input session in plenum provided participants with detailed information about the specific aspects/modules of the PANOS concept to be consented; a short discussion round followed to clarify potential misunderstandings. 2. To facilitate low-threshold, in-depth discussions, the panel was then split into three small moderated discussion rounds.
The discussion rounds were guided by previously specified open questions, covered the content of the current workshop and also included the non-consented aspects from the preceding workshop. 3. In the final session, Tele-Dialog votings (TED votings) were conducted to reach a consensus on relevant aspects of PANOS. Most TED questions were formulated prior to the workshops, but new aspects from the discussion rounds were included at the discretion of the organization committee. The TED voting questions for the structured consensus purposely focused on those aspects of PANOS that were rated as the most relevant for the later intersectoral and multiprofessional collaborative work. Thus, these aspects were covered more intensively than other areas deemed less relevant to collaborative work (e.g., work organization within the centers).

Data Analysis and Presentation

Based on the guidelines for structured consensus processes by the Association of the Scientific Medical Societies in Germany, a consensus was defined as an agreement of >75% [14]. The results were analyzed using descriptive statistics. All TED voting results are presented in the present paper. All items were assigned acronyms to match the respective consented item in the tables to its reference in the text (Table 2). Due to space restrictions, only a selection of the total of 165 online voting results is included, chosen by an independent three-level rating by the authors (for all online voting results, see Supplementary Table S1). Group discussion rounds were protocolled during the session, audio recorded and transcribed.

Application of the Developmental Model for Integrated Care (DMIC)

Six key representatives from all project partners actively involved in the conceptual design process performed an independent online rating of PANOS according to the DMIC. The results were analyzed according to the established analysis protocol of the DMIC [1].

Structured Consensus Process

The group size was comparable in workshops 1 and 2 (34 and 39 participants) and was smaller in workshop 3 (29 participants) due to the SARS-CoV-2 situation and the special event format. A total of 228 items was presented to participants in the course of all three workshops, 165 thereof in preceding surveys and 62 in TED votings. For the domains of the PANOS concept covered, please refer to Table 2. As part of the online votings, participants were asked to rate the challenges they perceive in daily usual care (Figure 3). Timely access to specialized care, one primary goal of PANOS, was rated as the most relevant current barrier, followed by insufficient education of healthcare professionals and by the lack of interprofessional cooperation and communication, associated with a lack of appropriate technical infrastructure for collaborative work. Most of these regionally recognized problem areas are in line with both patient- and expert-based perceptions of relevant areas of improvement in PD care throughout the world [15-17].
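As a minimal illustration of the consensus rule described above (agreement of more than 75% counts as consensus), the following sketch computes approval shares from TED voting counts. The item labels and vote numbers are invented for illustration and are not results from the workshops.

```python
# Sketch: flagging consensus for TED voting items using the >75% agreement rule
# described in the text. Item names and vote counts below are purely illustrative.

CONSENSUS_THRESHOLD = 0.75

votes = {
    # item: (yes_votes, total_votes)
    "REG-example": (27, 29),
    "ATR-example": (18, 29),
}

for item, (yes, total) in votes.items():
    share = yes / total
    status = "consensus" if share > CONSENSUS_THRESHOLD else "no consensus"
    print(f"{item}: {share:.0%} approval -> {status}")
```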
In the following sections, the workshop results are clustered according to the main domains of the PANOS concept (for an overview, see Table 1 and Figure 2). A short description is provided, followed by the related results from the online surveys (if applicable) and the consensus results.

1. Patient registration (REG)

In order to allow for timely and equal access, easy-to-understand, low-threshold clinical registration criteria were formulated under consideration of the international expert-based consensus processes for the definition of advanced PD [18] (Table 3; Table 4, REG2). However, these criteria were adapted to be easily understandable for both patients and GPs, and to be more sensitive to patients in the early transition phase from early to advanced PD. Since up to 40% of PD patients in the region of Eastern Saxony do not have access to a community-based neurologist, registration must also be possible for GPs (Table 4, REG5). Registering physicians must provide information about urgency. Professional registration will be carried out via the web-based electronic health record (EHR) (Table 4, REG1), but an additional paper-based registration and a patient self-assessment tool will be provided (Table 4, REG3, 4). Since the varying motivation of >1,300 potentially registering physicians in Eastern Saxony could present a considerable bottleneck, self-registration may exceptionally be undertaken by patients themselves (Table 4, REG3, 4). However, patient self-registration raised concerns and was only consented after more detailed explanations were provided. The major concern was that it could introduce an uncontrollable bias to physician-based eligibility selection. The following procedural steps enabled the consensus: the case manager reviews all (self-)registrations and always informs the treating community-based physician in case one of his/her patients chooses self-registration. In case of dissent about eligibility, indicated urgency, or self-registration, the case manager will try to achieve an interprofessional consensus (collaborative consensus principle of PANOS) (Table 4, REG6). Registered patients are to be distributed between the three Parkinson centers according to their zip codes. Deviation from this principle might occur in case a strong imbalance in patient load develops between the three centers (Table 4, REG6).

2. Pre-consultation patient self-monitoring and baseline information collection (PCM)

Within PANOS, patients are to assume an active role in their own healthcare. In order to prepare patients for this more active role, standardized patient education will be offered to promote self-management capacities (see below).
One important self-management competence is self-monitoring [19]. Before their first consultation in a Parkinson center, patients will be asked to fill in a standardized self-monitoring package at home, containing validated self-assessment tools for motor and non-motor symptoms and for psychosocial health domains (Table 4, PCM1). In addition, patients will be asked to contribute as much as possible to the gathering of their medical history as the baseline for the ongoing longitudinal care within PANOS. Patients will be supported by their individually assigned case manager, and several iterations might take place, as long as the patient's condition and the urgency indicated by the registering physician permit this. The rationale for this approach is both to involve the patient as an active partner and to allow for an efficient collection of standardized health-related information. Self-monitoring packages will first be provided as machine-readable paper-based questionnaires in order not to exclude patients without sufficient digital competence. However, electronic patient self-monitoring (e.g., app-based), as well as sensor-based monitoring, is envisaged as a near-term expansion stage of PANOS. Returned information will be processed semiautomatically and integrated into the EHR.

3. Triage (TRI)

Based on the comprehensive information gathered by the patient-based pre-consultation monitoring, all consultations will be organized via a triage system with the criteria of urgency (emergency, urgent or regular) and expected complexity with its associated time need (Table 4, TRI1). Although a clear goal of PANOS is to promote outpatient care, inpatient hospital admissions can be initiated if required by the patient's condition.

4. Structured specialist consultation

Consultations with PD specialists in Parkinson centers will be based on a standardized process with clearly defined responsibilities among the center staff in order to ensure efficient workflows. The consultation duration and agenda are determined by the preceding triage process and the information available from the pre-consultation patient self-monitoring. Consultations will be divided into a non-medical visit with the case manager personally assigned to the individual patient and a subsequent medical consultation with the PD specialist. The case manager completes missing monitoring information together with the patient and performs additional professional tasks as assigned. The specialist can then base his/her consultation on this prior work contributed by the patient and the case manager. The entire workflow and all required documentation will be reflected in the EHR. Work organization during structured consultations within the Parkinson centers, albeit discussed in the workshops, was not a subject of the structured consensus because of the deliberate focus on the intersections relevant to intersectoral collaborative work organization.

5. Individualized ongoing intersectoral care plan (ICP)

At the end of each center consultation, specialists are to suggest an individualized ongoing care plan for the following 12 months to all other involved healthcare providers and the patient (Table 4, ICP1). As part of this care plan, the following aspects have to be determined: (a) frequency, time frames and relative professional contribution (community-based physician vs. specialist) to future scheduled outpatient consultations within PANOS.
Depending on the individual patient's condition, all distributions are possible, ranging from 100% care provision by community-based physicians (for patients still in the early transition phase) to 100% care provision by Parkinson centers (for patients with complex therapeutic needs or important complications). (b) Structured eligibility assessment for inpatient rehabilitation programs. (c) Recommendations for the frequency and therapeutic objectives of active therapies (physiotherapy, occupational or speech therapy). (d) Individualization of the content or frequency of the predefined packages of the repetitive patient self-monitoring. This individualized ongoing care plan is understood to be an important instrument for efficient resource allocation. The low-threshold clinical registration criteria imply that patients with important variations in therapeutic needs will be treated in PANOS. Without an element for individualized, need-adjusted care intensity within the standards of the care pathway, economical care delivery would be severely compromised. Care plans as suggested by PD specialists will be shared via the EHR. In case of dissent, all involved healthcare providers can suggest changes until a mutual consensus is reached.

6. Repetitive patient self-monitoring (MON)

All patients registered in PANOS will be asked to complete quarterly standardized self-monitoring packages. In addition to the general arguments for self-monitoring given above, the quarterly repetitions are to function as a safeguard for the early detection of relevant changes in patient condition, independent of the individualized ongoing care plans. Different content volumes will be defined for every 3, 6, and 12 months. Both frequency and volume can be individualized to exceed the predefined minimal monitoring standard if a patient's specific situation warrants this. The monitoring results are recorded in the EHR and can be accessed by all relevant EHR users. Having repetitive, detailed information about a patient's condition entails a greater responsibility to take timely action. In order to assure this and not to overburden community-based physicians, the main responsibility will be taken by the staff of the Parkinson centers (Table 4, MON3). Community-based physicians need explanations of the instruments and of how to interpret the results displayed in the EHR (Table 4, MON2). The option for community-based physicians to adjust the monitoring packages was not consented (Table 4, MON1). These results are in line with the preceding split-group discussions. GPs in particular expressed the concern of potentially being overburdened by the more detailed insights into a patient's condition provided by the repetitive monitoring.

7. Structured consultation with community-based physicians (CBP)

For all patients who do not require regular specialist consultations, the individualized ongoing care plan might envisage all or the majority of ongoing care to be provided by the collaborating community-based physician (e.g., for patients in the late stages of their honeymoon phase, or early stages of the transition phase). In order to enable collaborative work, all healthcare providers will have full access to the EHR, and their tasks will be defined by a structured workflow integrated into the EHR. Since both the extent of standardized responsibilities and the related design of data visualization will have an impact on the willingness to become an active contributor to PANOS, both aspects received substantial coverage.
Even if registered in PANOS, not all patients will have access to community-based neurologists, and therefore some GPs will become active long-term contributors in collaboration with the Parkinson centers. It is therefore of high importance to understand and meet the needs of a diverse group of potential active contributors. As part of the online surveys, participants were asked to prioritize the monitoring of different PD-related symptoms and of functional and psychosocial health-related domains (Figure 4). By rating motor symptoms as the most, and PD-related quality of life (QoL) as the second most, important aspect, participants recognized the importance of monitoring different levels of health in PD, ranging from symptoms through functions to the overall impact on the patient's QoL [20]. In general, a lower relevance was attributed to psychosocial health-related domains such as role or emotional functioning. Regarding the work distribution between Parkinson centers and community-based physicians, the maintenance of the medication plan was consented to be a responsibility of community-based physicians (Table 4, CBP1), but not the conduction of standardized clinical tests, such as the MoCA (Table 4, CBP2). This was rather agreed to be a responsibility of the center-affiliated case managers (see intersectoral specialized case management below).

8. Additional time requirements of community-based physicians

Additional time requirements for community-based physicians are to be expected due to participation in PANOS. In order to account for this, there will be a supplementary payment of EUR 25-35. Acceptable additional time requirements were discussed in the split-group discussion rounds, and averaged acceptable durations from these discussions were included in subsequent TED votings. As had already become evident in the discussion rounds, there was huge variability in the time spent on an individual PD patient's community-based care, accompanied by a large variation in the acceptable additional time requirements. Therefore, no overall consensus could be achieved (Table 4, ATR1-7). However, under consideration of the supplementary payment, it was consented that each neurologist consultation could be extended by an additional 15 min (Table 4, ATR6) and that there could be up to one additional quarterly consultation (Table 4, ATR7).

Supportive personal, technical and communicative measures

1. Electronic health record (EHR)

An EHR tailored to the specific use case of PANOS is regarded as a crucial basis for efficient and truly collaborative structured intersectoral care. The EHR will be a web-based application that not only visualizes all relevant clinical information (e.g., medical reports, diagnostic test results, results of the repetitive patient self-monitoring), but also defines workflows and associated tasks as part of the standardized care pathway. Where relevant and feasible, interoperability with other EHRs is planned (e.g., the online communication and billing system of the German Association of SHI physicians, KV-SafeNet). Both the definition of acceptable work packages and a high-quality user interface (UI) will have an important impact on efficiency, quality of care and the professional motivation to become a reliable contributor.
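To illustrate the kind of workflow-plus-documentation structure described for the EHR, here is a minimal sketch of how care-pathway tasks could be represented and assigned to roles. This is an illustrative model only; the class and field names are invented and do not reflect the actual PANOS EHR implementation.

```python
# Sketch: a minimal, illustrative data model for care-pathway tasks in a
# collaborative EHR. Names and fields are hypothetical, not the PANOS schema.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class PathwayTask:
    step: str                 # e.g., "triage", "specialist consultation"
    responsible_role: str     # e.g., "case manager", "community-based neurologist"
    due: date
    done: bool = False

@dataclass
class PatientRecord:
    patient_id: str
    urgency: str              # "emergency" | "urgent" | "regular", as in the triage criteria
    tasks: List[PathwayTask] = field(default_factory=list)

    def open_tasks(self, role: str) -> List[PathwayTask]:
        """Return the pending tasks assigned to a given role."""
        return [t for t in self.tasks if t.responsible_role == role and not t.done]

# Illustrative usage
record = PatientRecord("example-001", urgency="regular")
record.tasks.append(PathwayTask("pre-consultation self-monitoring", "case manager", date(2020, 9, 1)))
record.tasks.append(PathwayTask("specialist consultation", "PD specialist", date(2020, 9, 15)))
print([t.step for t in record.open_tasks("case manager")])
```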
The relevance of different types of medical information to be provided in the EHR was assessed, with the medication plan ranking highest (Figure 5). Despite the overall acceptance of the concept of repetitive monitoring, the provision of the respective results had the lowest priority of all the medical information assessed. All contributing community-based physicians will actively use the EHR as the basis for standardized care and documentation (Table 5, EHR 1-4). [Fragment of Table 5: contribution concerning "cognitive function" (e.g., memory), 93%, yes; EHR7 "physical function" (motor abilities), 86%, yes; EHR8 "physical function" (non-motor abilities), 86%, yes; EHR9 "emotional function" (e.g., depression), 86%, yes; EHR10 "pain", 86%, yes.]
Largely corresponding to those health-related dimensions where the availability of information was valued as most meaningful (Figure 4), community-based physicians were also expected to actively contribute to its collection (Table 5). In line with this, a contribution of community-based physicians was consented for functional domains (Table 5, EHR 5-9), in contrast to the contribution for psychosocial domains (Table 5, EHR 11, 14). An exception was the information about QoL, which was rated as of the second-highest relevance (Figure 4), but for which a contribution to information collection by community-based physicians was not consented (Table 5, EHR 12, 13). The assessment of QoL was rather regarded as a responsibility of the case managers.

2. Intersectoral specialized case management (ICM)

Case managers are pivotal for PANOS, as they provide the major linkage between the outpatient sector and the Parkinson centers. The team of case managers will represent the personal backbone of PANOS, with an array of core responsibilities (Table 6, ICM 1-7; Table 4, MON 3):
• to be the long-term single contact person for individually assigned patients and for all of their healthcare providers;
• to perform patient home consultations and home-based social assessments;
• to assure the structured availability of required clinical information, with a special focus on the repetitive patient self-monitoring;
• to assure timely reactions of the Parkinson centers in case of relevant changes in the monitoring results;
• to plan and execute measures for the active network management;
• to execute the patient education program after adequate training (train-the-trainer principle);
• to execute continuous quality control according to the quality management concept (see below).

Case managers will be prepared in a modular training program before taking up the diverse tasks mentioned above. In order to support community-based physicians in their work in PANOS, services can be requested from case management (Table 6, ICM 6).

3. Active network management (ANM)

Mobilization and motivation of community-based physicians is an essential prerequisite not only for assuring equal and timely access for eligible patients, but also for the collaborative concept of an intersectoral care provision partnership. This will be addressed by several strategies, including the structured acquisition of participants through personal on-site consultations, information updates about the network status, educational symposiums, project discussion groups, topic-specific workgroups, stakeholder meetings and regional quality circles (Table 7, ANM 1, 5, 6). No consensus could be achieved on an adequate frequency of plenary meetings. [Table 7 excerpt: "Can you imagine taking part in your own regional PANOS quality circle, i.e., actively participating in working groups in the network?" - 83% yes (consensus). Consensus was achieved if there was >75% approval. * ANM: Active network management.]

Structured patient education program according to the self-management concept (EDU)

Structured patient education is deemed an essential core measure within PANOS. Not only do PD patients give self-management education the highest priority when asked about their expectations in the context of ICCs [15,17], but expert panels also deem self-management education measures indispensable for healthcare delivery in chronic long-term conditions [21].
This especially holds true for concepts such as PANOS, where patients are assigned an active role in their own care. The more empowered patients are to take on this role, the better the chance that they can become meaningful active contributors to their own care [22]. The self-management concept implies that PD-related knowledge is only one of several skills a patient has to master in order to adopt and maintain health-promoting self-management behaviors. Developing beliefs of sufficient self-efficacy, as a core mediator of self-management behaviors, as well as additional skills such as action planning or adequate resource utilization, are indispensable for this [23]. Generally, a multi-modular, small-group, in-class setting is used for implementation. Informal caregivers are mostly included, since most patients rely on them in order to perform self-management behaviors successfully [24]. In Sweden, a sustainable nationwide implementation of a structured program for PD patients according to the self-management concept has been achieved in recent years, including some evidence for its efficacy [25,26]. In order to allow for a timely implementation within PANOS, the concept of the Swedish National Parkinson School will be adopted together with the Swedish program initiators. Participants were asked to rate the importance of potential curriculum topics. The highest relevance was given to a knowledge domain (adverse drug events), followed by self-management behaviors such as coping with emotional impact (Figure 6). Both the general concept of a structured program according to the self-management concept and several knowledge domains were consented (Table 8, EDU 1-7). [Table 8 notes: a consensus was achieved if there was >75% approval. * EDU: Structured patient education program according to the self-management concept. ** Open to patients and caregivers; 7 units of 90 min each of psychoeducation lessons based on the self-management approach, conducted by case managers or psychologists. *** Concerning the patient's living will, power of attorney, applications for ...]

Patients and caregivers will be provided with patient letters in lay language about their clinical and treatment status and the care plan. Patient letters will be created automatically based on data recorded in the EHR using preset text modules tested for patient-orientated comprehensibility.

Structured professional education curriculum

Both community-based neurologists and GPs are an integral part of PANOS. To enable participating physicians to perform their tasks in patient selection and in ongoing care, a structured education curriculum for all professional care providers will be established. The physician education curriculum is to focus both on the timely recognition of patients in the transition phase and in need of specialized care, and on the knowledge and skills needed to become a productive, active long-term collaborator. Given the special importance of GPs in care provision to PD patients in Eastern Saxony, an education module is planned to specifically target the needs of GPs ("PD for GPs").
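As a minimal sketch of the automated patient-letter generation mentioned above (lay-language letters assembled from preset text modules and EHR data), the snippet below fills a hypothetical template. The module texts, field names and wording are invented for illustration and are not the modules used in PANOS.

```python
# Sketch: assembling a lay-language patient letter from preset text modules and
# structured record data. All module texts and field names are illustrative only.

TEXT_MODULES = {
    "greeting": "Dear {name},",
    "visit_summary": "At your visit on {visit_date}, we reviewed your current condition.",
    "care_plan": "Over the next 12 months, {care_share} of your scheduled consultations "
                 "will take place with your community-based physician.",
    "closing": "Your personal case manager, {case_manager}, remains your first contact.",
}

record = {  # illustrative data, stand-in for values read from the EHR
    "name": "Ms. Example",
    "visit_date": "15 September 2020",
    "care_share": "most",
    "case_manager": "N.N.",
}

letter = "\n".join(TEXT_MODULES[key].format(**record)
                   for key in ("greeting", "visit_summary", "care_plan", "closing"))
print(letter)
```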
Assessment of PANOS according to the Developmental Model for Integrated Care (DMIC)

The DMIC is based on an expert-based Delphi consensus on 89 components relevant to the practical implementation of ICCs [1]. These components are grouped into nine clusters and located on a cluster map with four axes (organization of care, quality care, results and effective collaboration) (Figure 7A). The model also considers four developmental stages, ranging from phase 1, the initiative and design phase, to phase 4, the consolidation and transformation phase. It was validated with 84 different ICCs for a variety of diseases and settings, and it has already been applied in transcultural contexts (Netherlands, Canada). Taken together, the DMIC allows a structured assessment of ICCs regarding their representation of the multidimensional clusters and their developmental stage. Six key representatives from project partners actively involved in the conceptualization of the care pathway independently assessed PANOS according to the DMIC theoretical framework. The assessments showed that PANOS is still at an early maturation state, with only low percentages of elements being fully implemented (Figure 7B, rated as "present", shown in red). On average, the respondents evaluated the network with the highest scores of present elements on the clusters "Interprofessional teamwork" (67%), "Quality Care" (40%) and "Delivery System" (39%). The scores on the clusters "Patient-centeredness", "Performance Management" and "Result-focused learning" are all 0%. However, if all "planned" and "present" activities are considered (Figure 7B, rated as "planned or present", shown in blue), the PANOS concept represents the nine clusters in a balanced manner and to a high degree, ranging from 63% (performance management) to 100% (roles and tasks, interprofessional teamwork and quality care). Ratings of the six representatives were more homogeneous for planned elements than for the elements already present (not shown), indicating a stronger consensus regarding what is planned than about what has already been achieved. The conceptualization and implementation strategy of PANOS appears to be in compliance with the four sequential maturation stages of the DMIC. In total, 50% of the elements of phase 1 (initiative and design phase) have already been implemented, followed by 10% of phase 2 (experimental and execution phase) and 0% for the more mature phases 3 and 4 (Figure 7C). Up to 90% of the elements in phases 1 and 2 are already planned, followed by 60% and 40% for elements in phases 3 and 4, respectively. Thus, both the current maturation state ("present" elements) and the body of elements foreseen but yet to be executed ("planned" elements) appear to be well aligned with the four hierarchical developmental stages of the DMIC.
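The cluster and phase percentages reported above are averages of element-level ratings ("present", "planned", "absent") from six raters. The following is a minimal sketch of such an aggregation; the cluster names follow the DMIC, but the ratings are invented placeholders rather than the actual assessment data.

```python
# Sketch: aggregating DMIC-style element ratings into per-cluster percentages of
# "present" and "planned or present" elements, averaged over raters.
# The ratings below are illustrative placeholders, not the PANOS assessment data.
from collections import defaultdict

# rater -> list of (cluster, rating); rating in {"present", "planned", "absent"}
ratings = {
    "rater1": [("Interprofessional teamwork", "present"), ("Patient-centeredness", "planned")],
    "rater2": [("Interprofessional teamwork", "planned"), ("Patient-centeredness", "absent")],
}

present = defaultdict(list)
planned_or_present = defaultdict(list)
for rater_items in ratings.values():
    counts = defaultdict(lambda: {"present": 0, "planned": 0, "absent": 0})
    for cluster, rating in rater_items:
        counts[cluster][rating] += 1
    for cluster, c in counts.items():
        total = sum(c.values())
        present[cluster].append(c["present"] / total)
        planned_or_present[cluster].append((c["present"] + c["planned"]) / total)

for cluster in present:
    avg_present = 100 * sum(present[cluster]) / len(present[cluster])
    avg_planned = 100 * sum(planned_or_present[cluster]) / len(planned_or_present[cluster])
    print(f"{cluster}: {avg_present:.0f}% present, {avg_planned:.0f}% planned or present")
```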
Results of the Structured Consensus Process

All critical core elements and principles of the PANOS care pathway could be consented in a structured consensus process in compliance with the guidelines of the Association of the Scientific Medical Societies in Germany (AWMF) [14]. Namely, the registration process and inclusion criteria, the concept of (repetitive) patient self-monitoring as an integral part, the triage concept and the concept of an individualized ongoing care plan with a variable distribution between community-based care and specialist care clearly met the consensus criterion of >75% agreement. Non-consented aspects, such as certain responsibilities for information collection in some relevant health-related domains, are not crucial to the core concept and can be addressed during the future in-depth planning process. The consensus process thus confirmed high levels of support for the core principles and provided valuable information for future planning and decisions during the implementation process.

PANOS in Comparison to Other PD-Specific Integrated Care Concepts

There is a growing number of ICCs for PD patients worldwide and in Germany [5]. This is prompted by the high relevance attributed to ICCs in assuring adequate healthcare delivery for vulnerable patient populations [1] and by the urgency to find adequate solutions meeting the complex therapeutic needs of a rising number of PD patients [4,27]. However, there is no standardized agreement on the most relevant elements of an ICC for PD patients or on a sequential implementation strategy. A recent expert-based consensus summarizes 30 recommendations on crucial elements for PD ICCs, but this still leaves it open to regional initiatives to prioritize the relevant elements [16]. In addition, all ICCs have to consider the regional context and region-specific needs and expectations of both patients and implementation partners [15,28]. Despite universal recommendations for ICC implementation, there will always be the challenge of interpreting their relevance for the specific regional context. Existing PD networks differ both in their maturity and in their implementation strategy. ParkinsonNet in the Netherlands played a pioneering role in implementing a community-based multidisciplinary network in 2004 [29]. The network focuses on the empowerment of patients and healthcare professionals through education, training procedures and evidence-based practice guidelines for almost all care disciplines. It includes a number of IT solutions to facilitate coordination among healthcare professionals and patients. There is evidence for the effectiveness and cost-effectiveness of some areas of activity, and for strengthened multidisciplinary collaboration [30,31]. Probably the most important advantage is the generalizability of the approach, which has now reached nationwide coverage in the Netherlands with 70 regional networks [32]. In the last couple of years, several PD networks have been established in Germany. In 2018, Parkinsonnetzwerk Münsterland+ (PNM+) was established in Münsterland, a rural area in the north-west of Germany, based on a regionally modified concept similar to ParkinsonNet. The network involves inpatient-care PD specialists, community-based physicians and non-medical healthcare providers (e.g., different therapists) and focuses on collaborative network activities, as well as on the provision of comprehensive and easy-to-apply care standards for all providers. In Cologne, an ICC has been implemented based on regular PD specialist and PD nurse consultations in community-based neurologist practices. The network also offers patient education and video therapy.
Another example is the Parkinson Netzwerk Allianz Marburg (PANAMA) in the mid-west of Germany, which encompasses an array of initiatives ranging from professional and patient-orientated education, specialized consultations in regional partner hospitals and community-based practices, and a modular intervention to strengthen patients' emotional awareness, to scientific projects on telehealth innovations. In spite of similarities in some components, such as communicative network-promoting measures or professional and patient education, PANOS differs substantially. The most prominent difference is its focus on structured, standardized care provision on the basis of an intersectoral care pathway and its inclusion of three hospital-affiliated outpatient specialist centers. This requires substantially more technical and personnel infrastructure than the less formal approaches of the other above-mentioned ICCs.

PANOS Evaluated by the DMIC and Comparison to Patient- and Expert-Based Recommendations

Key members from the six different partners involved in the design, implementation and evaluation stages assessed the overall concept of PANOS (planned and present elements) to be a balanced concept according to the DMIC, incorporating components of all four main axes of the model. However, only a few elements have been implemented so far (present elements). This implies that an array of design and implementation processes has to be carried out simultaneously, which increases the implementation complexity. Even if PANOS currently has a low maturation state (50% of phase 1 elements achieved, and 10% of phase 2), its sequential implementation concept is well aligned with the developmental stages, with most of the elements planned in phases 1 and 2 (90% each), 60% in phase 3 and 40% in phase 4. The DMIC provides practically applicable information for the future implementation process: it shows what level of planning still needs to be done for the future phases 3 and 4, and it indicates which clusters need attention from the project team; for example, it could be worthwhile to put a bigger emphasis on clusters where there is little consensus on the implementation status already achieved. From a patient's perspective, PANOS strives to address core requirements: according to a Dutch study carried out in compliance with the voice-of-customer (VoC) approach [17], the most important patient requirements are (1) a desire for self-management, (2) better collaboration between healthcare providers, (3) more time for discussing the future, (4) one healthcare provider who can act as a personal case manager, (5) more knowledge of the disease, (6) more support from my pharmacist, (7) an increased focus on the needs of my spouse, (8) more contact with other patients, (9) more provision of information and (10) less fragmentation of healthcare. PANOS addresses aspects 1, 3, 5, 8 and 9 by incorporating a structured patient education program according to the self-management concept, aspects 2 and 10 by the establishment of a structured care pathway, and aspect 4 by the provision of a personal case manager.
In the light of a recent expert-based consensus on the recommended components of PD ICCs, PANOS considers several of the 30 recommended components [16]: follow-up consultations should be scheduled according to individual patients' needs, a first point of contact should be provided, efficient interprofessional communication should be facilitated, support for self-management should be provided, and there should be a central care coordinator. In addition, PANOS also tries to contribute to digital innovation by developing a disease-specific EHR for intersectoral care. Given the consistency of the PANOS concept with patients' needs, with expert consensus-based recommendations and with the clusters of the DMIC, it could be postulated that core elements of the concept could also be valuable in other regional contexts with healthcare challenges similar to those described for Eastern Saxony. However, careful adaptation to the specific implementation context will always be of outstanding importance, no matter which existing ICC might be considered as a starting point.

Implementation Risks and Perspectives

The need to alter existing healthcare structures in order to deliver care through an alternative (integrated) care concept implies significant risks to a successful implementation. A clear risk to the PANOS concept is that the implementation of a structured care pathway, albeit met with a high level of interprofessional acceptance, requires substantial supportive infrastructure, above all a custom-made EHR. This is accompanied by the requirement to simultaneously organize workflows and to define roles and tasks under appropriate consideration of the working reality of the healthcare providers needed as contributors. Thus, compared to other ICCs, the PANOS concept entails a high level of implementation complexity. This is also reflected by the DMIC model, which indicated that a large number of relevant components are being worked on, all at the same developmental stage. Because of this, measures have been undertaken to limit complexity; e.g., in the first implementation phase, PANOS abstains from the structured integration of all relevant healthcare providers (e.g., physiotherapists), and the repetitive patient self-monitoring is carried out in a conventional paper-and-pencil format rather than digitally. The structured consensus process represents an effort to limit risks by establishing a collaborative intersectoral working environment and by adequately assessing the needs of all potential contributors. Because of the comparatively high implementation complexity, an ongoing formal evaluation and measures for iterative procedural and technical adaptation will be important to assure an improved fit of both the PANOS concept and its technical basis to the actual needs and expectations. Once the care pathway, with specialists, community-based physicians and case managers as the stakeholders of the first phase, has been successfully implemented, an important perspective will be to integrate other relevant healthcare providers, such as physiotherapists or occupational therapists. The integration of repetitive patient self-monitoring with an associated medical data management strategy and the establishment of a disease-specific EHR represent an ideal basis for the implementation of digital patient monitoring and patient-physician interaction strategies.
However, even if the integration of sensor-based monitoring and app-based interactions is already envisaged, this can only be successfully realized if the core concept as described is functional. Another important challenge will be to assure sustainable financing beyond the current project phase. This will depend both on demonstrating medical effectiveness in the accompanying summative evaluation (primary endpoint: health-related QoL) and on an analysis of the economic impact of the concept. Studies on the economic effects of ParkinsonNet in the Netherlands illustrate that ICCs for PD patients can be cost-efficient [30,32,33]. The cost efficiency will depend on changes in healthcare utilization behavior (e.g., a lower number of unplanned, unstructured emergency admissions), on the reduction in disease-related complications (e.g., falls and fractures), and on the magnitude of the extra costs associated with the PANOS concept. However, considering the substantial differences between PANOS and the other ICCs described above, cost efficiency cannot be extrapolated and has to be demonstrated for this specific concept.

Limitations of the Structured Consensus Process

Even though the participants of the workshops came from the various targeted professions and/or institutions relevant to the implementation of PANOS, they cannot be considered representative of the full spectrum of healthcare providers, especially community-based physicians. The actual numbers of participants were well suited to the chosen concept of a structured consensus process and were within the recommended limits. However, up to 1,300 community-based physicians would have been eligible for workshop participation, yet the highest number of participants from this group was only 20. Thus, a likely (and in our eyes unavoidable) recruitment bias in favor of those open to healthcare innovations has to be accounted for when interpreting the results of the consensus process. The PANOS concept should not be expected to meet the same high level of acceptance in the real-world implementation scenario now to follow. It will be an important challenge for the project team to account for this adequately, despite the encouragingly high support expressed by the contributors to the structured consensus.

Supplementary Materials: The following are available online at http://www.mdpi.com/2077-0383/9/9/2906/s1, Table S1: Questions and items of the online surveys.

Funding: This work was supported by the Federal German Government pursuant to a resolution of the German Bundestag and is co-financed with tax funds on the basis of the budget enacted by the Saxon State Parliament (grant number: 100386587).
Nโ€glycan signatures identified in tumor interstitial fluid and serum of breast cancer patients: association with tumor biology and clinical outcome Particular Nโ€glycan structures are known to be associated with breast malignancies by coordinating various regulatory events within the tumor and corresponding microenvironment, thus implying that Nโ€glycan patterns may be used for cancer stratification and as predictive or prognostic biomarkers. However, the association between Nโ€glycans secreted by breast tumor and corresponding clinical relevance remain to be elucidated. We profiled Nโ€glycans by HILIC UPLC across a discovery dataset composed of tumor interstitial fluids (TIF, n = 85), paired normal interstitial fluids (NIF, n = 54) and serum samples (n = 28) followed by independent evaluation, with the ultimate goal of identifying tumorโ€related Nโ€glycan patterns in blood of patients with breast cancer. The segregation of Nโ€linked oligosaccharides revealed 33 compositions, which exhibited differential abundances between TIF and NIF. TIFs were depleted of bisecting Nโ€glycans, which are known to play essential roles in tumor suppression. An increased level of simple high mannose Nโ€glycans in TIF strongly correlated with the presence of tumor infiltrating lymphocytes within tumor. At the same time, a low level of highly complex Nโ€glycans in TIF inversely correlated with the presence of infiltrating lymphocytes within tumor. Survival analysis showed that patients exhibiting increased TIF abundance of GP24 had better outcomes, whereas low levels of GP10, GP23, GP38, and coreF were associated with poor prognosis. Levels of GP1, GP8, GP9, GP14, GP23, GP28, GP37, GP38, and coreF were significantly correlated between TIF and paired serum samples. Crossโ€validation analysis using an independent serum dataset supported the observed correlation between TIF and serum, for five of nine Nโ€glycan groups: GP8, GP9, GP14, GP23, and coreF. Collectively, our results imply that profiling of Nโ€glycans from proximal breast tumor fluids is a promising strategy for determining tumorโ€derived glycoโ€signature(s) in the blood. Nโ€glycans structures validated in our study may serve as novel biomarkers to improve the diagnostic and prognostic stratification of patients with breast cancer. Particular N-glycan structures are known to be associated with breast malignancies by coordinating various regulatory events within the tumor and corresponding microenvironment, thus implying that N-glycan patterns may be used for cancer stratification and as predictive or prognostic biomarkers. However, the association between N-glycans secreted by breast tumor and corresponding clinical relevance remain to be elucidated. We profiled N-glycans by HILIC UPLC across a discovery dataset composed of tumor interstitial fluids (TIF, n = 85), paired normal interstitial fluids (NIF, n = 54) and serum samples (n = 28) followed by independent evaluation, with the ultimate goal of identifying tumor-related N-glycan patterns in blood of patients with breast cancer. The segregation of N-linked oligosaccharides revealed 33 compositions, which exhibited differential abundances between TIF and NIF. TIFs were depleted of bisecting N-glycans, which are known to play essential roles in tumor suppression. An increased level of simple high mannose N-glycans in TIF strongly correlated with the presence of tumor infiltrating lymphocytes within tumor. 
At the same time, a low level of highly complex N-glycans in TIF inversely correlated with the presence of infiltrating lymphocytes within the tumor. Survival analysis showed that patients exhibiting an increased TIF abundance of GP24 had better outcomes, whereas low levels of GP10, GP23, GP38 and coreF were associated with poor prognosis. Levels of GP1, GP8, GP9, GP14, GP23, GP28, GP37, GP38 and coreF were significantly correlated between TIF and paired serum samples. Cross-validation analysis using an independent serum dataset supported the observed correlation between TIF and serum for five of the nine N-glycan groups: GP8, GP9, GP14, GP23 and coreF. Collectively, our results imply that profiling of N-glycans from proximal breast tumor fluids is a promising strategy for determining tumor-derived glyco-signature(s) in the blood. The N-glycan structures validated in our study may serve as novel biomarkers to improve the diagnostic and prognostic stratification of patients with breast cancer.

Introduction

Breast cancer (BC) is the most common cancer worldwide among women, with more than 1,300,000 new cases diagnosed every year. BC is the leading cause of cancer-related deaths among women (Torre et al., 2016). Numerous studies have established that the stepwise accumulation of multiple genetic and epigenetic alterations in epithelial cancer cells (Cancer Genome Atlas, 2012), as well as changes in stromal composition, drive and direct the progression of breast cancer (Beck et al., 2011). These studies highlight the heterogeneity and complexity of breast malignancies and point to a major challenge in the development of targeted therapeutics. A growing body of evidence points to a crucial role of the multidirectional network communications between malignant epithelial cells and the tumor microenvironment in tumor evolution and progression. Multidirectional signaling events within the tumor stroma are implemented through the tumor interstitial fluid (TIF), which forms at the interface between circulating bodily fluids (lymph and blood) and intracellular fluids. TIF facilitates the exchange of ions, proteins, cytokines and miRNA within the interstitial space (Espinoza et al., 2016; Gromov et al., 2013; Papaleo et al., 2017). Various biomolecules are released by tumor and stromal cells into the interstitium (Horimoto et al., 2012; Zhang et al., 2017) and subsequently drain through the lymphatic system into the bloodstream, where they can be detected and quantified (Surinova et al., 2011). Given the high concentration of potential cancer-specific biomolecules within the local tumor milieu (Ahn and Simpson, 2007), interstitial fluid is considered to be a valuable resource for BC biomarker discovery (Wagner and Wiig, 2015). Glycosylation is a template-free enzymatic process that produces glycosidic linkages of monosaccharides to macromolecules such as carbohydrates, lipids and proteins through the sequential attachment of glycan moieties in a function-specific context. This posttranslational modification is a well-known hallmark of cancer (Pinho and Reis, 2015) and is implicated in almost all molecular and metabolic events in normal and malignant cells. These events include protein folding and stability, cell-cell interaction, angiogenesis, immune modulation, cell signaling, and gene expression (Moremen et al., 2012). Two major types of glycosylation (N-linked and O-linked) coexist in mammalian cells and often occur simultaneously on the same target macromolecules (Pinho and Reis, 2015).
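As a minimal sketch of the kind of TIF-serum correlation analysis summarized in the abstract (per-glycan-group correlation between paired TIF and serum measurements), the snippet below uses a Spearman correlation from SciPy. The column names and the toy data frame are placeholders, not the study data, and the actual statistical pipeline used in the paper may differ.

```python
# Sketch: Spearman correlation between paired TIF and serum N-glycan levels,
# computed per glycan group. Data and column names are illustrative placeholders.
import pandas as pd
from scipy.stats import spearmanr

# toy paired measurements: one row per patient, one pair of columns per glycan group
df = pd.DataFrame({
    "GP8_TIF":   [1.2, 0.8, 1.5, 1.1, 0.9],
    "GP8_serum": [1.0, 0.7, 1.4, 1.2, 0.8],
    "GP9_TIF":   [2.1, 2.5, 1.9, 2.2, 2.4],
    "GP9_serum": [0.9, 1.1, 1.3, 0.8, 1.0],
})

for group in ("GP8", "GP9"):
    rho, pval = spearmanr(df[f"{group}_TIF"], df[f"{group}_serum"])
    print(f"{group}: Spearman rho = {rho:.2f}, p = {pval:.3f}")
```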
The involvement of N-glycosylation in the development and progression of BC has been documented by in vitro and in vivo studies (Julien et al., 2006). N-glycan branching, particularly the increased expression of complex β-1,6-branched N-linked glycans, is often associated with more aggressive tumor behavior, such as enhanced migration, invasion and metastatic potential (Contessa et al., 2008). In contrast, the expression of bisecting glycans strengthens cell adhesion and is associated with cancer suppression (Taniguchi and Kizuka, 2015). Several N-glycan patterns with altered circulating glycan structures, originating either from a primary tumor or from other organs in response to a neoplastic process, have recently been described (Kyselova et al., 2008). Levels of biantennary N-glycan chains as well as of α-2,3-linked sialic acid-modified N-glycans are often decreased in sera of patients with BC compared to healthy controls. The same tendency is observed in the sera of lung cancer patients, and it has been suggested that an aberrant N-glycan signature based on serum glycan analysis could be used to distinguish cancer types (Lan et al., 2016). However, no robust blood glycan markers for BC have been identified to date, mainly because of the high degree of complexity and the dynamic range of biomolecules (>10 orders of magnitude) circulating in the bloodstream. Furthermore, similar types of molecules are externalized from other body tissues and organs under physiological conditions. To identify tumor-derived N-glycan patterns, we investigated the secreted glycome by profiling N-glycans released from matched tumors (TIF), normal mammary tissues (NIF) and serum samples using hydrophilic interaction liquid chromatography (HILIC) ultra-performance liquid chromatography (UPLC) (Saldova et al., 2014). The aims of our study were (a) to compare N-glycans secreted directly from tumor and stromal cells and to correlate the N-glycan profiles and the corresponding abundances in paired TIF and serum samples; (b) to explore whether the appearance of particular glycoforms in TIF is correlated with the presence of tumor infiltrating lymphocytes (TILs) in the corresponding tumors; (c) to examine a potential association between N-glycan levels and clinical outcome; and (d) to evaluate our data and the results of the analysis using an independent cohort of normal, benign and BC blood samples.

Materials and methods

Samples were collected as described previously (Gromov et al., 2014) at Copenhagen University Hospital. The criteria for high-risk cancers, applied by the DBCG, are age below 35 years, and/or a tumor diameter of more than 20 mm, and/or histological malignancy grade 2 or 3, and/or negative estrogen (ER) and progesterone (PgR) receptor status, and/or a positive axillary status. Mastectomy enables the pathologist to dissect a tissue sample from a nonmalignant area located relatively distant from the tumor, that is, at least 5 cm away. We used this criterion for the dissection of normal breast lesions to avoid any impact of field cancerization, which has been observed in histologically normal breast biopsies located 1 cm from the tumor margins, but not in lesions resected 5 cm from the tumor or obtained from reduction mammoplasty (Heaphy et al., 2006; Trujillo et al., 2011). All normal tissue specimens dissected from the breast after mastectomy were morphologically and histologically evaluated (Russo and Russo, 2014) to ensure normal epithelial acini and duct structures.
Materials and methods All the patients presented a unifocal tumor, and none of the patients had a history of breast surgery or had received preoperative treatment (naive samples). Patients were followed after surgery, and cancer-specific survival was measured from the date of primary surgery until the date of death from BC. Death records were complete up to October 8, 2014, which served as the censoring date. Registered clinicopathological data for the patients were available from the Department of Pathology, Rigshospitalet, Copenhagen University Hospital, Denmark. This study was conducted in compliance with the Helsinki II Declaration; written informed consent was obtained from all participants, and the study was approved by the Copenhagen and Frederiksberg regional division of the Danish National Committee on Biomedical Research Ethics (KF 01-069/03). At the time of collection, each tissue specimen was divided into two pieces. One piece was stored at −80°C and was subsequently prepared as a formalin-fixed paraffin-embedded (FFPE) sample that was sectioned, mounted on glass slides, and stained for histological characterization, tumor subtyping, TIL scoring, and immunohistochemistry (IHC) analysis. The second biopsy piece was placed in PBS at 4°C within 30-45 min of surgical excision and then was subjected to interstitial fluid recovery (see below). Matched sera were obtained from women enrolled in the Danish Center for Translational Breast Cancer Research program who underwent surgery between 2001 and 2006. Blood samples were collected preoperatively following a standardized protocol (Wurtz et al., 2008). Briefly, serum was collected in serum-separating tubes and was left on the bench for 30 min before centrifuging for 10 min at 2000 g. The separate serum cohort, Mammographic Density and Genetics (MDG), consisted of serum N-glycan profiles from 107 BC patients and 62 healthy women (Saldova et al., 2014) and was used to validate the results of the TIF analysis. 2.2. Immunohistochemistry of tissue biopsies: histological assessment and tumor subtyping Immunohistochemistry analysis was performed as described elsewhere (Celis et al., 2004). First, small FFPE blocks were prepared from two to three different parts of the tissue piece, and the sections were stained with a CK19 (KRT19) antibody. Tissue morphology, tumor cell content, and visual assessment of tumor stroma percentages were evaluated as previously described (Espinoza et al., 2016). All slides were blindly reviewed by two independent investigators (IIG, PSG). Subtype scoring of the tumor tissues as luminal A (LumA), luminal B (LumB), luminal B HER2-enriched (LumB HER2-enriched), HER2, and triple negative breast cancer (TNBC) was performed based on the estrogen receptor (ER), progesterone receptor (PgR), epidermal growth factor receptor-2 (HER2), and Ki67 status determined for each tissue sample, mainly in accordance with the St. Gallen International Breast Cancer Guidelines (Esposito et al., 2015). The criteria used for each subtype classification are summarized in Table S1. The monoclonal mouse antibody raised against CK19 (clone 4E8) was obtained from ThermoFisher Scientific. The monoclonal mouse antibody raised against Ki67 (clone MIB-1) was purchased from DAKO. The monoclonal antibody raised against ER (clone 1D5) was obtained from DAKO. The monoclonal antibody raised against a synthetic peptide directed toward the N-terminal end of PgR was purchased from DAKO. The polyclonal rabbit antibody raised against HER2 (HercepTest) was obtained from DAKO.
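To make the marker-based subtyping step concrete, the sketch below assigns a surrogate subtype label from ER, PgR, HER2, and Ki67 status in R. The function name, its arguments, and the Ki67 dichotomization are illustrative assumptions only; the exact criteria used in this study are those summarized in Table S1.

```r
# Illustrative surrogate subtype assignment from IHC marker status.
# Hypothetical argument names; thresholds are assumptions, not the study's Table S1 criteria.
assign_subtype <- function(er, pgr, her2, ki67_high) {
  if (er || pgr) {                 # hormone receptor positive -> luminal group
    if (her2) {
      "LumB HER2-enriched"
    } else if (ki67_high) {
      "LumB"
    } else {
      "LumA"
    }
  } else if (her2) {
    "HER2"
  } else {
    "TNBC"
  }
}

# Example: ER+, PgR+, HER2-, low Ki67 -> luminal A
assign_subtype(er = TRUE, pgr = TRUE, her2 = FALSE, ki67_high = FALSE)
```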
For all staining, positive control slides were included in parallel in accordance with the manufacturer's instructions. For the negative controls, the slides were incubated with PBS instead of primary antibodies. All information about the patients and samples analyzed in the study is presented in Table S2. Estimation of tumor infiltrating lymphocytes and their subpopulations Immunohistochemistry analyses were performed to examine the most prominent components of the immune microenvironment in the corresponding tumor biopsies used for interstitial fluid recovery and molecular characterization. Scoring of total leukocytes, T lymphocytes, T helper lymphocytes, cytotoxic T lymphocytes, and macrophages was determined based on staining performed with antibodies raised against CD45, CD3, CD4, CD8, and CD68, respectively. The monoclonal antibodies raised against CD45 (clone 2B11 + PD7/26), CD4 (clone IS 649), CD8 (clone 144B), and CD68 (clone PG-M1) were purchased from DAKO. The polyclonal antibody raised against a synthetic peptide from the intracellular part of the ε-chain of human CD3 was obtained from DAKO. The proportion of TILs in tissue sections was evaluated in accordance with the recommendations of the International TILs Working Group 2014 (Salgado et al., 2015). An assessment of overall inflammatory reactions and the number of lymphoid cells present within biopsies was determined by hematoxylin and eosin (HE) staining as described elsewhere (Denkert et al., 2010): 1+ (<10%), 2+ (10-50%), 3+ (>50%). These scores were independently and blindly assigned by two investigators (IIG and PSG). The macrophage marker CD68 was also evaluated using the same criteria. For each immune cell population that was analyzed, the expression results were dichotomized as low (<10%) and high (>10%). Table S2 (columns T-X) contains detailed information regarding stratification of the samples based on TIL presence. Interstitial fluid recovery Tumor interstitial fluids and NIF samples were extracted from fresh breast tumor and normal tissue specimens, as previously described (Celis et al., 2004). Briefly, 0.1-0.3 g of clean tissue was cut into small pieces (~1 mm³ each), washed twice in cold PBS to remove blood and cell debris, and then incubated in PBS for 1 h at 37°C in a humidified CO2 incubator. The samples were then centrifuged at 200 g and 4000 g for 2 min and 20 min, respectively, at 4°C. The supernatants were carefully aspirated, and the total protein concentration of each sample was determined with the Bradford assay (Bradford, 1976). Sample processing for UPLC About 50-100 µL of TIF or NIF, depending on the original protein concentration, was lyophilized and resuspended in 10 µL of distilled water. N-glycans were released using an updated version of the high-throughput automated method described by Stockmann and coauthors, using a liquid-handling robot. The samples were denatured with dithiothreitol and alkylated with iodoacetamide, and N-glycans were released from the protein backbone enzymatically with PNGase F (Prozyme Glyco N-Glycanase, code GKE-5006D, 10 µL per well, 0.5 mU in 1 M ammonium bicarbonate, pH 8.0). The released glycans were captured on solid supports; excess reagents, salts, and other impurities were removed by vacuum or centrifuge filtration; and the glycans were then released and labeled with the fluorophore 2-aminobenzamide (2-AB).
Next, glycans were purified in a 96-well chemically inert filter plate (Millipore Solvinert, hydrophobic polytetrafluoroethylene membrane, 0.45 µm pore size) using HyperSep Diol SPE cartridges (Thermo Scientific, Waltham, MA, USA), with each well containing all glycans released from an individual sample. The samples were then lyophilized and dissolved in 10 µL of an acetonitrile-water mixture (70:30). UPLC analysis Purified N-glycans were automatically injected into the UPLC system in a mixture of 70% acetonitrile in water (see above). For UPLC analysis, a 2.1 × 150 mm HILIC column (Waters, Milford, MA, USA) was coupled with an Acquity UPLC system (Waters) equipped with a Waters temperature control module and a Waters Acquity fluorescence detector. The column temperature was set to 40°C, and two buffer solutions, A (50 mM formic acid adjusted to pH 4.4 with ammonia solution) and B (pure acetonitrile), were used to run the following 30 min linear gradient at a flow rate of 0.56 mL/min: 30-47% of buffer A for 23 min, followed by 47-70% of A, and finally reverting back to 30% of A to complete the run. The elution of N-glycans was monitored by fluorescence detection at 420 nm with excitation at 330 nm. The system was calibrated using an external standard of hydrolyzed and 2-AB-labeled glucose oligomers to create a dextran ladder, as described previously. The use of an external standard enabled reproducible relative quantitation of glycans between runs. A GU (glucose unit) value was assigned to every peak in the chromatogram based on this standard, and the abundance of each peak (the collection of glycans eluting at the same GU) was expressed as a proportion of the entire glycome, taken as 100% of the fluorescence intensity. A total of 165 N-glycans assigned to 46 glycan peaks (GP1 to GP46) were detected in tissue interstitial fluids. Each glycan peak (GP) contains several predominant structures. The total composition of all structures and the predominant glycan features are summarized in Table S3. Feature analysis N-glycan peaks (GPs) were pooled based on similar structural or compositional features of the peak glycan members. Features relating to a peak were determined based on the major glycan members of that peak, as described in Saldova et al. (2014). Curation of the dataset for analysis The analysis of glycan abundances was performed using two datasets in parallel: (d1) all available samples, corresponding to 85 TIF samples and 54 NIF samples, and (d2) paired tumor and normal samples, comprising 54 individual TIF-NIF pairs (108 samples). The peaks of the glycan UPLC output represent the relative area of each glycan peak in the spectrum. The glycan abundances were log2-transformed to reduce the impact of outliers and to deal with the skewness of the glycan distribution. The log2 transformation resulted in the majority of glycan abundances approaching a normal distribution. After log2 transformation, the data were corrected for batch effects using the ComBat function of the sva R package (Johnson et al., 2007). ComBat batch-corrected data were used for plotting purposes only. All the initial data, scripts for analyses, and outputs are released as free materials at https://github.com/ELELAB/N-glycan-TIF so that our findings can be reproduced.
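A minimal sketch of the transformation and batch-correction step described above is shown below, assuming a matrix of glycan relative areas with glycan peaks in rows and samples in columns; the object names and the synthetic values are hypothetical stand-ins for the real data.

```r
# Log2-transform glycan relative areas and correct for batch effects with ComBat (sva package);
# the corrected values are used for plotting only, as in the text.
library(sva)

set.seed(1)
# Synthetic stand-in: 63 glycan measurements x 108 samples, with a two-level batch label.
glycan_raw <- matrix(rlnorm(63 * 108), nrow = 63, ncol = 108)
batch <- rep(c("run1", "run2"), length.out = 108)

glycan_log2  <- log2(glycan_raw)                           # reduce skewness and outlier impact
glycan_combat <- ComBat(dat = glycan_log2, batch = batch)  # batch-corrected matrix (plotting only)
```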
Multidimensional scaling Classical multidimensional scaling (R version 3.3.1) was used to reduce the number of dimensions within the data. Specifically, 108 samples (54 TIF-NIF pairs), each with 63 measurements of glycan/glycan-feature abundances, were reduced to two dimensions (M1 and M2). Multidimensional scaling (MDS) was performed with the function cmdscale(), using Euclidean distance as the distance metric. The plotting was done with the R package ggplot2 2.2.1. Differential abundance analysis The differential abundance analysis (DAA) was performed using the statistical software LIMMA (Linear Models for Microarray Data) implemented in R (Ritchie et al., 2015). LIMMA has few underlying statistical assumptions and is known to be powerful for small sample sizes as a result of shrinkage of feature-specific variances (Ritchie et al., 2015). Although LIMMA was originally developed for the analysis of microarray data, a number of studies have shown the versatility of this software for the analysis of other omics data (Castello et al., 2012). For the analysis of paired data (d2), information on patient ID was incorporated into the design matrix to account for patient-specific effects. For the analysis with unpaired data (d1), information on batch was added to the model. We carried out DAA between NIF samples and TIF samples using a corrected P-value (FDR: false discovery rate) of 0.05 as the cutoff for significance. To determine whether any clinical variables could be related to the abundance of specific N-glycan groups or N-glycan features, DAA was performed for tumor grade (Gr), receptor status (HER2, ER, PgR), and tumor infiltrating lymphocyte status (TILs, CD3, CD4, CD8, CD45, CD68) in the sample (see Table S2). Hierarchical clustering analysis Hierarchical clustering was performed to visually inspect the results of the DAA. Agglomerative hierarchical clustering was implemented in R with hclust() (R-stats), using Ward's method (ward.D2), the statistical premise of which is to minimize the total within-cluster variance (Murtagh and Legendre, 2014). Correlation analysis - glycan abundances in TIF and serum The log2-transformed abundances of TIF N-glycan structures were correlated with the corresponding serum profiles of the 28 available serum samples. Classic Pearson's product-moment correlation was performed in R. The significance of the correlation scores was tested, and the obtained P-values were corrected using FDR. Correlations with an FDR < 0.05 were considered significant and kept for further analysis. Survival analysis Survival analysis was performed using a Cox proportional hazards model. Dataset d1 (i.e., all available TIF, n = 85) was used for the analysis. Survival was modeled using one N-glycan feature at a time, that is, not accounting for potential inter-glycan abundance effects. Clinical parameters were tested for confounding effects on N-glycan levels and/or clinical outcome. As expected, age at diagnosis was found to have a significant effect on overall survival. Of the remaining parameters, TIL status (CD4+ and CD45+) was found to be a confounder. Before regression analysis, the covariates were tested for violation of the proportional hazards assumption. Also, the log-linearity of continuous variables (N-glycans and age) was evaluated. In the final models, age was modeled with splines (df = 2).
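As a rough illustration of one such age- and TIL-adjusted model for a single glycan peak, a minimal R sketch is shown below; the column names and synthetic values are hypothetical, and the actual confounder sets tested are described next.

```r
library(survival)
library(splines)

set.seed(1)
# Synthetic stand-in data (hypothetical column names, illustrative values only).
clin <- data.frame(
  time_years     = rexp(85, rate = 0.1),
  death          = rbinom(85, 1, 0.4),
  age_at_surgery = round(rnorm(85, mean = 66, sd = 10)),
  gp_level       = rnorm(85),          # log2 abundance of one glycan peak
  cd4_high       = rbinom(85, 1, 0.5)  # dichotomized TIL status
)

# Age modeled with a natural spline (df = 2), one glycan peak at a time.
fit <- coxph(Surv(time_years, death) ~ ns(age_at_surgery, df = 2) + gp_level + cd4_high,
             data = clin)
summary(fit)   # hazard ratio and 95% CI for the glycan term
cox.zph(fit)   # test of the proportional hazards assumption
```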
Four confounder models were tested, accounting for age at surgery and/or tumor infiltrating lymphocyte status (total TILs, CD4+, and CD45+); in these models, GPX denotes a glycan peak. In addition to the Cox regression model with overall patient survival as the outcome (censored = 0 and event = 1), survival analysis was performed using as events only those deaths for which information on the primary cause of death was available and denoted as 'malignant neoplasm of breast' (censored = 0 and malignant neoplasm of breast = 1). Results of the Cox models were reported as hazard ratios, confidence intervals, and FDR values. Survival curves were generated using the corrected regression models. Survival curves were made assuming an age of 66 at surgery (the median age at entry for the cohort). For each N-glycan composition, high abundance was defined as the upper 25th percentile, while low abundance was defined as the lower 25th percentile. Survival analysis was performed using the R packages survcomp and survminer (Haibe-Kains et al., 2008). The experimental workflow, including the number of samples used in each analysis, is presented in Fig. 1. Comparative analysis of N-glycan structures in matched TIF and NIF: distribution across five BC subtypes and correlation with clinicopathological parameters To obtain a general overview of N-glycan profiles across TIF- and NIF-matched counterparts, we plotted all paired samples using multidimensional scaling (MDS). Forty-six glycan groups (GP1-GP46) and 17 N-glycan features were quantified (Table S3). The MDS plotting revealed considerable segregation of TIF and NIF samples (Fig. 2A). To evaluate a possible segregation of glycan patterns across the five main BC subtypes, we stratified tumor samples in accordance with the St. Gallen criteria: luminal A, luminal B, luminal B HER2-enriched, HER2, and TNBC (Esposito et al., 2015). As seen in Fig. 2A, no clear clustering between subtypes was identified, even after merging samples into three major groups: luminal, HER2, and TNBC (data not shown). The absence of a significant difference in N-glycan abundance across subtypes may be partly explained by the large difference in the numbers of samples in each subtype group (Table S2). We speculate that the partitioning of BC samples into subtypes based on immunohistochemistry is not directly transferable to N-glycan abundance and/or that N-glycan levels may reflect an alternative glycan-based tumor stratification. We also did not find any significant correlation between the abundance of TIF N-glycan structures and clinical tumor variables, including grade, type, and/or hormone or growth factor receptor status (data not shown). These results indicate that N-glycans externalized into breast tumor interstitial fluid may not be directly associated with these clinicopathological characteristics of the tumor. In order to determine which N-glycans were differentially represented in fluids originating from tumor compared to normal breast tissue, we performed differential abundance analysis (DAA) using paired TIF-NIF samples. Table 1. N-glycans with differential abundance in TIF and NIF. Statistics for each differentially abundant N-glycan group and feature (log-fold change, P-value, and FDR), as well as directionality in TIF (up or down); bisected N-glycans are highlighted in bold. In accordance with the clustering observed in the MDS plot (Fig. 2A), DAA yielded 33 N-glycan groups with significantly different abundance in TIF vs.
NIF: 13 groups with significantly elevated levels in TIF samples and 20 groups with significantly decreased levels in TIF as compared to the NIF counterparts (Fig. 2B, Table 1). Our results showed that TIF samples were enriched for particular types of sialylated (S3-S4), highly galactosylated (G3-G4) N-glycans with a high number of antennae (A3-A4), as well as for simpler N-glycans (such as monoantennary glycans, A1, and glycans without galactosylation, G0). A significant decrease in core fucosylated and bisected N-glycans, represented by GP4, GP7, GP10, GP15, GP20, GP23, GP26, GP28, and GP35, was observed in TIF. DAA with all available samples (dataset d1; see Materials and methods) yielded a set of N-glycans almost identical to the one obtained with paired samples only (data not shown). Fig. 3. N-glycans with differential abundance in TIL-enriched and TIL-depleted samples. (A) Bar plot showing N-glycan groups with differential abundance in tumor samples with low (0/+1) and high (+2/+3) overall TIL status, as determined by CD45 positivity (see Table S2 for details). (B) Bar plot showing N-glycan groups with differential abundance in samples with low vs. high TIL status, as determined by CD4 positivity. Height and directionality of the bars indicate log-fold change. Shade depicts inverse FDR: darker shade indicates lower FDR. All N-glycans depicted in the plot had FDR ≤ 0.05. Association of N-glycan pattern with TILs Tumor infiltrating lymphocytes have been shown to play an essential role in BC progression, influencing cross talk between tumor and stromal cells and providing prognostic and potentially predictive value (Bedognetti et al., 2016; Ingold Heppner et al., 2016). To reveal a possible relationship between the N-glycan structures that displayed differential abundance between TIF and NIF (Fig. 2B) and the composition of TILs within the tumor microenvironment (see Table S2 for details), we performed DAA considering the extent of lymphocyte infiltration within the tumor biopsies. A detailed evaluation of TIL subtypes, often described in the breast cancer literature, was performed by IHC using antibodies specific for particular lymphocyte antigens (Fig. S1). Prognostic potential of TIF N-glycan abundances To determine whether N-glycan abundance predicted outcomes for patients with BC, overall survival analysis was performed across all patients for whom survival information was available, using a Cox proportional hazards model. Two types of Cox models were compared: one corrected only for age at diagnosis and one corrected for age at diagnosis plus TIL status (overall TILs, estimated by CD45+ and CD4+). The Cox model corrected only for age at diagnosis yielded six N-glycan groups that were significantly associated with overall outcome (Fig. 4). Table 2. Differential abundance of N-glycan groups segregating TIF, NIF, matched serum, and MDG cancer and normal serum. Column 3 = N-glycan rank in TIF based on log-fold change; in total, 13 N-glycan groups increased in TIF and 20 N-glycan groups decreased in TIF (see Table 1). Column 4 = N-glycan rank in MDG cancer serum based on log-fold change; in total, 26 N-glycan groups increased and 18 N-glycan groups decreased. Column 5 indicates correlations between the abundance of N-glycan compositions in TIF and paired serum samples. DA, differential abundance. N-glycan groups for which abundance in MDG correlated with abundance in the TIF-paired serum dataset are highlighted in bold.
One of these, GP24 (a biantennary bigalactosylated bi-sialylated glycan, A2G2S2), had a hazard ratio below 1; that is, a high level of GP24 was predictive of superior prognosis. The remaining five groups had hazard ratios greater than 1, implying that a high abundance of these was associated with poor outcomes. These included GP5 (core fucosylated biantennary glycan, FA2), GP10 (core fucosylated bisected biantennary monogalactosylated glycan, FA2[6]BG1), GP23 (core fucosylated bisected biantennary bigalactosylated monosialylated glycans, FA2BG2S1), GP38 (mostly tetraantennary tetragalactosylated trisialylated glycans, A4G4S3), and coreF (core fucosylated glycans) (Fig. 4A). All glycans, except GP5, were among those that segregated TIF and NIF. The results of the survival analysis in which a death was classified as an event only if the cause of death was known to be 'malignant neoplasm of breast' yielded a pattern similar to the one observed for the Cox model with overall survival, with the N-glycans GP5, GP8, GP10, GP23, GP38, and coreF displaying high hazard ratios (HR ~2.0-7.0) (Fig. S2). However, despite the high HRs and the fact that the 95% confidence intervals of these N-glycans did not overlap 1, the P-values were no longer significant after correction for multiple testing (FDR). We attribute this lack of significance to the lower power associated with this model: if only outcomes with a known cause of death are classified as events, the ratio of events to censored cases is notably reduced, which has a large impact on a small(er) dataset. Correction for TILs estimated by CD45+ or CD4+ did not alter the overall results of the survival analysis. Figure 4B shows the survival curves using the age-corrected regression model for each structure associated with overall survival. The association of low GP10 and GP23 levels with poorer outcome is in agreement with the functional role of bisecting N-glycans in tumor development. Correlation of N-glycan abundance in paired TIF and serum To identify N-glycans with potential as noninvasive blood-based biomarkers for breast malignancy, we determined which of the 33 N-glycans that segregated NIF and TIF samples (Table 1, Fig. 2B) displayed a significant correlation of abundances with the 28 paired serum samples. Classic Pearson's product-moment analysis was performed to correlate N-glycan structures in TIF and serum. Of the 33 N-glycan groups, nine were correlated with N-glycan levels in serum (Fig. 5, Table S5). To determine whether any of the N-glycans present in serum reflect tumor immune status, we performed correlation analysis for the overall proportion of TILs (CD45+) and T helper cells (CD4+) within the corresponding tumors. Two N-glycan groups, GP37 (mostly triantennary outer-arm fucosylated trigalactosylated trisialylated glycans, A3F1G3S3) and GP38 (mostly tetraantennary tetragalactosylated trisialylated glycans, A4G4S3), were less abundant in serum from patients exhibiting high overall levels of TILs (CD45+) or T helper cells, as determined by CD4+ staining in the corresponding tumors (Fig. 3, Table S5). The N-glycan GP23, GP38, and coreF structures, identified as predictive of overall survival (Fig. 4), demonstrated a significant correlation of abundance between TIF and paired serum (see Table S5 for details).
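A minimal sketch of the per-glycan TIF-serum correlation test with FDR correction is shown below, using synthetic stand-in data; the object names and values are hypothetical.

```r
set.seed(1)
# Synthetic stand-in: matched TIF and serum log2 abundances for a few glycan groups and 28 patients.
tif   <- matrix(rnorm(5 * 28), nrow = 5, dimnames = list(paste0("GP", 1:5), NULL))
serum <- tif + matrix(rnorm(5 * 28, sd = 1.5), nrow = 5)

# Pearson correlation per glycan group, then FDR correction across groups.
tests <- lapply(rownames(tif), function(g)
  cor.test(tif[g, ], serum[g, ], method = "pearson"))
res <- data.frame(
  glycan = rownames(tif),
  r      = sapply(tests, function(x) unname(x$estimate)),
  p      = sapply(tests, function(x) x$p.value)
)
res$fdr <- p.adjust(res$p, method = "fdr")
res[res$fdr < 0.05, ]   # groups with significantly correlated TIF and serum levels
```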
Fig. 6. Overview of the main results obtained in the study. N-glycan groups (1-46) and N-glycan features; rows = traits and columns = N-glycan IDs. Black dots denote which N-glycan groups and features were significantly associated with a given trait, based on the analyses described in the corresponding results sections. The prominent N-glycan structures within a given N-glycan peak are specified below the GP ID. Validation of N-glycan structures in an independent serum cohort To validate the results obtained from glycan profiling of matched TIF/serum samples, the 33 DA N-glycans were analyzed across the independent MDG serum dataset. The MDG cohort contains samples obtained from healthy controls and BC patients (Saldova et al., 2014) and was profiled with the same UPLC-based technology, thus minimizing technical variability between experimental platforms. DAA was applied to the MDG dataset on log2-transformed data, in agreement with the protocol applied to our TIF-NIF BC cohort. Five of 12 N-glycan groups (highlighted in bold in Table 2) displayed a significant correlation of abundance in the MDG and TIF-paired serum datasets. Segregation analysis based on these groups (GP8, GP9, GP14, GP23, and coreF) revealed a significant separation between normal and cancer MDG serum samples and showed a significant correlation in matched serum (Fig. S3). Discussion To the best of our knowledge, this study is the first analysis of the N-glycome in the tumor interstitium of patients with BC. Experiments were designed to identify aberrant glycosylation associated with tumor growth and progression. The study is part of a comprehensive project focused on characterization of the entire molecular complement of breast tumor interstitial fluid, aiming to identify integrated signatures associated with events underlying breast tumor metabolism, as detectable in blood (Espinoza et al., 2016; Halvorsen et al., 2017). The analysis included a detailed morphological characterization of the tumor lesions and an evaluation of the spatial heterogeneity of TILs in the tumor specimens, to elucidate the influence of the tumor immune composition on the secreted N-glycome complement. Data were subjected to bioinformatics analysis to characterize the N-glycome in the breast tissue interstitium and to reveal potentially valuable correlations of aberrant glycan patterns with breast tumor biology, including clinical outcome and presence in the blood. Finally, the data were computationally validated with the independent MDG serum dataset (Saldova et al., 2014), which contains breast carcinoma and nonmalignant serum samples profiled with analogous technology. Figure 6 summarizes the main results of our analyses. N-glycan patterns in TIF Multidimensional scaling and DAA of N-glycan profiles revealed a distinct segregation between TIF and NIF. We detected 33 N-glycan groups and features with differential levels between the two groups. N-glycan structures displaying significantly higher abundance in TIF as compared to NIF belong mainly to the monoantennary type [GP1, GP3, and A1 (the sum of all monoantennary glycans)]. Our results are consistent with those of a recent study reporting a clear segregation of N-glycans circulating in the blood of patients with BC and normal individuals (Saldova et al., 2014), particularly for the core fucosylated N-glycans (Hamfjord et al., 2015; Kizuka and Taniguchi, 2016), which are represented by the GP4, GP7, GP10, GP15, GP20, GP23, GP26, GP28, and GP35 structures in our TIF samples. It has previously been shown that specific glycan structures have different impacts on cell adhesion, which is one of the major molecular events during malignant transformation that affects cancer cell fate (Moremen et al., 2012).
Interestingly, we found nine bisecting structures (GP4, GP7, GP10, GP15, GP20, GP23, GP26, GP28, and GP35) to be significantly more abundant in NIF. The observed depletion of bisecting N-glycans in TIF is in agreement with the current consensus regarding the functionality and role of bisecting N-glycans in cancer progression. Recent research has shown that bisecting GlcNAc modification has a significant effect on cell survival and tumor aggressiveness (Kizuka and Taniguchi, 2016). This phenomenon has also been reported for cadherins, proteins that have a substantial impact on cell adhesion. When modified by bisecting glycans, cadherins reinforce cell adhesion and are consequently associated with cancer suppression. In contrast, cadherins bearing branched complex N-glycans are less involved in the control of cell adhesion and are associated with cancer progression (Carvalho et al., 2016). It has been proposed, mainly on the basis of in vitro model systems, that the presence of this unique bisecting structural feature has important implications for the entire cellular glycan complement. Thus, enzymes responsible for producing N-glycan groups other than those with bisecting branches (e.g., GnT-IV, GnT-V) are almost completely inhibited by the presence of a bisecting GlcNAc residue in the N-glycan molecule (Stanley et al., 2009). The results of our study support this notion, clearly demonstrating a significantly higher abundance of a particular set of bisecting glycan species in the interstitium of nonmalignant lesions as compared to their neoplastic counterparts. Correlation between TILs and N-glycan composition Tumor infiltrating lymphocytes are frequently found within tumors, suggesting that tumors trigger an immune response in the host. The presence of TILs within the tumor microenvironment has been reported as an important biomarker linked to clinical outcome (Ingold Heppner et al., 2016). In this study, we immune-profiled particular subsets of TILs, namely those most often described in connection with BC in the current literature (Denkert et al., 2010; Salgado et al., 2015). We identified a number of glycoconjugates in TIF that were significantly associated with the proportion of TILs (Fig. 3), as determined by immunohistochemistry. In samples with high levels of total TILs, we observed an increase in simple and high-mannose N-glycan features (G0, G1, S0, and highM) and, inversely, a decrease in the abundance of highly complex N-glycan groups (A4, G4, S4, GP25, GP37, GP38, GP41, GP45, etc.). To our knowledge, our data are the first evidence highlighting a direct impact of the tumor immune complement on the secreted N-glycan profile in breast tumors. Relationship between N-glycan patterns and clinical outcome Our survival analysis, with overall patient survival as the outcome, showed a significant association between N-glycome profiles and the overall survival of patients with BC. Cox proportional hazards regression revealed five N-glycan peaks to be significantly associated with poor survival (GP5, GP10, GP23, GP38, and coreF) and one glycan peak (GP24) as predictive of positive clinical outcome (Fig. 4). All glycan peaks, except GP5, were among those that segregated TIF and NIF. We speculate that the absence of GP5 (core fucosylated biantennary glycan, FA2) among the 33 differentially abundant glycans segregating TIF and NIF may be related to the fact that only a subset of patients exhibit a high abundance of this N-glycan, which is prognostic for overall survival according to our analyses.
Indeed, the remaining patients display GP5 levels similar to those observed in NIF. Thus, the differential abundance of GP5 will not be detected when stratifying NIF and TIF by DAA. The Cox model in which deaths were classified as events only if the cause of death was known and annotated as 'malignant neoplasm of breast' did not yield significant results after FDR correction (Fig. S2). Although it might be that the N-glycan groups identified as prognostic in the Cox model with overall patient survival have inflated P-values and may be considered partly false positives, we hypothesize that the observed lack of significance merely reflects the decreased power (larger confidence intervals) of this model for an already small dataset. This interpretation is supported by the fact that the two Cox models show similar results in terms of the N-glycan groups identified as having the highest hazard ratios (GP5, GP10, GP23, GP38, and coreF), with significant P-values before FDR correction. We cannot exclude that patients for whom the cause of death is unknown died as a consequence of the cancer, even if the primary cause of death was not BC itself but a 'side effect' of the disease. The latter notion might still be of interest for the prediction of patient prognosis using the identified N-glycan signature. Although correction for TIL status (CD45+ or CD4+) did not affect the overall results of the survival analysis, the corrected P-value for GP38 did decrease when TILs were added to the model, highlighting the relationship between GP38 levels and TIL status seen in the DAA. Reduced levels of GP38 correlated with a high proportion of overall and CD4+ TILs, which contribute to tumor suppression (Zanetti, 2015), thus supporting an association between tumor immune status and overall survival in patients with BC. The identification of GP10 and GP23 in relation to clinical outcome supports the suggested 'protective' status of bisecting glycans (Kizuka and Taniguchi, 2016), whereas decreased levels of coreF have recently been reported to contribute to the malignancy of gastric cancer (Zhao et al., 2014). Correlation of N-glycan abundance in TIF and serum samples Changes in N-linked glycan structures in the serum or plasma of patients diagnosed with breast, prostate, ovarian, pancreatic, liver, or lung cancer have recently been reported (Lan et al., 2016 and references therein). Alterations in the N-glycome profile may be the result of a primary response or of a general systemic reaction of the body to the progression and metabolism of a tumor. Additionally, the high degree of complexity and dynamic range of biomolecules externalized physiologically into the blood from other tissues can mask molecules released from the primary tumor. Comparative TIF-serum analysis helps to discriminate biomolecules released directly from the primary tumor into the tumor interstitium from those reflecting the systemic body response. In this study, of the 33 N-glycans that segregated TIF and NIF samples, the levels of nine glycan groups (GP1, GP8, GP9, GP14, GP23, GP28, GP37, GP38, and coreF) were significantly correlated with N-glycan levels in serum (Fig. 5). One of these N-glycan groups, GP1 (monoantennary glycan, A1), displayed a significant inverse correlation with the corresponding levels in serum, that is, high levels in TIF corresponded to low levels in serum.
This observation may indicate that the molecules carrying this particular glycan feature accumulate within the tumor interstitium as a primary tumor response; however, this process is not associated with subsequent transport into the bloodstream. A more detailed look at the N-glycan profiles in TIF and NIF showed that two other N-glycan groups, GP3 and A1, with high TIF abundance (Fig. 2B) also exhibited a negative association with the corresponding serum levels, although these trends were not significant (correlation scores of −0.19 and −0.44, respectively). The majority of N-glycans detected in these peaks are core fucosylated biantennary structures, except for GP38. Our findings support previous reports describing decreased levels of some core fucosylated glycans in the sera of patients with BC (Saldova et al., 2014). The low levels of these types of N-glycans in the sera of patients with BC may indicate that biomolecules in the tumor interstitium bearing these N-glycan structures do not reach the bloodstream but, rather, are involved in intercellular cross-communication within the local tumor space. This assumption may be supported by the functional features reported for core fucosylated glycans (Miyoshi et al., 2008). Alternatively, the inverse association between particular glycans in serum and TIF may be the result of a high dilution factor as well as the expected presence of other, more abundant glycan species originating from non-tumor sites, thus masking the presence of tumor-derived biomolecules in the blood. Computational validation of the paired TIF-NIF serum data was achieved through comparison with the independent MDG N-glycan serum dataset (Saldova et al., 2014). Among the nine N-glycan groups whose levels in TIF were significantly correlated with those in matched serum samples, five (GP8, GP9, GP14, GP23, and coreF) were validated within the MDG serum dataset. The fact that we did not detect more overlap in this validation experiment may be explained by the fact that, in contrast to the MDG serum dataset, our TIF-matched serum dataset did not include blood samples from healthy individuals, which are important for establishing the correct baseline of normality. Levels of most biantennary glycans, such as α2,3-sialic acid-modified N-glycan chains, were decreased in the sera of patients with BC in this study, which is in agreement with previously reported data. The opposite trend has been observed in the sera of lung cancer patients, which are characterized by a high level of biantennary N-glycan chains containing the sialyl Lewis x structure (SLex) (Lan et al., 2016). Serum levels of biantennary N-glycan chains carrying core fucose or both core fucose and sialic acid, as well as the levels of complex triantennary N-glycans containing only one sialic acid or both fucose and sialic acid, were decreased in cancer samples as compared to normal controls. Conclusions The results of our study showed (a) a clear segregation of the patterns of N-glycan release from tumor vs. normal mammary tissue; (b) elevated levels of particular bisecting glycans (GP4, GP7, GP10, GP15, GP20, GP23, GP26, GP37, and GP28), which contribute to tumor suppression, in the normal breast tissue interstitium; (c) an association of several N-glycans (A1, G0, GP6, M5, highM, GP21, GP41, GP38, GP45, GP37, GP43, GP26, GP32, and S2) in the breast tumor interstitium with the proportion and composition of infiltrating lymphocyte populations; and (d) a correlation of the N-glycan pattern in TIF and corresponding serum with clinical outcome.
Levels of five differentially abundant N-glycans (GP8, GP9, GP14, GP23, and coreF) were correlated between TIF and matched serum. Importantly, the prognostic potential of GP23 and coreF was validated in an independent serum cohort. These N-glycans most likely reflect the signaling events underlying tumor biology and progression and may have potential for use as biomarkers to improve the diagnostic and prognostic stratification of BC. In the current study, we were not able to estimate whether particular adjuvant therapies would have had any impact on the abundance patterns of released N-glycans in association with clinical outcome, owing to the diversity of the treatments applied to the patients included in the discovery set. Further evaluation of the presented data using a large independent serum dataset from patients with breast cancer should be performed in the future. Supporting information Additional supporting information may be found online in the Supporting Information section at the end of the article: Fig. S1. Representative images of TIL distribution within a single tumor biopsy based on IHC analysis. Fig. S2. Cox proportional hazards regression with known cause of death. Fig. S3. Segregation of MDG BC and normal serum samples based on the levels of the five N-glycan groups exhibiting differential abundance across TIF, NIF, and matched serum. Table S1. Biopsies with ≥ 1% of invasive cancer cells positively stained for ER and PgR were classified as positive. Table S2. Complete characteristics of the 85 breast cancer patients enrolled in the study. Table S3. Glycan peaks and corresponding N-glycan features. Table S4. N-glycans identified as differentially abundant between samples with high and low tumor TILs. N-glycans are reported with associated log-fold changes and adjusted P-values. Table S5. N-glycans with significantly correlated abundances between paired TIF and serum samples. N-glycans are reported with associated Pearson's correlation scores and adjusted P-values.
Chemical equilibrium in AGB atmospheres: successes, failures, and prospects for small molecules, clusters, and condensates Chemical equilibrium has proven extremely useful to predict the chemical composition of AGB atmospheres. Here we use a recently developed code and an updated thermochemical database, including gaseous and condensed species involving 34 elements, to compute the chemical equilibrium composition of AGB atmospheres of M-, S-, and C-type stars. We include for the first time TixCy clusters, with x = 1-4 and y = 1-4, and selected larger clusters ranging up to Ti13C22, for which thermochemical data are obtained from quantum chemical calculations. We find that, in general, chemical equilibrium reproduces well the observed abundances of parent molecules in circumstellar envelopes of AGB stars. There are, however, severe discrepancies, of several orders of magnitude, for some parent molecules: HCN, CS, NH3, and SO2 in M-type stars, H2O and NH3 in S-type stars, and the hydrides H2O, NH3, SiH4, and PH3 in C-type stars. Several molecules not yet observed in AGB atmospheres, like SiC5, SiNH, SiCl, PS, HBO, and the metal-containing molecules MgS, CaS, CaOH, CaCl, CaF, ScO, ZrO, VO, FeS, CoH, and NiS, are good candidates for detection with observatories like ALMA. The first condensates predicted to appear are carbon, TiC, and SiC in C-rich atmospheres and Al2O3 in O-rich outflows. The most probable gas-phase precursors of dust are acetylene, atomic carbon, and/or C3 for carbon dust, SiC2 and Si2C for SiC dust, and atomic Al together with AlOH, AlO, and Al2O for Al2O3 dust. In the case of TiC dust, atomic Ti is probably the main supplier of titanium. However, chemical equilibrium predicts that clusters like Ti8C12 and Ti13C22 become the major reservoirs of titanium at the expense of atomic Ti in the region where condensation of TiC is expected to occur, suggesting that the assembly of large TixCy clusters could be related to the formation of the first condensation nuclei of TiC. Introduction During their late evolutionary stages, low- and intermediate-mass stars (< 8 M⊙) become red giants, increasing their radius by 2-3 orders of magnitude and decreasing their surface temperature down to 2000-3000 K. At these temperatures, the material is essentially molecular. When these stars enter the so-called Asymptotic Giant Branch (AGB) phase, they start to lose mass through nearly isotropic winds that give rise to circumstellar envelopes, which are mainly composed of gaseous molecules and dust grains (Höfner & Olofsson 2018). Thermochemical equilibrium provides a simple but incredibly useful starting point to describe the chemical composition of matter in the atmospheres of AGB stars. For example, chemical equilibrium has provided an elegant explanation of the marked chemical differentiation between oxygen-rich and carbon-rich AGB stars based on the high bond energy of carbon monoxide (Russell 1934). The high abundance of CO makes it trap most of the limiting element and allows the element in excess to form either oxygen-bearing molecules when C/O < 1 or carbon-bearing molecules when C/O > 1. Moreover, the discoveries of many of the molecules found in envelopes around evolved stars, like HCP, PO, AlOH, or TiO (Agúndez et al. 2007; Tenenbaum et al. 2007; Tenenbaum & Ziurys 2010; Kamiński et al. 2013), have been largely inspired by the predictions of chemical equilibrium calculations like those of Tsuji (1964, 1973).
During the last decades, however, observations have revealed a significant number of discrepancies with the scenario depicted by chemical equilibrium, e.g., the discovery of warm water vapor in carbon stars, which point to non-equilibrium processes being at work in AGB atmospheres. Chemical equilibrium is also very useful to study the types of dust formed in AGB ejecta. We know that AGB stars are the main sources of dust in the Galaxy (Gehrz 1989), but the identification of the chemical nature of the dust is a difficult task. Only a handful of solid materials have been identified so far in circumstellar envelopes of AGB stars (e.g., Waters 2011), while some information is also available from the analysis of presolar material in meteorites (Lodders & Amari 2005). Chemical equilibrium can provide the basic theoretical scenario with the types of condensates that are thermodynamically favored and their condensation temperatures, which determine the sequence in which they are expected to appear as matter flows from the AGB star and cools (Sharp et al. 1995; Lodders & Fegley 1997, 1999; Gail & Sedlmayr 2013). Although the formation of dust in AGB outflows is a complex process likely governed by chemical kinetics, as indicated by the extensive theoretical work of Gail & Sedlmayr (see, e.g., Gail & Sedlmayr 2013), chemical equilibrium can provide clues on the sequence of clustering that initiates the formation of the first solid materials from a gas of atoms and small molecules. The identification of the most thermodynamically favored intermediate clusters is an important piece of information. Several works have studied, from a chemical equilibrium point of view, the clustering processes that initiate the formation of some of the condensates predicted to appear earliest in AGB winds, such as MgO (Köhler et al. 1997), SiC (Yasuda & Kozasa 2012; Gobrecht et al. 2017), silicates (Goumans & Bromley 2012), and Al2O3 (Álvarez-Barcia & Flores 2016; Gobrecht et al. 2016; Boulangier et al. 2019). Nowadays, the unprecedented angular resolution and sensitivity of observatories like ALMA have the potential to identify the building blocks of dust in the atmospheres of AGB stars, providing constraints on the clustering process based on their abundances and spatial distributions (see, e.g., Kamiński et al. 2017; Decin et al. 2017; McCarthy et al. 2019). In this study, we revisit thermochemical equilibrium in AGB atmospheres with different C/O ratios (M, S, and C stars) using the latest thermochemical data, with an interest in confronting the predictions of chemical equilibrium with the current observational situation. Our main motivations are threefold. (1) Review the successes of chemical equilibrium in explaining the observed abundances of parent molecules in AGB envelopes and identify the main failures, all of which must be accounted for by any non-equilibrium scenario proposed for the atmospheres of AGB stars. (2) Identify potentially detectable molecules not yet observed in AGB atmospheres. (3) Compute the condensation sequence of solid materials in the atmospheres of M, S, and C stars and evaluate which are the most likely gas-phase precursors of the different condensates and which thermodynamically favorable clusters could play a role as intermediate species in the clustering process.
In particular, we have computed thermochemical properties for various TixCy clusters to evaluate their abundances and their role in the formation of titanium carbide dust in the atmospheres of C-type stars. Method of computation The composition of a mixture of gases and condensates at thermochemical equilibrium is determined by the minimization of the Gibbs free energy of the system, and it depends on only three input parameters: pressure, temperature, and the relative abundances of the elements. The calculations need to be fed with thermochemical data for the species included. Many programs based on different algorithms have been developed to compute chemical equilibrium in the atmospheres of cool stars, brown dwarfs, and planets. We can distinguish between two groups of methods: those based on equilibrium constants and those that minimize the total Gibbs free energy of the system. In the first group, the mathematical problem consists of a set of equations of conservation of each element, in which the partial pressure of each molecule is expressed in terms of the partial pressures of the constituent atoms via the equilibrium constant of atomization. In a first step, the system is solved only for the most abundant elements, and then the whole system, including all trace elements, is solved iteratively using the Newton-Raphson method or similar ones. This method, originally developed by Russell (1934) for diatomic molecules and generalized by Brinkley (1947), was later applied by Tsuji (1973) to the atmospheres of cool stars. The method has been implemented with different refinements by Tejero & Cernicharo (1991) and by codes like CONDOR (Lodders & Fegley 1993), GGChem (Woitke et al. 2018), and FastChem (Stock et al. 2018). The second type of method was introduced by White et al. (1958) and solves the problem of minimization of the total Gibbs energy of a mixture of species subject to the conservation of each element. This method is more general in that it makes no distinction between atoms, molecules, and condensates, as all of them are simply constituent species of the mixture. The method is widely used by different programs, to mention just a few: SOLGAS (Eriksson 1971), NASA/CEA (Gordon & McBride 1994), and, more recently, TEA (Blecic et al. 2016). Zeleznik & Gordon (1960) demonstrated that the equilibrium-constants method and the Gibbs minimization method are computationally identical, and therefore the various existing programs should converge to the same equilibrium composition regardless of the method used. Important differences, however, can appear if the species included are not the same or if the thermochemical data adopted are different. In fact, the precision of chemical equilibrium calculations is essentially limited by the completeness of the set of species included and the availability of accurate thermochemical data. Our chemical equilibrium code uses the Gibbs minimization method and is based on the algorithm implemented in the NASA/CEA program (Gordon & McBride 1994). The code has been developed in recent years and has been applied to describe the chemical composition of hot Jupiter atmospheres by Agúndez et al. (2014a).
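To make the Gibbs minimization idea concrete, the toy R sketch below finds the equilibrium of a three-species H/O gas (H2, O2, H2O) at 298 K and 1 bar by minimizing the total Gibbs energy over a single reaction extent. The formation energies are standard 298 K textbook values, element conservation is enforced here by the reaction-extent parametrization rather than by the Lagrange-multiplier scheme used in NASA/CEA-type codes, and the example is of course far simpler than the 900+ species calculation described in the text.

```r
# Toy Gibbs-energy minimization for 2 H2 + O2 <-> 2 H2O at T = 298.15 K, P = 1 bar.
# DfG0 values are standard 298 K data (kJ/mol); element balance is enforced by
# parametrizing the composition with a single reaction extent xi.
R_gas <- 8.314462e-3                      # kJ mol-1 K-1
T_K   <- 298.15
dfG0  <- c(H2 = 0, O2 = 0, H2O = -228.6)  # standard Gibbs energies of formation (gas)

total_G <- function(xi) {                 # start from 2 mol H2 + 1 mol O2
  n <- c(H2 = 2 - 2 * xi, O2 = 1 - xi, H2O = 2 * xi)
  x <- n / sum(n)                         # mole fractions (ideal gas at P = 1 bar)
  sum(n * (dfG0 / (R_gas * T_K) + log(x)))   # total G / RT
}

opt <- optimize(total_G, interval = c(1e-6, 1 - 1e-6))
xi  <- opt$minimum
c(H2 = 2 - 2 * xi, O2 = 1 - xi, H2O = 2 * xi)   # equilibrium moles; H2O dominates, as expected
```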
Thermochemical data To solve chemical equilibrium by minimization of the Gibbs free energy of a system, the basic thermodynamic quantity needed is the free energy of each species as a function of temperature, g0(T). This quantity, also known as the standard-state chemical potential, can be expressed as g0(T) = H0(T) − T S0(T), where H0(T) and S0(T) are the standard-state enthalpy and entropy, respectively, of the species, and standard state refers to a standard pressure of 1 bar. These thermochemical properties are either given directly in compilations like NIST-JANAF (Chase 1998) or are found parameterized as a function of temperature through NASA polynomial coefficients (see, e.g., McBride et al. 2002) in databases such as NASA/CEA (McBride et al. 2002) or the Third Millennium Thermochemical Database (Goos, Burcat, & Ruscic). In this study we considered 919 gaseous species and 185 condensed species involving up to 34 elements. Thermochemical data were mostly taken from the library of NASA/CEA (McBride et al. 2002) and from the Third Millennium Thermochemical Database (Goos, Burcat, & Ruscic). Elemental composition Optical and infrared observations of AGB stars have found that the atmospheric elemental composition is nearly solar, with the exception of carbon and the s-process elements, which are significantly enhanced in carbon stars because they are brought out to the surface by dredge-up processes. Determinations of the abundances of C, N, and O in AGB stars indicate that these elements have essentially solar abundances, except for carbon, which in S- and C-type stars is enhanced, resulting in C/O ratios of ~1 and >1, respectively (Smith & Lambert 1985; Lambert et al. 1986). In our calculations we consider C/O ratios of 0.54 (solar), 1.00, and 1.40 for M-, S-, and C-type stars, respectively. Elements produced via neutron capture in the s-process, such as Sr, Zr, and Ba, are found to have moderate abundance enhancements in carbon stars (Abia et al. 2002). Other elements for which significant deviations from the solar abundances could be expected in AGB stars are fluorine and lithium. In the case of fluorine, however, recent observational studies find only mild enhancements and point to abundances very close to the solar value (Abia et al. 2015, 2019). In the case of lithium, despite the existence of a few lithium super-rich stars (log ε(Li) > 4), on average the abundance of lithium in galactic carbon stars is found to be below that in the Sun (Abia et al. 1993). The abundances adopted for the 34 elements included in the chemical equilibrium calculations are given in Table 1. Notes to Table 1: (a) abundance defined as log ε(X) = 12 + log(X/H); abundances are solar, from Asplund et al. (2009), unless otherwise stated; (b) Abia et al. (1993); (c) the abundance of C in S-type and carbon stars is increased over the solar value to give C/O ratios of 1.0 and 1.4, respectively; (d) Abia et al. (2015); (e) the abundances of the s-process elements Rb, Sr, Zr, and Ba are increased over the solar values by 0.36, 1.01, 0.88, and 0.89 dex, respectively, in S-type stars (Abia & Wallerstein 1998) and by 0.26, 0.46, 0.67, and 0.51 dex, respectively, in carbon stars (Abia et al. 2002). Radial profiles of temperature and pressure The winds associated with AGB stars give them extended atmospheres in which, as one moves away from the star, the gas cools and the density of particles drops. The temperatures and pressures in this extended atmosphere are critical to establish the chemical equilibrium composition. For example, high temperatures favor an atomic composition while low temperatures favor a molecular gas. It is therefore very important to have a realistic description of how the gas temperature and pressure vary with radius.
The situation is complicated by two facts. First, the atmospheres of AGB stars are not static but are affected by dynamical processes ultimately driven by the pulsation of the star. Variability of the infrared flux has long been characterized observationally and is interpreted as a consequence of the stellar pulsation, during which the size and effective temperature of the star experience important changes (Le Bertre 1988; Suh 2004). Second, the low gravity of AGB stars makes the extended atmosphere prone to convective processes leading to asymmetric structures, hot spots, and high-density clumps. This complex morphology is predicted by 3D hydrodynamical simulations (Freytag et al. 2017) and is starting to be characterized in detail with high angular resolution observations at infrared and (sub-)mm wavelengths (e.g., Khouri et al. 2016; Vlemmings et al. 2017; Fonfría et al. 2019). Despite the complications related to the variation with time and the complex morphology, for our chemical equilibrium calculations we adopt a simple scenario representative of a generic AGB star in which the atmosphere is spherically symmetric and temperature and pressure vary smoothly with radius. Effective temperatures of AGB stars are usually in the range 2000-3000 K (Bergeat et al. 2001). Here we adopt an effective temperature of 2500 K. The temperature gradient across the extended atmosphere can usually be well accounted for using a power law, i.e., T(r) ∝ r^-α, with values of α in the range 0.5-1.0. Fig. 1. The radial temperature (upper panel) and pressure (lower panel) profiles adopted in this study as representative of an AGB atmosphere are shown as thick magenta curves. Dashed curves show model profiles: from Figure 1 of Bladh et al. (2019), and, in green and blue, from a 3D model of an AGB atmosphere with and without radiation pressure on dust (models st28gm06n06 and st28gm06n26 from Freytag et al. 2017, where profiles are averaged over spherical shells and time). The black dotted line shows the empirical profile derived for the carbon star IRC +10216. In the bottom panel we also show, as thin solid lines, various radial pressure profiles derived from high angular resolution observations of the radio continuum of various AGB stars (Reid & Menten 1997), of Mira from ALMA observations of SiO and H2O (Wong et al. 2016) and of CO v = 1 (Khouri et al. 2018), and of Mira, R Leo, W Hya, and R Dor from ALMA observations of the (sub)mm continuum (Vlemmings et al. 2019). For example, a gray atmosphere, in which α approaches 0.5 for r > R*, has been adopted to model high angular resolution observations of continuum emission at radio and (sub)mm wavelengths (Reid & Menten 1997; Vlemmings et al. 2019). Several works have modeled molecular lines arising from the inner envelope around the carbon star IRC +10216, finding values of α in the range 0.55-0.58 (Fonfría et al. 2008; De Beck et al. 2012; Agúndez et al. 2012). The 3D hydrodynamic models of Freytag et al. (2017) result in steeper radial temperature profiles close to the star, with a power law index of 0.8-0.9 inside 2 R*, and shallower profiles in the 2-3 R* region. For our chemical equilibrium calculations, we adopt a power law for the radial temperature profile with an index of 0.6 (see the thick magenta line in the upper panel of Fig. 1), which results in a temperature profile similar to that derived for IRC +10216 and to those resulting from 3D hydrodynamic models.
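As a quick numerical sketch of the adopted temperature profile, only the 2500 K effective temperature and the power-law index of 0.6 are taken from the text; the function name is arbitrary.

```r
# Adopted radial temperature profile: T(r) = Teff * (r / Rstar)^(-alpha), with alpha = 0.6.
T_gas <- function(r_over_Rstar, Teff = 2500, alpha = 0.6) {
  Teff * r_over_Rstar^(-alpha)
}

T_gas(c(1, 2, 5, 10))   # approximately 2500, 1650, 950, and 630 K
```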
In our adopted profile, the gas temperature decreases from 2500 K at the star surface down to ∼630 K at 10 R*. The radial pressure profile is expected to be given by hydrostatic equilibrium at the star surface, while in the outer parts of the circumstellar envelope, where the gas has reached the terminal expansion velocity, mass conservation implies that the gas density varies with radius as a power law, n(r) ∝ r^-β, with β = 2. The region in between these two parts, the extended atmosphere, is a complex environment where the gas is accelerated and the radial density profile should be shallower than at hydrostatic equilibrium but steeper than the r^-2 power law. In general, this behavior is supported by models and observations, although estimated densities can easily differ by several orders of magnitude among different studies. For example, hydrodynamic models that explain the formation of AGB winds through a combination of stellar pulsation and radiation pressure on dust grains can provide estimates of the gas density in the extended atmosphere (Höfner & Olofsson 2018). These models, however, can result in very different densities depending on the adopted parameters and the processes included. For example, compare in the lower panel of Fig. 1 the various dashed curves, which correspond to a 1D model by Bladh et al. (2019) at two different phases and to two 3D models from Freytag et al. (2017). High angular resolution observations from radio to infrared wavelengths can provide constraints on the densities in the extended atmosphere of AGB stars (see the thin solid curves in the lower panel of Fig. 1). Infrared observations of R Dor, W Hya, and IK Tau point to β values between 2.7 and 4.5 in regions extending out to a few stellar radii (Khouri et al. 2016; Ohnaka et al. 2017; Adam & Ohnaka 2019). From ALMA (sub)mm continuum observations of the low mass loss rate objects Mira (o Cet), R Dor, W Hya, and R Leo, Vlemmings et al. (2019) derive values of β in the range 5-6 for the 1-3 R* region. An even steeper radial density profile is obtained for the same 1-3 R* region from high angular resolution observations of radio continuum emission from various AGB stars (Reid & Menten 1997) and from 3D hydrodynamic models (Freytag et al. 2017). As an illustration of the differences found in the literature, we show in the lower panel of Fig. 1 three radial density profiles derived from ALMA data of the star Mira, using SiO and H2O (Wong et al. 2016), CO v = 1 J = 3-2 (Khouri et al. 2018), and the (sub)mm continuum (Vlemmings et al. 2019). Note that although the slopes derived are similar, the absolute densities differ by as much as two orders of magnitude. The most striking feature is that the densities needed to properly excite the observed SiO and H2O lines are significantly higher than those derived from vibrationally excited CO or the (sub)mm continuum. It is clear that further observational studies are needed to establish which are the best density tracers and to converge on the density estimates. It seems that a single power law cannot adequately reproduce the variation of density across the whole extended atmosphere. It is likely that the radial density profile becomes progressively less steep as one moves away from the star, until it reaches a power law with β = 2 outside the acceleration region. The radial pressure profile adopted for the chemical equilibrium calculations captures this idea and is shown as a thick magenta line in the lower panel of Fig. 1.
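To put rough numbers on the adopted profiles, the sketch below evaluates the power-law temperature profile (Teff = 2500 K, α = 0.6) and a pressure curve obtained by log-log interpolation through the approximate anchor values quoted in the next paragraph (5×10^-5 bar at the surface, a few 10^-8 bar at 2 R*, ∼10^-11 bar at 10 R*). The interpolation is only an illustrative stand-in for the actual adopted pressure profile (the thick magenta curve in Fig. 1), whose analytic form is not reproduced here.

```python
import numpy as np

T_EFF, ALPHA = 2500.0, 0.6   # adopted effective temperature and power-law index

def temperature(r):
    """Adopted radial temperature profile, T(r) = T_eff (r/R*)^-alpha."""
    return T_EFF * r**(-ALPHA)

# Approximate pressure anchors quoted in the text (used here only to interpolate)
r_anchor = np.array([1.0, 2.0, 10.0])      # radius in units of R*
p_anchor = np.array([5e-5, 1e-8, 1e-11])   # pressure in bar

def pressure(r):
    """Log-log interpolation through the quoted anchor pressures (a sketch)."""
    return 10**np.interp(np.log10(r), np.log10(r_anchor), np.log10(p_anchor))

for r in (1.0, 2.0, 5.0, 10.0):
    print(f"r = {r:4.1f} R*   T = {temperature(r):6.0f} K   p = {pressure(r):.1e} bar")
# T(10 R*) comes out at ~630 K, consistent with the value quoted above.
```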
With the pressure profile adopted here (thick magenta line in Fig. 1), the pressure at the star surface is 5×10^-5 bar, in agreement with typical values from hydrodynamical models. The pressure then decreases to a few 10^-8 bar at 2 R*, in between the values derived from high angular resolution observations, and finally reaches ∼10^-11 bar at 10 R*, which is in the range of values expected for a high mass loss rate of ∼10^-5 M⊙ yr^-1, as is the case of IRC +10216.

Successes

Thermochemical equilibrium has been remarkably successful at explaining the molecular composition of circumstellar envelopes around AGB stars (e.g., Tsuji 1964, 1973). A major success is that chemical equilibrium has provided the theoretical framework to understand the chemical differentiation between envelopes around M-, S-, and C-type AGB stars according to the elemental C/O ratio at the star surface. In this scenario, CO is the most abundant molecule after H2 and it locks most of the carbon or oxygen depending on whether the C/O ratio is below or above one. This basic fact is at the heart of the most widely used method to determine mass loss rates of AGB stars, from observations of circumstellar emission in rotational lines of CO (Höfner & Olofsson 2018). Predictions of chemical equilibrium on the budget of the major elements have in the main been confirmed by observations. In Table 2 we list the parent molecules observed in envelopes around AGB stars of M-, S-, and C-type and the ranges of abundances derived. By parent molecules we refer to those that are formed in the inner regions of AGB envelopes, as opposed to daughter molecules, which are formed in the external layers of the envelope. For most of them the parent character of the molecule has been confirmed by observation of high energy lines or through interferometric maps. For a few of them, the information on their spatial distribution is not conclusive, although formation in the inner envelope is the most likely origin. The observed abundances are compared in Fig. 2 with the results of the chemical equilibrium calculations performed in this study for a standard AGB atmosphere, i.e., using the elemental composition given in Table 1 and the pressure-temperature profile discussed in Sec. 2.4. In the calculations presented in Fig. 2 only gaseous species are included. Calculated abundances are expressed here as mole fractions, while observed abundances are usually given in the literature relative to H2 (where it is implicitly assumed that most hydrogen is molecular). These two quantities are nearly identical over most of the atmosphere. Only in the hot innermost regions, where atomic hydrogen may become more abundant than H2 (inside ∼2 R* for our adopted radial profiles of pressure and temperature), can the two abundance measures differ, by as much as a factor of two. For our purposes this is not very important, since calculated and observed abundances are compared at the level of the order of magnitude. Chemical equilibrium calculations (e.g., Tsuji 1964, 1973) make clear predictions on the major reservoirs of C, N, and O in AGB atmospheres. The major carrier of oxygen (apart from CO) in envelopes around oxygen-rich AGB stars is predicted to be H2O, something that has been verified by observations (González-Alfonso & Cernicharo 1999; Maercker et al. 2016). In carbon-rich AGB atmospheres, the major carriers of carbon (other than CO) are predicted to be C2H2 and HCN, which also have solid observational support (Fonfría et al. 2008; Schöier et al. 2013).
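A small numerical aside may help with the two abundance conventions just mentioned. The sketch below compares the mole fraction of a trace species with its abundance relative to H2, the latter following the usual observational convention of counting one H2 per two hydrogen nuclei. The trace abundance and the atomic-hydrogen fractions are invented round numbers used only to show the size of the effect.

```python
def compare(f_atomic_h, x_trace=1.0e-6, he_per_h=0.085):
    """Mole fraction vs abundance relative to H2 for a trace species.

    f_atomic_h : fraction of hydrogen nuclei in atomic form (0 = fully molecular).
    Number densities are expressed per hydrogen nucleus.
    """
    n_h  = f_atomic_h                 # atomic hydrogen
    n_h2 = (1.0 - f_atomic_h) / 2.0   # molecular hydrogen
    n_tot = n_h + n_h2 + he_per_h + x_trace
    mole_fraction = x_trace / n_tot
    rel_to_h2 = x_trace / 0.5         # convention: n(H2) = (H nuclei) / 2
    return mole_fraction, rel_to_h2

for f in (0.0, 0.9):
    mf, rel = compare(f)
    print(f"atomic-H fraction {f:.1f}:  mole fraction {mf:.2e}   n(X)/n(H2) {rel:.2e}")
# Fully molecular gas: the two measures agree to within ~20% (the He contribution).
# Mostly atomic gas: they differ by roughly a factor of two, as stated above.
```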
Molecular nitrogen is predicted to be the major carrier of nitrogen regardless of the C/O ratio, although this has never been confirmed by observations due to the difficulty of detecting N2. Hydrocarbons like CH4 and C2H4 are calculated to be quite abundant at 10 R* in C-type atmospheres (see Fig. 2) and have been observed in IRC +10216 with abundances of the order of the predicted ones (Keady & Ridgway 1993; Fonfría et al. 2017). Also, carbon dioxide is calculated to have a mole fraction in the range 10^-8-10^-6 in M-type atmospheres (see Fig. 2), and it is observed with an abundance relative to H2 of 3×10^-7 in SW Vir (Tsuji et al. 1997). Sulfur is predicted to be largely in the form of molecules like CS and SiS in C-rich atmospheres, SiS in S-type stars, and H2S in O-rich atmospheres (see Fig. 2), which is essentially in agreement with observations (Danilovich et al. 2017, 2018; Massalkhi et al. 2019). Other S-bearing molecules predicted to be abundant in M-type atmospheres are SO and SiS, and these species are indeed observed with relatively high abundances in some O-rich envelopes (Bujarrabal et al. 1994; Schöier et al. 2007; Danilovich et al. 2016; Massalkhi et al. 2020). The radical SH is predicted to be relatively abundant in the hot inner regions of AGB atmospheres and has been observed through infrared observations in the atmosphere of the S-type star R And (Yamamura et al. 2000). Silicon monoxide (SiO) is predicted to be the most abundant Si-bearing molecule over the whole 1-10 R* range in the atmospheres of M stars. In S-type atmospheres the calculated abundance of SiO decreases by two orders of magnitude in the 1-5 R* region while keeping a very high abundance beyond, and the same occurs in C-rich atmospheres, although in this case the abundance drop in the 1-5 R* region is even more pronounced (see Fig. 2; see also Agúndez & Cernicharo 2006). Observations indicate that the abundance of SiO does not differ significantly between envelopes around M-, S-, and C-type stars, although in all of them the SiO abundance decreases with increasing mass loss rate (González Delgado et al. 2003; Schöier et al. 2006; Ramstedt et al. 2009; Massalkhi et al. 2019, 2020). This decline in the SiO abundance with increasing envelope density is not a consequence of chemical equilibrium (Massalkhi et al. 2019), but has been interpreted as evidence of SiO disappearing from the gas phase at high densities as it is incorporated into dust grains (González Delgado et al. 2003; Schöier et al. 2006; Ramstedt et al. 2009; Massalkhi et al. 2019, 2020). It thus seems that the gradual abundance decline calculated for SiO in the 1-5 R* region as one moves from star type M → S → C does not have a direct consequence on the SiO abundance that is injected into the expanding wind. However, this behavior predicted by chemical equilibrium probably explains why SiO masers are observed in M-type stars but not toward carbon stars (e.g., Pardo et al. 2004). Apart from these details, chemical equilibrium and observations agree that SiO is one of the most abundant carriers of silicon in the atmospheres of M-, S-, and C-type stars. Calculations and observations also agree for SiS, in that it is an abundant molecule regardless of the C/O ratio. However, observations point to a differentiation between C- and O-rich envelopes, with SiS being on average one order of magnitude more abundant in carbon-rich sources (Schöier et al. 2007; Danilovich et al.
2018; Massalkhi et al. 2019, 2020). Moreover, in some oxygen-rich envelopes the fractional abundance of SiS relative to H2 is as low as ∼10^-8, well below the predictions of chemical equilibrium (Danilovich et al. 2019; Massalkhi et al. 2020). There are two silicon-bearing molecules, SiC2 and Si2C, that according to chemical equilibrium become quite abundant in C-rich atmospheres (Tejero & Cernicharo 1991; […]).

[Table 2: Abundances (relative to H2) of parent molecules other than H2 and CO derived from observations of M, S, and C stars.]

With regard to phosphorus, chemical equilibrium predicts that HCP is a major carrier in C-type atmospheres, while PO dominates to a large extent in M-type stars (Agúndez et al. 2007; Milam et al. 2008). The two molecules have been detected in the corresponding environments, confirming this point (Agúndez et al. 2007; Ziurys et al. 2018). Calculations also predict a relative abundance of the order of 10^-10 for PN in C-rich atmospheres, a value in agreement with the abundance derived in the C-star envelope IRC +10216 (Milam et al. 2008). The halogen elements fluorine and chlorine are predicted to be largely in the form of HF and HCl in the inner regions of AGB atmospheres, regardless of the C/O ratio (see Fig. 2). The fact that most F is predicted to be in the form of HF has been used to derive fluorine abundances in carbon stars by observing the v = 1-0 vibrational band of HF (Abia et al. 2010). An independent measurement of the abundance of HF was provided by the detection of the J = 1-0 rotational transition in the C-rich envelope IRC +10216 (Agúndez et al. 2011). The abundance derived in that study was found to be ∼10% of the value expected if fluorine is mostly in the form of HF with a solar abundance, which was interpreted in terms of depletion onto dust grains. Agúndez et al. (2011) also reported observations of low-J transitions of HCl in IRC +10216 and derived an abundance for HCl of 15% of the solar abundance of chlorine, while Yamamura et al. (2000) derived an even lower abundance for HCl in the S-type star R And from observations of ro-vibrational lines. The missing chlorine could be depleted onto dust grains or be in atomic form. Given the variation of the chemical equilibrium abundances of HF and HCl with radius (see Fig. 2) and the uncertainties in the abundances derived from observations, we can consider that calculations and observations agree in that HF and HCl are important carriers of fluorine and chlorine, respectively, in AGB atmospheres. However, observations of these two molecules in more sources are needed to better understand the chemistry of halogens in AGB atmospheres.

[Fig. 2 caption (fragment): … Table 2. Rectangles are located at different radii simply to facilitate visualization and with no other purpose. Empty rectangles correspond to cases in which the observed abundance agrees with any of the abundances calculated by chemical equilibrium in the 1-10 R* range (usually the maximum abundance). Filled rectangles indicate cases of severe disagreement (by several orders of magnitude) between observed and calculated abundances, while hatched rectangles indicate a significant disagreement (by more than one order of magnitude). The level of disagreement between the observed and the maximum calculated abundance is indicated by a vertical line.]
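As a worked example of the elemental bookkeeping used for HF above, the sketch below estimates the HF abundance expected if essentially all fluorine is in HF, together with a value ten times lower, as found in IRC +10216. The solar fluorine abundance is taken as log ε(F) ≈ 4.56 (of the order of the Asplund et al. 2009 value); this number is an assumption of the sketch and the result should be read as order-of-magnitude only.

```python
# Fluorine budget if all F is locked in HF, versus a ten-times-lower observed value.
log_eps_F = 4.56                      # assumed solar log eps(F), approximate
F_over_H = 10**(log_eps_F - 12.0)     # F/H by number (~3.6e-8)

HF_over_H2_expected = 2.0 * F_over_H              # per H2, assuming hydrogen fully molecular
HF_over_H2_observed = 0.1 * HF_over_H2_expected   # ~10% of the expectation (see text)

print(f"F/H                  = {F_over_H:.1e}")
print(f"HF/H2 if all F in HF = {HF_over_H2_expected:.1e}")
print(f"HF/H2 at 10% of that = {HF_over_H2_observed:.1e}  (suggests depletion onto grains)")
```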
Unlike in interstellar clouds, where metals are largely locked in dust grains, in envelopes around AGB stars a wide variety of metal-bearing molecules are observed. Many of them are in fact formed in the hot stellar atmosphere, where they are relatively abundant according to chemical equilibrium, and are later incorporated into the expanding envelope. Early observations of metal-containing molecules revealed the metal halides NaCl, KCl, AlCl, and AlF (Cernicharo & Guélin 1987), while the metal cyanides NaCN and KCN were detected later (Turner et al. 1994; […]). All these molecules were discovered in the C-star envelope IRC +10216 and some of them have been exclusively observed in this source. Other metal cyanides like MgNC or CaNC are observed in IRC +10216 (Kawaguchi et al. 1993; Guélin et al. 1993; Cernicharo et al. 2019a), although they are formed in the outer envelope and thus are not parent molecules. The parent character of NaCN has been confirmed through interferometric observations (Quintana-Lacaci et al. 2017), while in the case of KCN the parent character is merely suggested by chemical equilibrium, which predicts a behavior similar to that of NaCN in C-star atmospheres (see Fig. 2). In general, the abundances derived for NaCl, KCl, AlCl, AlF, NaCN, and KCN in IRC +10216 (Agúndez et al. 2012) are consistent with the expectations from chemical equilibrium calculations for C-rich AGB atmospheres. The metal halides NaCl and AlCl have also been observed in a few O-rich envelopes (Milam et al. 2007; Decin et al. 2017) with abundances that are in agreement with chemical equilibrium calculations. Some metal oxides have long been observed at optical and near-infrared wavelengths in the spectra of AGB stars. For example, TiO and VO are detected toward M- and S-type stars (Joyce et al. 1998) and the oxides of s-process elements ZrO, YO, and LaO are conspicuous in S-type stars (Keenan 1954). More recently, metal oxides like AlO, AlOH, TiO, and TiO2 have been detected toward M-type AGB stars through their rotational spectra, allowing their abundances to be constrained (Kamiński et al. 2016, 2017; Decin et al. 2017). In general, the abundances derived for AlO, AlOH, TiO, and TiO2 are in good agreement with the values calculated by chemical equilibrium for O-rich AGB atmospheres (see Fig. 2). These molecules are clearly parent molecules, as confirmed by interferometric observations, and their study is interesting because some of them may act as gas-phase precursors in the formation of dust around M stars.

Failures

In recent years, there has been an increasing number of observational discoveries regarding parent molecules in AGB envelopes that severely disagree with the predictions of chemical equilibrium. Several molecules are observed in envelopes around AGB stars of a given chemical type with abundances well above (by several orders of magnitude) the expectations from chemical equilibrium calculations. It is interesting to note that the disagreement with chemical equilibrium always goes in the sense of anomalously overabundant molecules. That is, there is no molecule predicted to be abundant that is observed to be much less abundant. These anomalously overabundant molecules are highlighted in blue in Table 2 and are also indicated by filled rectangles in Fig. 2, with a vertical line that shows the level of disagreement between observations and chemical equilibrium. One of the most noticeable failures of chemical equilibrium concerns H2O.
This molecule is predicted and observed to be very abundant in O-rich envelopes, but in S- and C-type stars water is predicted to have a negligible abundance and should not be observable, although it is detected with a relatively high abundance (see Fig. 2). The story of the water problem started with the detection of the low energy rotational line at 557 GHz in the carbon star IRC +10216 with the space telescopes SWAS and Odin (Melnick et al. 2001; Hasegawa et al. 2006). Different scenarios were hypothesized to explain this unexpected discovery. Among them were the sublimation of cometary ices from a putative Kuiper belt analog (Melnick et al. 2001), Fischer-Tropsch catalysis on iron grains (Willacy 2004), and formation in the outer envelope through the radiative association of H2 and O (Agúndez & Cernicharo 2006). The subsequent detection with Herschel of many high energy rotational lines of H2O in IRC +10216 ruled out these three scenarios and constrained the formation region of water to the very inner regions of the envelope, within 5-10 R*. In fact, the presence of water in C-rich envelopes was found to be a common phenomenon and not restricted to IRC +10216 (Neufeld et al. 2011). Moreover, the problem of water in carbon stars has been extended to S-type AGB stars with the detection of abundant H2O in a couple of sources (Schöier et al. 2011; Danilovich et al. 2014). As illustrated in Fig. 2, the water problem is a problem of more than 4-5 orders of magnitude. Ammonia is also a remarkable example of chemical equilibrium breakdown in AGB atmospheres. The chemical equilibrium abundance of NH3 is vanishingly small regardless of the C/O ratio (see Fig. 2). Nonetheless, infrared observations of ro-vibrational lines and Herschel observations of the low energy rotational line at 572 GHz have outlined a scenario in which NH3 is fairly abundant, with abundances relative to H2 in the range 10^-7-10^-5, in envelopes around M-, S-, and C-type AGB stars (Danilovich et al. 2014; Schmidt et al. 2016; Wong et al. 2018). The formation radius of NH3 is constrained to be in the range 5-20 R* in the C-star envelope IRC +10216 (Keady & Ridgway 1993; Schmidt et al. 2016). For ammonia, the disagreement with chemical equilibrium is a problem of more than six orders of magnitude. The carbon-bearing molecules HCN and CS are, together with C2H2, the major carriers of carbon in C-type atmospheres. Indeed, HCN and CS are observed with high abundances in C-rich envelopes, in agreement with expectations from chemical equilibrium. In the atmospheres of S-type stars, the two molecules are also predicted to be relatively abundant, although with somewhat lower abundances than in C-type atmospheres, something that is observationally verified (see Fig. 2). Both HCN and CS show a clear chemical differentiation depending on the C/O ratio, because observed abundances in oxygen-rich envelopes are systematically lower than in C-rich sources by about two orders of magnitude (Bujarrabal et al. 1994; Schöier et al. 2013; Danilovich et al. 2018; Massalkhi et al. 2019). Although the observed abundances of HCN and CS in O-rich envelopes are below those in C-rich envelopes, they are still much higher than expected from chemical equilibrium, which for C/O < 1 predicts negligible abundances for these two molecules. According to the interferometric observations by Decin et al.
(2018a), HCN is formed at radii smaller than about 6 R* in R Dor, while in the case of CS, the abundance profiles derived for IK Tau, W Hya, and R Dor point to a formation radius of a few stellar radii (Danilovich et al. 2019). The difference between observed abundances and the values calculated by chemical equilibrium is large: more than four orders of magnitude for CS and more than five orders of magnitude for HCN (see Fig. 2). Sulfur dioxide (SO2) has long been observed in oxygen-rich AGB envelopes (Omont et al. 1993). It was originally thought to be formed in the outer layers of the envelope, but infrared observations of the ν3 band at 7.3 µm by Yamamura et al. (1999) and observations of high energy rotational lines (Velilla Prieto et al. 2017) showed that SO2 in fact originates in the inner regions of the envelope, with abundances of the order of 10^-6 relative to H2. The result is striking because the chemical equilibrium abundance predicted for SO2 in M-type atmospheres is lower by at least 3-4 orders of magnitude (see Fig. 2). Velilla Prieto et al. (2017) point to a formation radius for SO2 in IK Tau in the range 1-8 R*, although interferometric observations would be desirable. Apart from H2O and NH3, there are two other hydrides, silane (SiH4) and phosphine (PH3), which are observed in C-rich envelopes with abundances well above what chemical equilibrium predicts. Silane was detected in the carbon star IRC +10216 via infrared observations of ro-vibrational lines (Keady & Ridgway 1993), while phosphine has been observed toward the same carbon star through observations of low energy rotational transitions (Agúndez et al. 2008). The formation radii of SiH4 and PH3 in IRC +10216 are not well constrained, although Keady & Ridgway (1993) favor a distribution in which SiH4 is present from ∼40 R*. Thus, silane can be considered a late parent species because of its large formation radius. The discrepancy between the abundances derived from observations and the values calculated with chemical equilibrium is > 6 orders of magnitude for SiH4 and PH3. It is curious to note that all the anomalously overabundant molecules in C-type AGB stars are hydrides. The same is not true for M-type AGB stars. There are a couple of molecules for which there is an important discrepancy between the abundances derived from observations and those calculated by chemical equilibrium, although it is not as severe as for the molecules discussed above. These are PN in O-rich stars and H2S in C-rich stars, which are indicated by hatched rectangles in Fig. 2. With regard to PN in O-rich AGB atmospheres, the disagreement between the observed abundances, (1-2)×10^-8 (Ziurys et al. 2018), and the maximum chemical equilibrium abundance calculated is almost three orders of magnitude. However, uncertainties on both the observational and theoretical sides make the true level of disagreement unclear. For example, while Ziurys et al. (2018) derive a PN abundance of 10^-8 relative to H2 in IK Tau, De Beck et al. (2013) and Velilla Prieto et al. (2017) derive higher abundances, (3-7)×10^-7, in this source. If we give preference to these latter abundances, then the level of disagreement would be even higher. On the other hand, the formation enthalpy of PN is rather uncertain (see Lodders 1999), which translates directly into the chemical equilibrium abundance calculated.
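The sensitivity of a calculated equilibrium abundance to the adopted formation enthalpy can be estimated with a one-line argument: lowering ΔfH° of a trace species by δ lowers the Gibbs energy of its formation by roughly the same amount and thus raises its equilibrium abundance by about exp(δ/RT), if the entropy is unchanged. The sketch below applies this to a hypothetical shift of 20 kJ mol^-1 at temperatures representative of the inner atmosphere; the size of the shift is an illustrative assumption, not the actual uncertainty of the PN thermochemistry.

```python
import math

R_KJ = 8.314462618e-3   # gas constant in kJ mol^-1 K^-1

def abundance_boost(delta_h_kj, T):
    """Approximate factor by which a trace-species equilibrium abundance rises
    when its formation enthalpy is lowered by delta_h_kj (kJ/mol) at temperature T (K)."""
    return math.exp(delta_h_kj / (R_KJ * T))

for T in (1500.0, 2000.0, 2500.0):
    print(f"T = {T:6.0f} K   20 kJ/mol lower enthalpy -> abundance x{abundance_boost(20.0, T):.1f}")
```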
In this study we adopted the thermochemical data for PN from Lodders (1999), who gives preference to a formation enthalpy at 298.15 K of 171.5 kJ mol^-1, while other compilations like JANAF use lower values that would result in higher chemical equilibrium abundances for PN, reducing the level of disagreement. In the case of H2S in C-rich AGB stars, the maximum chemical equilibrium abundance calculated is 7×10^-11, while the value derived from observations is ∼50 times larger. In this case, the observed abundance is based on the detection of just one line in just one source (see Agúndez et al. 2012), and thus it has to be viewed with some caution. In summary, the main failures of chemical equilibrium to account for the observed abundances of parent molecules in circumstellar envelopes are NH3, HCN, CS, SO2, and possibly PN in M-type stars, H2O and NH3 in S-type stars, and the hydrides H2O, NH3, SiH4, PH3, and perhaps also H2S in C-type stars. The large discrepancies between the abundances derived from observations and those calculated with chemical equilibrium necessarily imply that non-equilibrium chemical processes must be at work in AGB atmospheres. Any non-equilibrium scenario invoked must account for all these anomalously overabundant molecules, but it must also reproduce the rest of the molecular abundances that are reasonably well explained by chemical equilibrium. For the moment no scenario provides fully satisfactory agreement with observations, although two mechanisms that can drive the chemical composition out of equilibrium have been proposed. The first scenario involves shocks produced in the extended atmosphere as a consequence of the pulsation of the AGB star. A model based on this scenario was originally developed to study the inner wind of the carbon star IRC +10216 (Willacy & Cherchneff 1998) and the oxygen-rich Mira star IK Tau (Duari et al. 1999), and was later generalized by Cherchneff (2006) to AGB winds with different C/O ratios. The main success of these models was that they produced high abundances of HCN and CS in O-rich winds, although neither H2O in C-rich winds nor NH3 (regardless of the C/O ratio) was efficiently formed. Subsequent models in which the chemical network was modified resulted in a relatively high abundance of H2O in C-rich AGB winds (Cherchneff 2011, 2012). In this new chemical network, however, the rate constants of reverse reactions do not obey detailed balance, which may affect the predicted abundance of water. Similar models performed by Marigo et al. (2016) found that in O-rich atmospheres HCN is only formed with abundances of the order of the observed ones when nearly isothermal shocks are considered. A different scenario, consisting of photochemistry driven by interstellar ultraviolet photons that penetrate into the inner envelope through a clumpy envelope, was proposed by Decin et al. (2010) to explain the formation of water in the inner envelope of IRC +10216. The scenario was later generalized to AGB envelopes with different C/O ratios by Agúndez et al. (2010). In these models, warm photochemistry is able to efficiently form H2O, NH3, and H2S in the inner regions of C-rich envelopes, while NH3, HCN, and CS are synthesized in the inner layers of O-rich envelopes. Similar models based on a different formalism were carried out by Van de Sande et al. (2018), who found similar qualitative results. Later on, Van de Sande et al.
(2019) found that ultraviolet photons from the AGB star itself could also lead to some photochemistry for sufficiently high stellar temperatures and degrees of clumpiness. Although the two scenarios are promising, in that they result in an enhancement of some of the anomalously overabundant parent molecules observed in AGB envelopes, their main problem is that they are quite parametric, i.e., they depend on parameters such as the shock strength or the degree of clumpiness that are poorly constrained by observations.

S-type atmospheres: sensitivity to C/O ratio

The chemical equilibrium calculations for an S-type atmosphere shown in Fig. 2 assume that the elemental C/O ratio is exactly one. However, S stars may have C/O ratios ranging from slightly oxygen-rich to slightly carbon-rich. Given the marked chemical differentiation between C/O < 1 and C/O > 1, here we explore how sensitive molecular abundances are to slight changes in the C/O ratio. This gives an idea of how diverse the atmospheric composition could be among S stars with slightly different C/O ratios. In Fig. 3 we show the abundances of some important parent molecules as a function of radius for C/O ratios in the range 0.98-1.02. These calculations include only gaseous species. Over such a narrow range, some molecules experience large abundance variations (e.g., C2H2 and SiC2), while other species remain almost insensitive to C/O (e.g., NH3, H2S, and SiS). The sensitivity to the C/O ratio is more pronounced in the inner atmosphere, while at radii larger than ∼5 R* molecules tend to show little abundance variation with C/O. Thus, if the abundances of parent molecules are set by the chemical equilibrium values in the hot inner regions, we should expect important abundance variations from source to source, reflecting the diversity of C/O ratios, from slightly below one to slightly above one. This would be most noticeable for molecules like C2H2, HCN, CS, SiO, and SiC2, for which the predicted abundances range from moderately high to very low depending on whether the C/O ratio is above or below one. Note that the severe disagreement found between chemical equilibrium and observations for H2O and NH3 in S-type stars (see discussion in Sec. 3.2) persists when the C/O ratio is allowed to vary slightly around one. In general, carbon-bearing molecules show up more than oxygen-bearing molecules for C/O ratios around one. For example, C-bearing molecules like HCN, C2H2, and CH4 maintain moderately high abundances over certain radii in slightly oxygen-rich conditions, while the only O-bearing molecule that is present with a non-negligible abundance under slightly carbon-rich conditions is SiO. Other O-bearing molecules like H2O and SO need C/O ratios well below one to reach moderately high abundances. The reason for this behavior is illustrated in Fig. 4.

[Fig. 4 caption: The left and middle panels show the fraction of C and O, respectively, that is not locked into CO as a function of radius and C/O ratio, as calculated by gas-phase chemical equilibrium. To illustrate that the vast majority of the oxygen not locked into CO is trapped by SiO for C/O ratios around one, the right panel shows the fraction of oxygen that is not locked by CO or SiO. Chemical equilibrium calculations include only gaseous species.]
It is seen that for radii larger than ∼4 R*, when the gas is slightly O-rich (C/O ratios in the range 0.96-1.00), a few percent of the carbon is not locked by CO and goes into C-bearing molecules like C2H2, HCN, and CH4. If we now focus on slightly C-rich conditions, we see that for radii larger than ∼4 R* a few percent of the oxygen is not trapped by CO. However, in this case, the oxygen not locked by CO is mostly in the form of SiO. In fact, SiO competes with CO for the oxygen over a wide range of C/O ratios, and this is at the origin of the relatively high abundance of SiO in C-rich AGB atmospheres. Apart from SiO, no other O-bearing molecule shows up with a significant abundance under slightly C-rich conditions. In summary, C-bearing molecules compete more efficiently with CO for the carbon than O-bearing molecules (other than SiO) do for the oxygen. The consequence is that the chemical equilibrium composition of S-type atmospheres resembles that of carbon stars more closely than that of M-type stars.

Potentially detectable molecules

The gas-phase budget of the different elements included in the chemical equilibrium calculations in M-, S-, and C-type atmospheres is discussed in detail in Appendix A. The most abundant molecular reservoirs of the non-metal elements (C, O, N, Si, S, P, F, Cl, and B) are usually observed in AGB envelopes with abundances in agreement with the predictions of chemical equilibrium. The only cases where major molecular reservoirs are not detected correspond to non-polar molecules like N2 or molecules containing elements with a very low abundance, like B. There are, however, several molecules predicted to be relatively abundant which have not yet been detected, and thus observations can still be used to test the predictions of chemical equilibrium. In Table 3 we present a list of molecules that have not yet been observed in AGB atmospheres but are predicted with non-negligible abundances, and thus are potentially observable. We generally include molecules for which the maximum calculated mole fraction over the 1-10 R* range is ≥ 10^-10. These chemical equilibrium calculations include only gaseous species. We also list the electric dipole moment of each molecule and indicate whether the rotational spectrum has been measured in the laboratory.

[Table 3: Potentially detectable molecules in AGB atmospheres.]

Some of these molecules are good targets for detection through high-angular resolution and sensitive observations using observatories like ALMA. Some molecules are more favorable for detection than others. Factors that play against detection are a low abundance, a low dipole moment, a complex rotational spectrum (which results in spectral dilution), and a low spatial extent restricted to the photosphere and its near surroundings. The latter may happen for some radicals, which are only abundant in the very inner atmosphere and may be converted into more stable molecules at larger radii by non-equilibrium chemistry, and for metal-bearing molecules, which can be severely depleted from the gas phase at relatively short radii as they are incorporated into condensates. Sensitive high-angular resolution observations able to probe the very inner atmosphere will be the best way to observe these molecules. There are also some molecules for which the chances of detection are uncertain because they only reach high abundances at large radii, close to 10 R*, where chemical equilibrium is less likely to hold due to the lower temperatures and pressures.
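The selection logic behind Table 3 can be written compactly: keep species whose peak calculated mole fraction over 1-10 R* is at least ∼10^-10 and note the practical handicaps, chiefly a low or vanishing dipole moment. The toy screen below uses the approximate mole fractions and dipole moments quoted elsewhere in this section (CaS, SiNH, SiCl, O2); it is meant only to show the filtering idea, not to reproduce Table 3.

```python
# Toy detectability screen in the spirit of Table 3.
candidates = {
    # name : (peak mole fraction in 1-10 R*, dipole moment in debye)
    "CaS":  (4e-8, 10.47),
    "SiNH": (1e-8,  0.34),
    "SiCl": (1e-9,  1.0),   # text: dipole moment "in excess of 1 D"
    "O2":   (5e-8,  0.0),   # no permanent electric dipole
}

ABUNDANCE_CUT = 1e-10       # inclusion threshold quoted for Table 3

for name, (x_peak, mu) in sorted(candidates.items(), key=lambda kv: -kv[1][0]):
    if x_peak < ABUNDANCE_CUT:
        continue            # too scarce to be listed
    if mu == 0.0:
        note = "no electric dipole -> no electric-dipole rotational lines"
    elif mu < 0.5:
        note = "low dipole moment, intrinsically weak lines"
    else:
        note = "favorable for a radio search"
    print(f"{name:5s}  x_peak = {x_peak:.0e}   mu = {mu:5.2f} D   {note}")
```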
Non-metal molecules

The hydride radicals CH, CH2, and NH reach their maximum abundance at the photosphere and show a marked abundance fall-off with increasing radius. The chances of detecting these species depend on whether the photospheric abundance can be maintained throughout the extended atmosphere and injected into the expanding wind, or whether these radicals are chemically processed and converted into more stable molecules like methane and ammonia. If their presence is restricted to the innermost atmosphere, observations at high angular resolution or in the infrared domain could allow them to be probed. The hydrides SiH and PH are also abundant at the photosphere, but have a more extended distribution than the above three radicals, which makes it more likely that they survive the travel across the extended atmosphere. Their detection, however, is complicated by their low dipole moments. Molecular oxygen is listed in Table 3, although its detection seems difficult. It has a relatively low mole fraction (∼5×10^-8) over a narrow region restricted to the very inner atmosphere and, more importantly, its rotational transitions have very low intrinsic line strengths due to the lack of an electric dipole moment. Detection of O2 in AGB envelopes must probably await sensitive observations by future space telescopes. Among the silicon-carbon clusters, apart from the already known SiC2 and Si2C (see Table 2), there are two others, Si3C and Si5C, which are predicted to form with high abundances in S- and C-type atmospheres (see Appendix A.4 and the bottom panel of Fig. A.2). The thermochemical data for these species are taken from Deng et al. (2008). The low dipole moments calculated for Si3C and Si5C, however, play against their detection. In any case, detection of either of these two molecules must await the characterization of their rotational spectra in the laboratory. The silicon-carbon cluster Si2C2 is also predicted with a non-negligible abundance in S- and C-type atmospheres. However, this molecule has a non-polar rhombic structure in its ground state (Lammertsma & Güner 1988; Presilla-Márquez et al. 1995; Rintelman & Gordon 2001; Deng et al. 2008) and thus cannot be detected through its rotational spectrum. The molecule iminosilylene (SiNH) is predicted to be relatively abundant (mole fraction up to 10^-8) in atmospheres around S- and C-type stars. Thermochemical data for this species are taken from the Chemkin Thermodynamic Database (Kee et al. 2000), which assigns a formation enthalpy at 298.15 K of 160.6 kJ mol^-1 based on quantum calculations. An astronomical search is feasible because the rotational spectrum has been measured in the laboratory, although the calculated dipole moment is low (0.34 D; McCarthy et al. 2015). The silicon monohalides SiF and SiCl are also potentially detectable in AGB atmospheres. They are predicted with mole fractions up to 10^-10 and 10^-9, respectively, in S- and C-type atmospheres. Although the predicted abundances are not very high, the dipole moments in excess of 1 D (see Table 3) may help in their detection. Among the P-bearing molecules not yet observed in AGB atmospheres, PS has the highest chance of being detected. This molecule was already predicted to be the most abundant P-bearing molecule in O-rich atmospheres by Tsuji (1973), although searches for it have not been successful (Ohishi et al. 1988).
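One reason the dipole moment weighs so heavily is that the intrinsic strength of a rotational line scales as µ². Using dipole moments quoted in this section (0.34 D for SiNH, 0.565 D for PS, and 10.47 D for CaS, discussed below), the sketch compares relative line strengths; it ignores abundance, partition functions, and excitation, so it is only a first-order argument.

```python
# Relative intrinsic rotational line strengths, which scale as mu^2.
dipoles_debye = {"SiNH": 0.34, "PS": 0.565, "CaS": 10.47}   # values quoted in the text

mu_ref = dipoles_debye["SiNH"]
for name, mu in dipoles_debye.items():
    print(f"{name:4s}  mu = {mu:5.2f} D   line strength relative to SiNH: x{(mu/mu_ref)**2:7.1f}")
```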
According to our calculations, in oxygen-rich atmospheres PS is predicted to have an abundance as high as that of PO (which has already been observed), with the main difference being that PO locks most of the phosphorus in the 1.5-4.5 R* region while PS is the major reservoir of P somewhat farther out (in the 4.5-7 R* region), although still in the region of influence of chemical equilibrium. The dipole moment is not very high (0.565 D; Müller & Woon 2013), but the high predicted abundance should make its detection feasible. The P-bearing molecule HPO2 is also predicted with a high abundance in O-rich atmospheres, although only in the outer regions (> 8 R*), where chemical equilibrium is less likely to hold. The most stable isomer of HPO2 (cis HOPO) has been characterized spectroscopically in the laboratory (O'Sullivan et al. 2006), although the pure rotational spectrum has not been directly measured. There are various B-containing molecules which are predicted to lock most of the boron, depending on the region of the atmosphere and the C/O ratio. Concretely, BO, HBO, and HBO2 are predicted to be major reservoirs of boron in O-rich atmospheres, while in S- and C-type atmospheres HBO and BF are the main molecular carriers of this element. The major handicap to detecting them is the low elemental abundance of boron (5×10^-10 relative to H), although the fact that these molecules are major reservoirs of boron, together with their high dipole moments (except for BF), should make it feasible to observe some of them and put some constraints on the so far unexplored chemistry of boron.

Metal-bearing molecules

The observation of metal-containing molecules is complicated by several facts. First, many of the metals are present at trace levels. Second, in many cases the main gas-phase reservoir of the metal is atomic, and molecules are predicted at a lower level. Third, metals have a refractory character and thus high condensation temperatures, which makes them easily leave the gas phase to form condensates. In spite of these difficulties, a variety of metal-bearing molecules have been detected with abundances in agreement with expectations from chemical equilibrium. Currently, constraints on the molecular budget of metals in the atmospheres of AGB stars are restricted to Na, K, Al, and Ti (see Table 2), although there is still margin for further characterization of the molecular reservoirs of other metals. Apart from the Al-bearing molecules known in AGB atmospheres (AlCl, AlF, AlO, and AlOH), there are some others that could be detected. Aluminum monohydride (AlH) is predicted with a non-negligible abundance in M-, S-, and C-type atmospheres. In fact, this molecule has been detected in the O-rich star o Cet through optical observations (Kamiński et al. 2016), although no abundance was derived. The rotational spectrum of AlH is known from the laboratory (Halfen & Ziurys 2004) and thus it can be searched for at (sub-)mm wavelengths. However, the low dipole moment (0.30 D; Matos et al. 1988) and the fact that the two lowest rotational transitions cannot be observed from the ground, because they lie close to atmospheric water lines, play against its detection. The molecules AlS and AlOF are also potentially detectable in O-rich atmospheres, as they are predicted with non-negligible abundances (mole fractions up to 10^-10 and 10^-8, respectively). Other Al-bearing molecules are predicted to be quite abundant, like Al2O and Al(OH)3 in M stars and Al2C2 in C stars (see the bottom panel of Fig. A.5).
However, their structures are predicted to be highly symmetric with zero or very low dipole moments (Turney et al. 2005; Wang & Andrews 2007; Cannon et al. 2000; Naumkin 2008; Dong et al. 2010), which makes it very unlikely, if not impossible, to detect them via their rotational spectra. Magnesium is predicted to be essentially in the form of neutral atoms in AGB atmospheres. There are, however, several Mg-bearing molecules observed in the C-rich AGB envelope IRC +10216, such as MgNC, MgCN, HMgNC, MgC3N, MgC2H, and MgC4H (Kawaguchi et al. 1993; Guélin et al. 1993; Ziurys et al. 1995; Cabezas et al. 2013; Agúndez et al. 2014c; Cernicharo et al. 2019b), although they are formed in the outer envelope and thus are not parent molecules. Among Mg-bearing parent molecules, the most promising candidates for detection are MgS and MgO, which are predicted with non-negligible abundances in O-rich atmospheres and have fairly high dipole moments (Fowler & Sadlej 1991; Büsener et al. 1987). In the case of calcium, neutral atoms are also the major reservoir in AGB atmospheres. However, there are some molecules which are predicted to trap a fraction of Ca in the 3-10 R* range, reaching non-negligible abundances. This is the case of CaS, which is predicted to have a mole fraction as high as 4×10^-8 in O-rich atmospheres. Its very high dipole moment (10.47 D; see Table 3) makes it a very interesting candidate for detection. The hydroxides CaOH and Ca(OH)2 are also predicted to be abundant in the outer parts of O-rich atmospheres. While CaOH has a dipole moment of 1.465 D (Steimle et al. 1992), Ca(OH)2 is predicted to be linear with a very low or zero dipole moment (Wang & Andrews 2005; Vasiliu et al. 2010). The calcium halides CaCl, CaCl2, CaF, and CaF2 are also predicted with non-negligible abundances, especially in oxygen-rich atmospheres. Among them, CaCl, CaF, and CaF2 have high dipole moments, but CaCl2 is predicted to be linear and thus non-polar (Vasiliu et al. 2010). Ziurys et al. (1994) searched without success for CaF toward the C-rich AGB star IRC +10216. Our chemical equilibrium calculations (see the third panel in Fig. A.7) indicate that CaF only reaches non-negligible abundances in O-rich atmospheres. Concerning the trace metals Ba, Sc, Zr, and V, there are some molecules, mainly oxides, which are potentially detectable through their rotational spectra in oxygen-rich atmospheres because they are predicted with mole fractions in the range 10^-10-10^-8 and have fairly large dipole moments. These molecules are BaO, BaS, ScO, ScO2, ZrO, ZrO2, VO, and VO2. Some of them, like ZrO and VO, have long been known to be present in the atmospheres of S- and M-type AGB stars from observations at optical and near-infrared wavelengths (Keenan & Schroeder 1952; Keenan 1954; Joyce et al. 1998), and various absorption bands in the spectra of S-type stars have been assigned to BaO by Dubois (1977). Moreover, ScO has been observed in the optical spectrum of V1309 Sco, a remnant of a stellar merger whose conditions resemble those of AGB outflows (Kamiński et al. 2015). However, these oxides (BaO, ScO, ZrO, and VO) have not yet been detected through their rotational spectra, which would allow their abundances to be derived. Other molecules like BaCl2, Sc2O, Sc2O2, and ZrCl2 only reach mole fractions of the order of 10^-10 at large radii (∼10 R*) in certain AGB atmospheres, although in this region it is more uncertain that chemical equilibrium prevails.
The vanadium-oxygen cluster V4O10 is predicted to be the major carrier of V in oxygen-rich atmospheres beyond ∼7 R* (see the bottom panel in Fig. A.8). However, this highly symmetric cluster is predicted to be nonpolar according to quantum chemical calculations at the B3LYP/cc-pVTZ level (with a pseudopotential for V) and thus cannot be observed through its rotational spectrum. The large titanium-carbon cluster Ti8C12 is predicted to lock most of the titanium in S- and C-type atmospheres beyond 2-3 R* (see the second panel in Fig. A.8). However, its large size is a handicap for detecting it through its rotational spectrum, which is likely to be crowded with lines, so that spectral dilution is a serious issue. Three Cr-bearing molecules (CrS, CrO, and CrCl) are potentially detectable in AGB stars. Although chromium is mostly in atomic form, these molecules are predicted with mole fractions in the range 10^-10-10^-9 in M-type atmospheres, and all of them have quite high dipole moments. In fact, CrO has been observed in the optical toward the stellar-merger remnant V1309 Sco, an object where other oxides like TiO, VO, ScO, and AlO have also been found (Kamiński et al. 2015). These observations support the idea that CrO could plausibly be detected at radio wavelengths in O-rich AGB stars. The transition metal hydrides MnH and CoH are also on the list of potentially detectable molecules. Chemical equilibrium predicts that MnH is present with a uniform mole fraction of ∼10^-10 in AGB atmospheres, regardless of the C/O ratio (see the second panel in Fig. A.9). The high dipole moment of MnH (10.65 D; Koseki et al. 2006) may help in its detection. In the case of CoH, the dipole moment is lower (1.88 D; Wang et al. 2009), but this hydride is predicted to be the major carrier of Co in AGB atmospheres of any chemical type and thus it should be present with a fairly large abundance. Although there is certainly a problem of incompleteness in the Co-bearing molecules included in the calculations, which could affect the predicted abundance of CoH (see Sec. A.13), this molecule is a very interesting target for future searches in AGB atmospheres, once the rotational spectrum is measured in the laboratory. Iron and nickel are, as many other transition metals, predicted to be mostly in the form of neutral atoms. However, in O-rich atmospheres the sulfides FeS and NiS reach relatively high mole fractions (up to ∼4×10^-8 and ∼4×10^-7, respectively), which, together with the high dipole moments of these molecules, make them attractive candidates for detection. Iron monoxide (FeO) is also calculated with a non-negligible mole fraction (up to ∼3×10^-10) in M-type atmospheres, and thus could be detectable given its high dipole moment. In fact, there has been a recent claim of detection of FeO in the oxygen-rich AGB star R Dor using ALMA (Decin et al. 2018b). The inferred abundance relative to H2 is a few times 10^-8, about two orders of magnitude above the predictions of chemical equilibrium. Further observations are required to unambiguously establish the presence of FeO and derive its abundance. The molecule Fe(OH)2 is predicted to have a rising abundance with increasing radius, reaching a mole fraction of ∼10^-8 at 10 R* (see the third panel in Fig. A.9). However, this molecule is predicted to have a linear O-Fe-O structure, and thus should have a very low dipole moment (Wang & Andrews 2006).
Observational constraints

It is well known that solid dust grains are formed in AGB atmospheres, and that the later ejection of this material into the interstellar medium constitutes the major source of interstellar dust in the Galaxy (Gehrz 1989). Infrared observations have allowed a few solid compounds to be identified in AGB envelopes, although the identification of some of them is still under discussion (see the reviews by Molster et al. 2010 and Waters 2011). The observational situation of condensates identified in AGB envelopes is summarized in Table 4. The dust in oxygen-rich AGB envelopes is mainly composed of silicates and oxides. Amorphous silicate is widely observed through the 9.7 µm band, and crystalline silicates of the olivine (Mg(2-2x)Fe2xSiO4) and pyroxene (Mg(1-x)FexSiO3) families have also been identified through narrow bands at mid- and far-IR wavelengths (Waters et al. 1996; Blommaert et al. 2014). Alumina (Al2O3), a highly refractory condensate, is also observed at infrared wavelengths (Onaka et al. 1989) and has also been identified in presolar grains (Nittler et al. 1994; Hutcheon et al. 1994). There is also evidence of Mg-Fe oxides of the type Mg1-xFexO with a high content of Fe (Posch et al. 2002). Another very refractory condensate is hibonite (CaAl12O19), which has been identified in presolar grains (Choi et al. 1999). Spinel (MgAl2O4) has been proposed as a constituent of dust in O-rich envelopes (Posch et al. 1999), although the identification has been questioned (DePew et al. 2006; Zeidler et al. 2013). Further evidence for the presence of spinel comes from the analysis of presolar grains in meteorites (Nittler et al. 1997). Heras & Hony (2005) found evidence for the presence of gehlenite (Ca2Al2SiO7) based on modeling of the spectral energy distributions of 28 O-rich AGB stars in the 2.4-45.2 µm range. The presence of metallic iron grains has also been inferred in the oxygen-rich envelope OH 127.8+0.0 (Kemper et al. 2002). However, the identification is particularly uncertain because Fe lacks spectral features and is only recognized through an excess of opacity in the 3-8 µm range. In carbon-rich AGB envelopes, dust is mostly composed of carbon, either amorphous or in the form of graphite. These two materials do not have spectral features but provide a smooth continuum at IR wavelengths. Martin & Rogers (1987) modeled the spectral energy distribution of the prototypical carbon star IRC +10216 and found that amorphous carbon, rather than graphite, is the major form of carbonaceous dust in C-rich envelopes. Graphite must also be present to some extent, as it has been identified in presolar meteoritic material with isotopic ratios pointing to formation in the outflows of C-rich AGB stars (Amari et al. 1990). Silicon carbide (SiC) dust is widely identified toward C stars through a band centered at 11.3 µm (Treffers & Cohen 1974), and presolar SiC grains have also been identified in carbonaceous meteorites (Bernatowicz et al. 1987). Goebel & Moseley (1985) proposed MgS as the carrier of a band observed at 30 µm. The assignment to MgS has been disputed by Zhang et al. (2009), who argued that the amount of MgS required to reproduce the observed band strength implies a sulfur abundance higher than solar. This problem vanishes if MgS is only present in the outer layers of the grains, as originally proposed by Zhukovska & Gail (2008). Currently, MgS remains the best candidate for the 30 µm feature (Sloan et al. 2014).
Finally, there is strong evidence of the presence of TiC in grains formed in C-rich AGB ejecta from the analysis of presolar material in meteorites (Bernatowicz et al. 1991). The chemical composition of dust around S stars appears to contain features of both O-rich and C-rich stars. The study of Hony et al. (2009) revealed the presence of the 30 µm band attributable to MgS, the amorphous silicate band, which however appears shifted from 9.7 µm to redder wavelengths and is proposed to be due to non-stoichiometric silicates, and a series of emission bands in the 20-40 µm range which were tentatively assigned to diopside (MgCaSi2O6). The presence of alumina (Al2O3) and gehlenite (Ca2Al2SiO7) was also inferred by Smolders et al. (2012) by modeling the spectral energy distributions of a large sample of S stars.

Expectations from chemical equilibrium

Chemical equilibrium calculations can be very informative about the types of condensates expected to form in AGB atmospheres and about the sequence in which they are expected to appear (Sharp et al. 1995; Lodders & Fegley 1997, 1999; Gail & Sedlmayr 2013). Our main motivation for revisiting the subject here is twofold. First, we aim to cross-check our calculations against previously published results. Second, we seek to establish a condensation sequence in M-, S-, and C-type atmospheres using a realistic pressure-temperature profile that serves as a starting point to discuss the most likely gas-phase precursors of selected condensates. Here we present results from chemical equilibrium calculations in which condensates are considered. We collected thermochemical data for 185 condensed species. If all condensates are included simultaneously in the calculations, then, when multiple condensates having elements in common are thermodynamically favorable, only the most stable ones form at the expense of others that may never show up because their constituent elements are trapped by other, more stable, compounds. To circumvent this problem of competition between condensates with elements in common, the calculations are run including only one condensed species at a time. This allows a complete condensation sequence to be obtained, with no missing condensates, and to be compared with previously published condensation sequences. For these calculations, we adopted the elemental composition given in Table 1 and the pressure-temperature profile discussed in Sec. 2.4. Carbonaceous dust in C-rich envelopes should be mostly in the form of amorphous carbon (Martin & Rogers 1987). However, thermochemical data are not available for amorphous carbon and thus we use graphite as a proxy for it. Similarly, there are no thermochemical data for amorphous silicates or crystalline silicates with varying Mg/Fe ratios, which are observed in O-rich envelopes (Woolf & Ney 1969; Waters et al. 1996). Therefore, forsterite (Mg2SiO4) and enstatite (MgSiO3) are used in the chemical equilibrium calculations as proxies of olivine, pyroxene, and amorphous silicate. Similarly, we lack thermochemical data for the Mg-Fe oxide Mg0.1Fe0.9O identified in M stars. Nevertheless, the oxides MgO and FeO are included, and the latter is used as a proxy of Mg0.1Fe0.9O. We note, however, that there could be significant differences between the thermochemical properties of amorphous carbon and graphite, of amorphous silicates and crystalline forsterite and enstatite, and of Mg0.1Fe0.9O and FeO, which could lead to some changes in the condensation sequence calculated here for M-, S-, and C-type atmospheres.
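The one-condensate-at-a-time strategy described above maps naturally onto a simple driver loop. The sketch below uses a toy condensed-fraction function in place of a real equilibrium solver: each solid is assumed to be fully condensed below a single condensation temperature. The temperatures are placeholders, except the 628 K quoted later for MgS in C-type atmospheres, and the T(r) power law is the one adopted in Sec. 2.4; the point is only the bookkeeping, not the chemistry, and this is not the code used in the study.

```python
import numpy as np

# Placeholder condensation temperatures (only MgS's 628 K is quoted in the text)
TOY_COND_TEMP = {"carbon": 1700.0, "TiC": 1650.0, "SiC": 1500.0, "MgS": 628.0}

radii = np.linspace(1.0, 10.0, 1000)     # radius grid in R*
temps = 2500.0 * radii**(-0.6)           # adopted T(r) power law

def toy_condensed_fraction(solid, T):
    """Stand-in for an equilibrium solver: fully condensed below T_cond."""
    return np.where(T < TOY_COND_TEMP[solid], 1.0, 0.0)

def condensation_sequence(solids, radii, temps, frac_fn, threshold=1e-10):
    """Innermost radius and temperature at which each solid first condenses,
    with only one condensate included at a time (no competition for elements)."""
    seq = {}
    for solid in solids:
        hits = np.nonzero(frac_fn(solid, temps) > threshold)[0]
        if hits.size:
            i = hits[0]
            seq[solid] = (radii[i], temps[i])
    return dict(sorted(seq.items(), key=lambda kv: kv[1][0]))

for solid, (r_c, T_c) in condensation_sequence(TOY_COND_TEMP, radii, temps,
                                               toy_condensed_fraction).items():
    print(f"{solid:6s} first condenses at r ~ {r_c:4.1f} R*  (T ~ {T_c:4.0f} K)")
```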
Results from the chemical equilibrium calculations regarding condensates are shown in Fig. 5 and Fig. 6, where we show the radius (bottom x-axis) and temperature (top x-axis) at which each condensate appears. In Fig. 5 the condensation radius (and temperature) of some relevant condensates is shown as a function of the C/O ratio (y-axis). In Fig. 6 we put all the condensates that appear in the 1-10 R* range in M- and C-type atmospheres on an abundance scale (y-axis). The abundance scale is given by the maximum mass ratio relative to H that each condensate may attain, according to the abundances of its constituent elements (see Table 1). This is to be compared with the typical dust-to-gas mass ratios derived in AGB envelopes, in the range (1-4)×10^-3 (Ramstedt et al. 2008) and indicated by a gray horizontal band in Fig. 6. Thus, condensates with maximum attainable mass ratios close to 10^-3 can be major constituents of dust, while those for which the maximum attainable mass ratio is substantially lower than 10^-3 can only be minor components of circumstellar dust. Still, minor condensates could be very important if they are predicted to be among the first condensates, in which case they can serve as condensation nuclei and accelerate the condensation of other compounds. Chemical equilibrium predicts that the first condensates in carbon-rich atmospheres should be carbon, followed by TiC, and then SiC, while in oxygen-rich atmospheres Al2O3 should be the first condensate to appear, followed by minerals like hibonite (CaAl12O19), grossite (CaAl4O7), scandia (Sc2O3), perovskite (CaTiO3), gehlenite (Ca2Al2SiO7), and spinel (MgAl2O4). However, these conclusions change for C/O ratios close to one (see Fig. 5). For example, the condensation sequence C-TiC-SiC changes to TiC-C-SiC for C/O ratios below 1.02, and to TiC-SiC-C for even lower C/O ratios, in the range 0.96-1.00. These values hold for the specific pressure-temperature profile adopted here, which yields pressures between a few 10^-10 and 10^-8 bar in the region where these compounds are expected to condense (see Fig. 1). For higher pressures, the C/O ratios that separate the different condensation sequences shift to higher values. For example, at pressures of 10^-6-10^-5 bar, TiC condenses before carbon if C/O < 1.1 (Lodders & Fegley 1997). Evidence of TiC serving as a nucleation site for carbon dust has been found from the analysis of presolar grains (Bernatowicz et al. 1991, 1996), which implies formation at low C/O ratios and/or high pressures (Lodders & Fegley 1997). In oxygen-rich atmospheres with C/O ratios in the range 0.82-0.96, Sc2O3, rather than Al2O3, is predicted to be the first condensate. Although Sc2O3 can only be a minor condensate due to the low elemental abundance of Sc (see the upper panel in Fig. 6), it may provide the condensation nuclei for oxides and silicates. Therefore, depending on the C/O ratio, either carbon, TiC, Sc2O3, or Al2O3 would be the first condensate according to chemical equilibrium. Lodders & Fegley (1999) find that highly refractory condensates involving trace elements like Hf and Zr are also likely to serve as condensation nuclei. Concretely, these authors point to HfO2 and ZrO2 in O-rich atmospheres and ZrC in C-rich stars. These condensates are not included in our calculations. Despite the low elemental abundance of Zr, there is evidence for ZrC in presolar grains (Bernatowicz et al. 1996).
Given the detection of ZrC in presolar grains, it would not be surprising if other highly refractory condensates involving trace elements, such as Sc2O3, were also identified in presolar grains. It is worth noting that for slightly oxygen-rich conditions (C/O = 0.96-1.00), our calculations predict that condensates typical of carbon-rich conditions, like TiC, SiC, and carbon, form well before minerals typical of oxygen-rich conditions, like oxides and silicates (see Fig. 5). This behavior reminds us of that found previously for gas-phase molecules (see Sec. 3.3). That is, for slightly oxygen-rich conditions, the chemical composition shares more features with a carbon-rich mixture than with an oxygen-rich one, and this applies to both gaseous species and condensates. This conclusion has direct consequences for S-type stars. For example, the predicted condensation sequence for an S-type atmosphere with C/O = 1 resembles much more that of a carbon-rich star than that of an M-type atmosphere (see Fig. 5). Our calculations indicate that dust should form very close to the star, at 1-3 R*. Indeed, near-infrared polarimetric interferometric observations of R Dor and W Hya find that dust is already present as close to the star as 1.3-1.5 R* (Khouri et al. 2016; Ohnaka et al. 2017). The calculations also indicate that carbon dust in C-rich atmospheres should form closer to the star than alumina dust in O-rich stars. Similarly, in S-type stars condensation is shifted to slightly larger radii compared to O- and C-rich stars. The limited observational constraints on the composition of dust in AGB envelopes are roughly consistent with the expectations from chemical equilibrium. In the case of M-type stars (see Table 4 and upper panel in Fig. 6), observations indicate that the bulk of the grains is composed of Mg-Fe silicates, which is consistent with the fact that MgSiO3 and Mg2SiO4 are the first condensates among those that can attain dust-to-gas mass ratios above 10^-3. The detection of Al2O3, MgAl2O4, and CaAl12O19, and the possible presence of Ca2Al2SiO7, are also in line with expectations from chemical equilibrium. These are among the first condensates predicted to appear, and all of them may attain moderately high dust-to-gas mass ratios, above 10^-4. In C-type stars (see Table 4 and lower panel in Fig. 6), amorphous carbon and SiC are major constituents of grains according to observations, and this is also in line with the predictions from chemical equilibrium, which clearly point to these two compounds as the first major condensates. In the case of S-type stars, observations seem to point to a chemical composition of dust more similar to that of O-rich stars, with the exception of MgS (see Table 4), while chemical equilibrium favors a more carbon-rich-like dust composition (see Fig. 5).
Fig. 6 (caption): All condensates predicted to appear in the 1-10 R* range in atmospheres of M- and C-type AGB stars (upper and lower panels, respectively). Condensates are located in the diagrams according to their condensation radius (bottom x-axis; the corresponding temperature is given in the top x-axis) and the maximum mass ratio relative to H they can attain. Condensates observed in AGB envelopes (see Table 4) are indicated in magenta. The range of dust-to-gas mass ratios derived by Ramstedt et al. (2008) for envelopes of AGB stars is indicated by a gray horizontal band. In C-type AGB atmospheres, MgS is predicted to condense at temperatures below 628 K (see Table 5), i.e., beyond 10 R*.
We note, however, that the comparison between observations and chemical equilibrium for S stars is difficult because, on the one hand, the observational constraints are more uncertain and, on the other, the predicted condensation sequence is extremely sensitive to the exact C/O ratio. Table 5 presents similar information to that shown in Fig. 6 but in a different manner. We list, in order of appearance, the condensates that are expected to trap each refractory element in M- and C-type atmospheres. The table also lists the maximum fraction of the element that each condensate can trap and the corresponding condensation temperature.
Table 5 (notes): Condensates observed in AGB stars (see Table 4) are highlighted in magenta. (a) Element and abundance log ε, defined as log ε(X) = 12 + log(X/H). (b) Maximum fraction of the element that can be trapped by the condensate. (c) Condensation temperature.
In the main, the solid reservoirs of each element identified in M- and C-type atmospheres are similar to those presented by Lodders & Fegley (1999) in their Table 1, although there are some differences, which probably arise from differences in the thermochemical database of condensates and in the pressures and C/O ratios involved in the calculations. We now discuss each element individually, guided by Table 5.
Magnesium: The first Mg condensate expected in M-type atmospheres is spinel (MgAl2O4), for which there is evidence from infrared observations (Posch et al. 1999) and from the analysis of presolar grains (Nittler et al. 1997). Spinel, however, cannot be a major reservoir of Mg as it can only trap a small fraction of it (up to 4%) owing to the lower abundance of Al. The next Mg condensate predicted is diopside (MgCaSi2O6), which has been tentatively identified in S-type stars (Hony et al. 2009) but not in O-rich envelopes. The presence of diopside in M-type envelopes, which would trap at most 5% of the Mg, depends on whether some Ca is left after condensation of more refractory Ca compounds, like CaTiO3, Ca2Al2SiO7, and CaAl2Si2O8. Next in the condensation sequence of Mg we have forsterite (Mg2SiO4) and enstatite (MgSiO3), the Mg-rich end members of the olivine and pyroxene families, Mg(2-2x)Fe(2x)SiO4 and Mg(1-x)Fe(x)SiO3, respectively, which are expected to be major reservoirs of Mg, in agreement with observations (Molster et al. 2002). There is evidence of Mg in the form of Mg0.1Fe0.9O (Posch et al. 2002), which, according to the condensation temperatures of MgO and FeO, should form from the Mg left after condensation of Mg-rich silicates. MgS is predicted to condense at even larger distances than MgO, and thus little Mg should be available to form it. In C-rich atmospheres, MgS is predicted to form at even larger distances from the star, beyond 10 R* (at temperatures below 628 K) for our radial pressure and temperature profiles. The lower condensation temperature of MgS in C-rich atmospheres, compared to O-rich ones, is related to the different main gaseous reservoirs of sulfur in these two types of sources (SiS in C-rich atmospheres and H2S in O-rich ones), which compete differently with solid MgS for the sulfur. In spite of the large condensation radius of MgS in C-rich sources, this is the only Mg condensate identified so far in C-rich ejecta (Goebel & Moseley 1985). There are several O-containing Mg condensates predicted to appear earlier than MgS in C-rich atmospheres (see Table 5).
Their formation must therefore be inhibited, either because of the difficulty of competing for the oxygen or because more refractory compounds would have trapped most of the Si, Al, and Ca. The formation of MgS at large distances from the AGB star, although somewhat surprising, is consistent with its presence in the outer layers of preexisting grains (Zhukovska & Gail 2008; Lombaert et al. 2012). There is evidence that CS acts as a gas-phase precursor of MgS dust in high mass loss rate C-rich envelopes, as indicated by the decrease in its fractional abundance with increasing envelope density and with increasing flux of the 30 µm feature attributed to MgS dust (Massalkhi et al. 2019). However, in some C-rich envelopes gaseous CS and SiS trap most of the sulfur (Danilovich et al. 2018; Massalkhi et al. 2019), so that little would be left to form MgS dust.
Silicon: In O-rich ejecta, silicon is predicted to condense first in the form of various Ca- and Al-containing silicates: gehlenite (Ca2Al2SiO7), anorthite (CaAl2Si2O8), andalusite (Al2SiO5), and diopside (MgCaSi2O6), although these can only take up a small fraction of the silicon. Among them, there is only a tentative identification of the first expected condensate, gehlenite (Heras & Hony 2005), and no observational indication of the others. Clearly, the first major Si condensates are forsterite (Mg2SiO4) and enstatite (MgSiO3). Silica (SiO2) could also be an important reservoir of Si, as it is predicted to condense at only slightly lower temperatures than forsterite and enstatite. In C-rich atmospheres, the first and major Si condensate is clearly SiC, for which there is evidence from both infrared observations and the analysis of presolar grains (Treffers & Cohen 1974; Bernatowicz et al. 1987). Other major Si condensates are pure Si, Si3N4, and Si2N2O, although they have much lower condensation temperatures.
Iron: Metallic iron is predicted to condense at a temperature of 988 K independently of the C/O ratio. In M-type atmospheres, this would be the first and major solid reservoir of the element, which is consistent with the inference of Fe grains from the modeling of the spectral energy distribution of the O-rich star OH 127.8+0.0 (Kemper et al. 2002). Metallic iron is expected to condense after the silicates. Other Fe condensates like sulfides, oxides, and carbides (see Table 5) can form later on from the Fe left after condensation of pure iron. The detection of Mg0.1Fe0.9O (Posch et al. 2002) is an indication of this. A potentially important reservoir of iron is troilite (FeS), which is predicted to condense at slightly higher temperatures than FeO. In C-rich envelopes, most iron should be in the form of Fe3C, as this carbide is predicted to condense earlier than pure Fe. Lodders & Fegley (1999) find that (Fe,Ni)3P and FeSi could also be important Fe condensates in O-rich and C-rich atmospheres, respectively.
Sulfur: The first S-containing condensate expected in both M- and C-type atmospheres is CaS. This compound, which would condense at a significantly higher temperature in O-rich atmospheres than in C-rich conditions, can take up to 17% of the sulfur. Depending on the degree of depletion of this element from the gas phase, other S-containing condensates may form, e.g., FeS and Ni3S2 in M-type atmospheres and Ni3S2 and MgS in C-rich atmospheres.
The presence of FeS in O-rich ejecta depends on whether some Fe is left after the condensation of pure iron, while, similarly, the formation of Ni3S2 is conditioned by the condensation of pure Ni. The observational evidence of MgS in C-rich envelopes, together with the fact that CaS is the first condensate involving either Ca or S in C-rich atmospheres, strongly points to CaS as a very likely constituent of dust.
Aluminium: Alumina (Al2O3) is predicted to be the first and major Al condensate in O-rich atmospheres, which is in line with observational evidence from infrared observations (Onaka et al. 1989) and from the analysis of presolar meteoritic material (Nittler et al. 1994; Hutcheon et al. 1994). Other major condensates that appear later in the condensation sequence of Al, and that can trap part of the Al not used by alumina, are hibonite (CaAl12O19), grossite (CaAl4O7), gehlenite (Ca2Al2SiO7), spinel (MgAl2O4), anorthite (CaAl2Si2O8), and andalusite (Al2SiO5). There is observational evidence for the presence of some of these condensates, concretely hibonite (Choi et al. 1999), gehlenite (Heras & Hony 2005), and spinel (Posch et al. 1999; Nittler et al. 1997). In C-rich atmospheres, aluminum is predicted to condense at temperatures below 1000 K. The major condensates, in order of appearance, are AlN, Al4C3, and Al2O3. None of them has so far been observed.
Calcium: The first Ca condensates expected in O-rich atmospheres are the calcium aluminum oxides hibonite and grossite (already mentioned when discussing aluminum) and perovskite (CaTiO3). These condensates, however, can only trap a fraction of the calcium. In particular, the low abundance of CaTiO3 may be behind the lack of observational evidence for this mineral in O-rich dust. Major Ca condensates that appear later in the condensation sequence of Ca are the calcium aluminum silicates gehlenite and anorthite. In C-rich atmospheres, calcium is expected to condense at relatively large distances from the AGB star (> 6.5 R*, corresponding to temperatures around or below 800 K). The first major Ca condensate is CaS, followed by the same calcium aluminum oxides and silicates expected in O-rich ejecta. The condensates involving other refractory elements less abundant than calcium are given in Table 5. For several elements, the first and major expected condensate is the same regardless of the C/O ratio. This is the case for Na and K, for which albite (NaAlSi3O8) and orthoclase (KAlSi3O8) are the main expected condensates, for Ni and Cu, which are predicted to condense in pure metallic form, and for Sc, whose main condensate is Sc2O3. In the case of Cr and V, the oxides Cr2O3 and V2O3, respectively, are the main expected condensates in O-rich atmospheres, while in carbon-rich ejecta these two elements are expected to condense in pure metallic form. Finally, for titanium, the main condensate in O-rich conditions is CaTiO3, followed by several titanium oxides, while in C-rich atmospheres TiC is clearly the main expected condensate, which is supported by the analysis of presolar grains (Bernatowicz et al. 1991).
Gas-phase precursors of dust
Condensates can only form in those regions where the gas temperature and pressure make the appearance of solid compounds thermodynamically favorable. That is, condensates cannot appear earlier than predicted by chemical equilibrium, although they can appear later depending on the kinetics of the condensation process.
The first condensates are predicted to form at a given distance from the star, and this process must necessarily occur at the expense of gas-phase atoms and small molecules. For our adopted radial profiles of pressure and temperature, the first condensates are expected to appear when temperatures are below 2000 K in the C-rich case and 1500 K in the O-rich case (see Fig. 5). Although condensation in the expanding and cooling outflow from AGB stars occurs under non-equilibrium conditions (Gail & Sedlmayr 2013), chemical equilibrium can provide a useful starting point to examine which are the most likely gas-phase precursors of the first condensates. Here we focus on the possible gas-phase precursors of the three condensates that are predicted to appear well before any other in C-rich outflows (carbon, TiC, and SiC) and of the first solid expected to condense in O-rich atmospheres: Al2O3. For each condensate, we examine which are the main gaseous reservoirs of the constituent elements, discuss the plausibility of the different reservoirs acting as precursors, and comment on the role that clusters of medium size may play in the formation of condensation nuclei.
Carbon dust in C-rich atmospheres
Carbon dust is expected to be the first condensate in C-rich atmospheres, except for very small C/O ratios. In the region where graphite is expected to form according to chemical equilibrium (1.6 R*; see top left panel in Fig. 7), the main reservoir of carbon is acetylene. Other abundant C-bearing molecules are HCN, CS, C3, and atomic carbon. The fact that HCN and CS contain nitrogen and sulfur, respectively, makes them less likely candidates to act as precursors of dust made up purely of carbon. Thus, chemical equilibrium points to C2H2, C3, and C as the most likely precursors of carbon dust in C-type AGB atmospheres. Gail et al. (1984) consider that C2H2 is the main source of carbon atoms in the synthesis of graphite in C-rich outflows, which would imply a depletion in the gas-phase abundance of acetylene in the region where carbon dust forms. This is in contrast with the study of Fonfría et al. (2008), who modeled rovibrational lines of C2H2 in the carbon star IRC +10216 and found that acetylene maintains a constant abundance out to ~20 R*, a distance at which most carbon dust should have already formed. Atomic carbon is the main predicted reservoir of the element at the photosphere, although its abundance declines steeply with increasing radius. If, as predicted by chemical equilibrium, carbon dust forms inside 2 R*, then atomic carbon may provide the necessary carbon to form dust. In a recent experiment designed to mimic the formation of carbon dust in evolved stars, atomic carbon was used as the precursor, leading to the synthesis of amorphous carbon nanoparticles (Martínez et al. 2019). If atomic carbon is the precursor of carbon dust, then the formation of the first condensation nuclei must proceed through clusters Cn of increasing size. In fact, this is the theoretical scenario used by Gail et al. (1984) to describe the process of nucleation during the formation of graphite in C-rich atmospheres. Our calculations only include Cn clusters up to C5. Of these, the most abundant in the region around 1.6 R* is C3, while C2, C4, and C5 have lower abundances. It is unclear whether carbon clusters larger than C5 could be stable enough to be present with a significant abundance.
In any case, the large abundance of C3 makes it a good candidate to act as a precursor in the formation of carbon dust. Observational constraints on the radial variation of the C3 abundance in the innermost regions of C-rich envelopes, currently restricted to the outer envelope (Hinkle et al. 1988), could shed light on this.
Titanium carbide dust in C-rich atmospheres
Titanium carbide is predicted to be the first condensate in the atmospheres of C stars with very low C/O ratios and in S-type atmospheres. The finding of TiC grains embedded in presolar graphitic material also points to TiC as a first condensate (Bernatowicz et al. 1991). Given the relative elemental abundances of titanium and carbon and the 1:1 stoichiometry of solid TiC, the formation of titanium carbide dust will be limited by the precursor providing Ti. Titanium carbide is expected to condense at a radius of 2.3 R* in our fiducial C-rich atmosphere. In this region, the main reservoir of titanium is atomic Ti (see top right panel in Fig. 7). Gail & Sedlmayr (2013) suggest that atomic Ti reacting with C2H2 is the net reaction responsible for the formation of TiC dust. Indeed, acetylene is the main reservoir of carbon at 2.3 R*, although atomic carbon also has an abundance comparable to that of atomic Ti and could act as the carbon-supplying precursor. There are also several gaseous molecules that contain both titanium and carbon and could play a role in the formation of TiC dust. The most obvious is the diatomic molecule TiC, which however is predicted to have too low an abundance. A formation process of TiC grains through the sequential addition of TiC molecules to form clusters (TiC)n of increasing size n is therefore unlikely, owing to the small amount of Ti locked in TiC molecules. Among the small Ti-C molecules, TiC2 is the most abundant, although it remains orders of magnitude below atomic Ti. Large TixCy clusters become abundant at the expense of atomic Ti and small Ti-bearing molecules in the region where TiC dust condensation is expected. Although this coincidence may be accidental, it strongly suggests that the assembly of large TixCy clusters could be related to the formation of TiC condensation nuclei. The most stable and abundant TixCy clusters are Ti8C12 and Ti13C22, the former displacing atomic Ti as the main reservoir of titanium at radii larger than 2.3 R*. Therefore, Ti8C12 emerges as a very attractive candidate gas-phase precursor of TiC dust. Clearly, formation through addition of Ti8C12 monomers leading to (Ti8C12)n of increasing n does not preserve the stoichiometry of solid TiC. If Ti8C12 acts as a precursor, some rearrangement in which Ti atoms are incorporated or carbon atoms are lost is needed during the growth of TiC condensation nuclei. The condensation of TiC in the outflows of carbon stars has been studied theoretically under non-equilibrium conditions by Chigai et al. (1999). These authors consider the growth of (TiC)n clusters of increasing size through chemical reactions involving atomic Ti and C2H2, following the formalism described by Gail & Sedlmayr (1988) for the heteromolecular formation and growth of carbon grains. Chigai et al. (1999) find that the formation of TiC cores covered by graphite mantles, in agreement with the constraints from presolar grains (Bernatowicz et al. 1991), is possible over certain ranges of mass loss rate and gas outflow velocity. The study of Chigai et al.
(1999), however, does not investigate the detailed chemical pathways leading to the formation of TiC molecules and small (TiC)n clusters, mainly because the relevant reactions and rate constants are unknown.
Silicon carbide dust in C-rich atmospheres
Silicon carbide is expected to condense after carbon and TiC for C/O ratios higher than 1.02, and after TiC for any C/O ratio (see Fig. 5). This implies that SiC grains may nucleate heterogeneously, that is, on preexisting condensation nuclei of carbon and/or TiC. However, the analysis of a presolar SiC grain containing TiC crystals seems to indicate that SiC and TiC nucleated and grew independently (Bernatowicz et al. 1992), which implies that SiC can nucleate homogeneously. Silicon carbide is expected to condense at 2.8 R*, and in this region the main reservoir of silicon is atomic Si while the main reservoir of carbon is C2H2 (see bottom left panel in Fig. 7). Therefore, these two species are candidates for gas-phase precursors of SiC dust, as suggested by Gail & Sedlmayr (2013). The role of acetylene in the formation of dust is however in question owing to the lack of a radial abundance decline inferred for IRC +10216 (Fonfría et al. 2008), as discussed in Sec. 6.1. Alternative candidates are molecules containing Si-C bonds, some of which are predicted to be abundant in C-rich atmospheres. Concretely, SiC2, in the condensation region of SiC dust, Si2C, slightly farther out, and Si5C, at even larger radii, are predicted to be the most abundant carriers of Si-C bonds (see bottom left panel in Fig. 7). The molecules SiC2 and Si2C are indeed observed to be abundant in C-rich atmospheres (Fonfría et al. 2014; Cernicharo et al. 2015; Massalkhi et al. 2018). Moreover, there is evidence that SiC2 is a gas-phase precursor of SiC dust. On the one hand, Fonfría et al. (2014) infer an abundance decline with increasing radius in the dust formation region of IRC +10216. On the other, Massalkhi et al. (2018) found that the abundance of SiC2 in C-rich AGB envelopes decreases with increasing envelope density. Both observational facts point to a depletion of gaseous SiC2 to form SiC dust grains. The molecule Si2C is as abundant as SiC2 and could also act as a gas-phase precursor of SiC dust. The formation of SiC dust in the outflows of C-rich AGB stars has been studied theoretically by Yasuda & Kozasa (2012). These authors present chemical equilibrium abundances for SixCy species which are in line with ours. This is not surprising given that we use the same thermochemical properties for SixCy species as they do, i.e., those of Deng et al. (2008). Yasuda & Kozasa (2012) further investigated the kinetics of formation of SiC dust in the framework of a dust-driven wind model, considering a nucleation process consisting of the addition of SiC and Si2C2 molecules to form (SiC)n clusters of increasing n. This clustering sequence is also favored by the quantum chemical calculations of (SiC)n clusters by Gobrecht et al. (2017). While small condensation nuclei may form at the expense of SiC and Si2C2, the abundances of these molecules are too low to provide the required amount of SiC dust. Observations yield dust-to-gas mass ratios of (1-4) × 10^-3 (Ramstedt et al. 2008) and mass ratios between SiC and carbon dust of 0.02-0.25 (Groenewegen et al. 1998), which results in a mass ratio between SiC dust and H2 of (0.2-10) × 10^-4.
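Written out explicitly, this is a back-of-the-envelope restatement of the numbers just quoted, assuming the gas mass is dominated by H2 and the dust mass by carbon grains:

\[
\frac{m(\mathrm{SiC})}{m(\mathrm{H_2})} \simeq \frac{m_\mathrm{dust}}{m_\mathrm{gas}} \times \frac{m(\mathrm{SiC})}{m(\mathrm{carbon\ dust})} \approx (1\text{-}4)\times10^{-3} \times (0.02\text{-}0.25) \approx (0.2\text{-}10)\times10^{-4}.
\]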
The calculated mole fraction of Si2C2 is at most 10^-8, which translates into a mass ratio relative to H2 of 2 × 10^-7, at least two orders of magnitude below the observational value. The chances of the SiC molecule acting as the main gas-phase precursor of SiC dust are even lower owing to its low predicted abundance, which is in line with the abundance upper limit derived from observations (Velilla Prieto et al. 2015). In summary, nucleation may occur by addition of SiC and Si2C2 molecules to (SiC)n clusters, although the mass of condensation nuclei grown by this process is limited by the low gas-phase abundance of SiC and Si2C2. The molecules SiC2 and Si2C thus emerge as the two most likely gas-phase precursors of SiC dust.
Alumina dust in O-rich atmospheres
Alumina is the first major condensate predicted to appear in O-rich atmospheres. Our chemical equilibrium calculations put its condensation radius at 2.6 R*. In this region, the major carriers of aluminum are atomic Al and AlOH (see bottom right panel in Fig. 7). Other carriers of Al in the condensation region of alumina are AlO, Al2O, AlF, and AlCl. Of these, atomic Al and the molecules containing an Al-O bond emerge as the most likely gas-phase precursors of Al2O3. Gail & Sedlmayr (2013) suggest that atomic Al reacting with water, the major carrier of oxygen other than CO, drives the condensation of Al2O3. Gobrecht et al. (2016) modeled the kinetics of formation of alumina dust in M-type atmospheres. In their chemical scheme, (Al2O3)2 clusters, the seeds of condensation nuclei, form by three-body recombination of Al2O3 molecules. The formation of Al2O3 molecules relies on the oxidation, by reaction with H2O, of Al2O2, which is formed by three-body recombination of AlO, which in turn is formed in the reaction of atomic Al with OH. Therefore, in the scenario depicted by these authors, the starting reservoir of aluminum is atomic Al, with AlO, Al2O2, and Al2O3 acting as intermediate species. In the more recent model by Boulangier et al. (2019), in which the chemical kinetics scheme was revised with respect to Gobrecht et al. (2016), Al2O3 molecules are not efficiently formed, which prevents the growth of large (Al2O3)n clusters. Boulangier et al. (2019) point to limitations in the chemical network used for Al-bearing species as the reason for the low abundance of Al2O3. From an observational point of view, two potential gas-phase precursors of alumina dust have been observed around M-type stars: AlO and AlOH (Kamiński et al. 2016; Decin et al. 2017), although reliable radial abundance distributions have not been derived, which prevents us from evaluating whether these molecules act as gas-phase precursors of alumina dust.
Summary
We investigated theoretically the chemical composition of AGB atmospheres of M-, S-, and C-type by means of chemical equilibrium calculations using a recently developed code. We compiled a large dataset of thermochemical properties for 919 gaseous and 185 condensed species involving 34 elements. We consider for the first time a large number of titanium-carbon clusters. Concretely, we have computed thermochemical data for all TixCy clusters with x = 1-4 and y = 1-4 and for various stable large clusters like Ti3C8, Ti4C8, Ti6C13, Ti7C13, Ti8C12, Ti9C15, and Ti13C22.
We studied the chemical composition in the 1-10 R* region of a generic AGB atmosphere adopting realistic radial profiles of temperature and pressure based on constraints from recent observations and results from hydrodynamic models. We confronted the predictions of chemical equilibrium with the latest observational constraints. Chemical equilibrium reproduces reasonably well the observed abundances of most of the parent molecules detected in AGB envelopes. However, there are serious discrepancies between chemical equilibrium and observations for some parent molecules, which are observed with abundances several orders of magnitude above the expectations from chemical equilibrium. The species in conflict are HCN, CS, NH3, and SO2 in M-type stars, H2O and NH3 in S-type stars, and the hydrides H2O, NH3, SiH4, and PH3 in C-type stars. We systematically surveyed the budget of each element, examining the main reservoirs (see Appendix A) and identifying several molecules that have not yet been observed in AGB atmospheres but are predicted with non-negligible abundances. The most promising detectable molecules are SiC5, SiNH, SiCl, PS, HBO, and the metal-containing molecules MgS, CaS, CaOH, CaCl, CaF, ScO, ZrO, VO, FeS, CoH, and NiS. For most of them, sensitive high-angular-resolution observations with telescopes like ALMA offer the best chances of detection. We also investigated which condensates are predicted to appear and at which radius they are expected according to chemical equilibrium. In agreement with previous studies, we found that carbon, TiC, and SiC are the first condensates predicted to appear in C-rich outflows, while in O-rich atmospheres Al2O3 is the first major expected condensate. Chemical equilibrium indicates that the most probable gas-phase precursors of carbonaceous dust are acetylene, atomic carbon, and/or C3, while silicon carbide dust is most probably formed at the expense of the molecules SiC2 and Si2C. As concerns TiC dust, most titanium is atomic in the inner regions of AGB atmospheres and thus atomic Ti is a likely supplier of titanium during the formation of TiC dust. Interestingly, we found that, according to chemical equilibrium, large titanium-carbon clusters like Ti8C12 and Ti13C22 become the major reservoirs of titanium at the expense of atomic Ti in the region where TiC condensation is expected to occur. This strongly points to large TixCy clusters as important intermediate species during the formation of the first condensation nuclei of TiC. Finally, in the case of Al2O3 dust, chemical equilibrium indicates that the main gas-phase precursor must be atomic Al or the molecules AlOH, AlO, and Al2O, which are the major carriers of Al-O bonds in O-rich atmospheres.
Appendix A: Element by element gas budget
Here we review the main gas-phase reservoirs of each element in AGB atmospheres according to chemical equilibrium. The calculations include only gaseous species and use as input the elemental composition given in Table 1 for AGB stars of M-, S-, and C-type and the pressure-temperature profile discussed in Sec. 2.4, which is taken as representative of AGB atmospheres. We do not discuss the noble gases He, Ne, and Ar, which are essentially present as neutral atoms, nor hydrogen, which is mostly in the form of H2 except in the hot inner atmosphere (> 1700 K for our adopted pressure-temperature profile), where atomic H is more abundant.
The rest of the elements included comprise the non-metals B, C, N, O, F, Si, P, S, and Cl, metals such as Al, the alkali metals Li, Na, K, and Rb, the alkaline-earth metals Be, Mg, Ca, Sr, and Ba, and the transition metals Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, and Zr. The importance of the various types of reservoirs, whether atomic, molecular, or in the form of condensates, differs for each element. Here we focus solely on the gas-phase budget and do not consider condensates. However, as discussed in Sec. 5, most elements, especially metals, tend to form thermodynamically stable condensates below a given temperature. Therefore, we need to keep in mind that in the regions where the temperature has dropped below the relevant condensation temperature for each element, the abundance of the element in the gas phase must decrease, and so all the abundances calculated here for the gaseous species should be scaled down in those regions. As concerns the gas-phase budget, in general, non-metals tend to be in molecular form, although neutral atoms can also be a major reservoir in the hot inner atmosphere for many of them, e.g., C, Si, and B in carbon-rich stars, O and S in oxygen-rich stars, and P and Cl regardless of the C/O ratio. For metals, the atomic reservoir tends to be more important than for non-metals. Ionized atoms can be abundant in the hottest regions, and it is not rare that neutral atoms dominate largely over any molecular form throughout the entire extended atmosphere. We caution, however, that for some metals the number of molecules for which thermochemical data are available is small, and therefore there could be important molecular reservoirs of metals that are missed.
Appendix A.1: Carbon
The main gas reservoirs of carbon are shown in the two upper panels of Fig. A.1. It is seen that most carbon is in the form of CO. This implies that for M-type stars, which have C/O < 1, there is no carbon-bearing molecule with a significant abundance, with the exception of CO2. On the other hand, in C-type stars, where the C/O ratio is higher than one, a great variety of carbon-bearing molecules form with large abundances. Among them there are pure carbon clusters, hydrocarbons, and different stable molecules in which a carbon atom is bonded to a non-metal, such as N, S, Si, or P, or to a metal, like Na, K, Al, or Ti. The situation for S-type stars with C/O = 1 resembles that of C-type stars but with the abundances of carbon-bearing molecules scaled down by some orders of magnitude. In C-type stars, the major reservoir of carbon not locked into CO is atomic carbon in the hot innermost atmosphere and C2H2 elsewhere. Only at large radii (> 10 R*) does CH4 become the major reservoir, although it is unlikely that chemical equilibrium regulates the chemical composition at such large radii. At large radii, polycyclic aromatic hydrocarbons (PAHs) have also been predicted to become important carriers of carbon (Tejero & Cernicharo 1991; Cherchneff & Barker 1992). However, PAHs are not observed in envelopes around AGB stars, and thus it is uncertain whether they effectively form in these environments. Other major reservoirs of carbon are HCN, CS, and C3, while at a given distance from the star (3-5 R*) silicon, titanium, and aluminum carbides of medium to large size (SiC2, Si2C, Si3C, Si5C, Ti8C12, and Al2C2) become increasingly abundant.
Appendix A.2: Oxygen
In the two lower panels of Fig. A.1 we show the calculated abundances of the most abundant oxygen-bearing molecules.
Similarly to the case of carbon, the very high abundance of CO means that in C-type stars no O-bearing species is present with a significant abundance, with the exception of SiO and Al2O, which become important reservoirs of silicon and aluminum, respectively, at large radii. By contrast, in oxygen-rich atmospheres many different O-bearing molecules form abundantly, mostly oxides, hydroxides, and inorganic acids. Apart from CO, most oxygen is atomic in the surroundings of the AGB star and in the form of H2O elsewhere. Additional important reservoirs of oxygen are the radical OH, which is very abundant in the hot inner atmosphere, and SiO, which is a very stable molecule that locks most of the silicon. Other molecules present at a lower level of abundance are AlOH, CO2, SO, and PO. At large radii (> 5 R*), polyatomic molecules like the hydroxides Al(OH)3 and Ca(OH)2 also become quite abundant. In S-type stars the situation resembles that of carbon stars, with a paucity of oxygen-bearing molecules and only SiO and Al2O reaching relatively high abundances.
Appendix A.3: Nitrogen
The main reservoir of nitrogen in AGB atmospheres, regardless of the C/O ratio, is clearly N2 (see top panel in Fig. A.2). Unfortunately, this species is very difficult to detect and has never been observed in the atmosphere or envelope of an AGB star. Only HCN competes with N2 in abundance in carbon-rich atmospheres, and to a lesser extent in S-type atmospheres. Neutral atoms are not an important reservoir of nitrogen, as they would need temperatures in excess of 3000 K to compete in abundance with N2. There are other nitrogen-bearing molecules which are present at a lower level. The metastable isomer HNC and the radical CN reach relatively high abundances, comparable to or somewhat below that of HCN, in the hottest inner atmosphere of S- and C-type stars, although the HCN/HNC and HCN/CN abundance ratios experience an important decline with decreasing temperature, and thus with increasing radius. At large radii (> 5 R*), molecules like SiNH and the metal cyanides NaCN and KCN reach non-negligible abundances in S- and C-type stars. In oxygen-rich atmospheres, the only N-bearing molecule that reaches a non-negligible abundance, apart from N2, is NO, which is calculated to have a mole fraction of ~10^-8 in the inner atmosphere, although it decreases rapidly with increasing radius.
Appendix A.4: Silicon
The calculated abundances of Si-bearing species are shown in the two lower panels of Fig. A.2. The major carrier of silicon in M-type atmospheres is clearly SiO, while SiS and SiS2 also become abundant at large radii (> 5 R*). In C-type atmospheres, atomic silicon, in the inner atmosphere, and SiS, in the outer parts, are the most abundant reservoirs. SiO is also an important Si-bearing species in carbon-rich atmospheres, but it only reaches a high abundance, slightly below that of SiS, beyond ~5 R*. In the atmospheres around S-type stars, the three species Si, SiS, and SiO are all major reservoirs of silicon, each one in a different region. In carbon-rich atmospheres, the availability of carbon not locked into CO brings about a variety of silicon-carbon clusters of the type SixCy, some of them present with very high abundances (see bottom panel of Fig. A.2). The most abundant are clearly SiC2, Si2C, and Si5C, the latter being a major reservoir only at large radii (> 6 R*). Other clusters predicted at a lower level are Si3C and Si2C2.
The calculated abundances are of the same order as those reported by Yasuda & Kozasa (2012), who also used thermochemical data for SixCy clusters from Deng et al. (2008). These silicon-carbon clusters are also present in S-type atmospheres, with abundances comparable to or somewhat lower than in carbon-rich atmospheres, while they are completely absent in oxygen-rich atmospheres.
Appendix A.5: Sulfur
The calculated abundances of sulfur-bearing species are shown in the two upper panels of Fig. A.3. Atomic sulfur is the main reservoir of this element in the inner atmosphere of M-type stars and also for S-type stars, although restricted to a smaller region around the star. In carbon-rich atmospheres, CS replaces atomic sulfur as the main reservoir in the inner atmosphere. The other major carrier of sulfur in S- and C-type stars is SiS, while in M-type stars molecules like S2, H2S, and SiS lock most of the sulfur in the region where atomic S drops in abundance. The radical SH is also an important S-bearing species, especially in M-type stars, where SO, PS, and SiS2 also trap a non-negligible amount of sulfur.
Appendix A.6: Phosphorus
In the bottom panel of Fig. A.3 we show the calculated abundances of the most abundant P-bearing molecules. Most of the phosphorus is atomic in the inner atmosphere of S- and C-type stars, while at larger radii (> 3 R*) the molecules HCP and P2 become the major carriers of this element. In M-type stars, atomic P is the main reservoir only in the very close surroundings of the star, and molecules like PO, PS, P2, and HPO2 (the latter only at large radii, ~10 R*) lock most of the phosphorus. The calculations presented here differ somewhat from those presented in a previous study by Agúndez et al. (2007). For example, our calculations indicate that in carbon-rich atmospheres P, P2, and HCP are all major carriers of phosphorus, while the calculations carried out by Agúndez et al. (2007) pointed to HCP as the main, almost exclusive, reservoir of this element. The reason for the difference most likely resides in the formation enthalpy of HCP. The value at 298.15 K given in the NIST-JANAF compilation (Chase 1998) is 149.9 ± 63 kJ mol^-1 (note the substantial uncertainty). The thermochemical data adopted here for HCP are taken from the Third Millennium Thermochemical Database (Goos, Burcat, & Ruscic) and use a substantially higher formation enthalpy, 216 kJ mol^-1, based on a more recent revision (see NIST CCCBDB). Another important difference concerns P4O6, which Agúndez et al. (2007) found to be the main reservoir of phosphorus beyond ~4 R* in oxygen-rich atmospheres. Here we find a negligible abundance for this species. The reason is again related to the formation enthalpy. The NIST-JANAF compilation (Chase 1998) gives a value at 298.15 K of −2214.3 ± 33.5 kJ mol^-1. The Third Millennium Thermochemical Database (Goos, Burcat, & Ruscic) states that the latter value is erroneous and uses a much higher value, −1606 kJ mol^-1, from the compilation of Gurvich et al. (1989). [...] In S- and C-type atmospheres, atomic boron is the main carrier of the element. In the atmospheres of M-type stars the situation is different: atomic boron is not a major carrier, and the molecules BO, HBO, and HBO2 take up most of the element. At radii larger than 7-8 R*, where chemical equilibrium is less likely to hold, the alkali metaborate species NaBO2 and KBO2 are predicted to be the main reservoirs of boron in oxygen-rich atmospheres.
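Returning briefly to the phosphorus case above: as a rough, order-of-magnitude illustration (our own estimate, not taken from the text), and neglecting entropy differences and the coupled equilibria of the other P-bearing species, raising the formation enthalpy of HCP by ΔΔfH ≈ 66 kJ mol^-1 suppresses its equilibrium abundance relative to its dissociation products by roughly

\[
\exp\!\left(\frac{\Delta\Delta_f H}{RT}\right) \approx \exp\!\left(\frac{66\,000\ \mathrm{J\,mol^{-1}}}{8.314\ \mathrm{J\,mol^{-1}\,K^{-1}}\times 2000\ \mathrm{K}}\right) \approx 50
\]

at T ≈ 2000 K (and by a factor of a few thousand at 1000 K), which is enough to demote HCP from an almost exclusive phosphorus reservoir to one of several comparable carriers.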
Appendix A.10: Aluminum
The aluminum budget is shown in the lower panel of Fig. A.5. This element is mostly in the form of neutral atoms at the photosphere of AGB stars, regardless of the C/O ratio, while Al+ is the second most abundant carrier and becomes the major reservoir at temperatures above ~3000 K. At distances larger than a few stellar radii, atomic aluminum begins to lose importance in favor of molecules. In M-type atmospheres, hydroxides like AlOH and Al(OH)3, the latter only at large radii (> 9 R*), are major carriers of aluminum, while molecules like AlO, Al2O, AlCl, and AlF are also predicted to trap a significant fraction of the element. In the atmospheres of S- and C-type stars, the major molecular reservoirs of aluminum are the halides AlCl and AlF, together with Al2O at radii larger than ~7 R*.
Appendix A.11: Alkali metals: Li, Na, K, and Rb
The calculated abundances of species containing the alkali metals Li, Na, K, and Rb are shown in Fig. A.6. The four elements show a similar behavior. That is, most of the element is in atomic rather than in molecular form. Ionized atoms are the major reservoir at the photosphere, while neutral atoms dominate from radial distances not too far from the AGB star. The importance of ionized atoms increases with atomic number, following the decrease in the ionization energy. As concerns the molecular reservoir, the most important molecules for all the alkali metals are the chlorides. In the case of lithium, LiCl even becomes the major carrier of Li at large radii, especially in M-type atmospheres. For Na, K, and Rb, the corresponding chloride (NaCl, KCl, and RbCl, respectively) traps a significant fraction of the alkali metal regardless of the C/O ratio. In S- and C-type atmospheres, the cyanides NaCN and KCN also become abundant at large radii, while in M-type atmospheres the alkali metaborate species LiBO2, NaBO2, KBO2, and RbBO2 are also important carriers of alkali metals. In general, molecules are not the main reservoirs of alkali metals. However, the high elemental abundance of Na and K (~10^-6 and ~10^-7 relative to H, respectively) makes it possible to observe molecules like NaCl, KCl, NaCN, and KCN in envelopes of AGB stars. In the case of Li and Rb, these elements can experience significant abundance enhancements compared to the Sun in some AGB stars, but their mean abundances remain low (~10^-12 for Li and ~10^-10 for Rb, relative to H), which makes it difficult to detect molecules like LiCl and RbCl.
Appendix A.12: Alkaline-earth metals: Be, Mg, Ca, Sr, and Ba
In Fig. A.7 we show the calculated abundances of species containing the alkaline-earth metals Be, Mg, Ca, Sr, and Ba. As with the alkali metals, alkaline-earth metals are mostly in atomic form in the photosphere of AGB stars and also, to a large extent, in the rest of the extended atmosphere. Also, as occurs for the alkali metals, ionized atoms become increasingly important with increasing atomic number, and thus decreasing ionization energy. They are of little importance for Be but a major reservoir in the case of Ba. Molecules can also trap a more or less important fraction of the alkaline-earth metal, depending on the element and on the radial distance from the AGB star. Beryllium is essentially in the form of neutral atoms in S- and C-type atmospheres, while in M-type stars molecules like Be(OH)2 and Be4O4 become major reservoirs of this element beyond ~4 R*.
The abundances reached by these molecules are however low, owing to the low intrinsic abundance of Be (~10^-11 relative to H). In the case of magnesium, neutral atoms are clearly the major reservoir throughout the entire extended atmosphere for any C/O ratio. The only Mg-bearing molecules present with non-negligible abundances are MgS and MgO, which reach mole fractions between ~10^-10 and a few times 10^-9 in oxygen-rich atmospheres. In S- and C-type atmospheres, Mg-bearing molecules are largely absent. Calcium is also mostly atomic in AGB atmospheres regardless of the C/O ratio. However, some molecules like CaOH, Ca(OH)2, CaS, CaCl, CaCl2, CaF, and CaF2 form with relatively high abundances, especially in oxygen-rich atmospheres. In S- and C-type atmospheres, no Ca-bearing molecule is predicted with a significant abundance, except for CaCl and CaCl2 at large radii (~10 R*). The situation of Sr resembles that of Ca, with atoms being the major reservoir and some hydroxides, halides, and the monosulfide trapping a fraction of Sr in oxygen-rich atmospheres. As an s-process element, Sr is enhanced in AGB atmospheres compared to the Sun, but its abundance is still substantially lower than that of Ca, resulting in very low mole fractions for Sr-bearing molecules (< 10^-10) and little prospect of detecting any of them. The last alkaline-earth metal included is Ba. Similarly to Ca and Sr, most barium is atomic in AGB atmospheres, although in M-type atmospheres BaO and BaS emerge as two important reservoirs of this element, with mole fractions of the order of 10^-10. In fact, there is evidence of BaO in M-type atmospheres from near-infrared spectra (Dubois 1977). This author also identified BaF and BaCl in M- and S-type atmospheres, respectively, although our calculations show low abundances for these halides. Their presence in such atmospheres could be a consequence of an enhancement in the abundance of the s-process element Ba over the values adopted by us.
The budgets of the transition metals are shown in Fig. A.8 (Sc, Ti, Zr, and V), Fig. A.9 (Cr, Mn, Fe, and Co), and Fig. A.10 (Ni, Cu, and Zn). In general, transition metals tend to be mostly present as neutral atoms in S- and C-type atmospheres, while in oxygen-rich atmospheres, apart from neutral atoms, oxides are also an important reservoir. There seems to be a trend in which, as one moves from left to right in the periodic table, the importance of oxides in M-type atmospheres decreases in favor of atoms or other molecules like sulfides and hydrides. For example, oxides are a major reservoir for Sc, Ti, Zr, and V, but not for Cr, Mn, Fe, Ni, Cu, or Zn. These generic conclusions are however to be taken with caution, because there could be an important problem of completeness regarding the metal-bearing molecules for which thermochemical data are available. An example is provided by titanium, for which the availability of thermochemical data for the many titanium-carbon clusters computed in this work reveals that some large TixCy clusters become the major reservoir of titanium in S- and C-type stars over an important part of the atmosphere. The same could also happen for other metals for which thermochemical data of metal-carbon clusters are currently missing. Another example is provided by cobalt, for which thermochemical data are only available for a few Co-bearing molecules like CoH and some halides. Unlike the rest of the transition metals, cobalt is predicted to be essentially in the form of CoH.
However, including in the calculations Co-bearing molecules like oxides or sulfides, for which thermochemical data are currently missing, may change the situation. The Sc, Ti, Zr, and V budgets (shown in Fig. A.8) share some similarities. In S- and C-type atmospheres, the metal is mostly in the form of neutral atoms. This is clearly the case for Sc and V, although for Ti and Zr some molecules become major carriers of the metal over a certain region of the atmosphere. In the case of Ti, large titanium-carbon clusters (mostly Ti8C12) trap most of the Ti at radii larger than ~2 R*, while for Zr, the molecules ZrO and ZrCl2 are also major carriers at radial distances > 6 R*. We note, however, that if thermochemical data were available for metal-carbon clusters MxCy (where M stands for the metal) involving Sc, Zr, and V, the budget of these elements in S- and C-type atmospheres could change. The molecules TiS and ZrS have been identified in S-type stars (Hinkle et al. 1989; Jonsson et al. 1992; Joyce et al. 1998). Our calculations, however, result in a low mole fraction (< 10^-12) for TiS in S-type atmospheres, while in the case of ZrS we lack thermochemical data for it. We note that, similarly to the case of titanium-carbon clusters in carbon-rich atmospheres, large metal-oxygen clusters with specific stoichiometries could also be fairly abundant in oxygen-rich atmospheres, as illustrated by the case of V, where V4O10 is a major reservoir at large radii. Calculations of thermochemical data for such clusters (MxOy) should help shed light on this. For Cr, Mn, Fe, Ni, Cu, and Zn, neutral atoms are clearly the major reservoir of the metal over the entire extended atmosphere and for any C/O ratio. For the most abundant of these elements, some molecules can reach non-negligible abundances. In the case of Cr (top panel in Fig. A.9), the molecules CrO, CrS, and CrCl reach mole fractions of the order of 10^-10 in oxygen-rich atmospheres. The only Mn-bearing molecule predicted with a non-negligible abundance is MnH, which has a calculated mole fraction of around 10^-10 in atmospheres of any chemical type (see second panel from the top in Fig. A.9). For iron (third panel from the top in Fig. A.9), the molecules FeS and FeO are present with mole fractions up to ~4 × 10^-8 and ~3 × 10^-10, respectively, in M-type atmospheres, while Fe(OH)2 also becomes abundant at large radii. The hydride FeH has been observed at near-infrared wavelengths toward S-type stars, although its abundance has not been constrained (Clegg & Lambert 1978). According to our calculations, the abundance of FeH is insensitive to the C/O ratio and thus it is expected with the same abundance in M-, S-, and C-type atmospheres. The maximum predicted abundance, which is reached at the stellar photosphere, is however somewhat low (slightly below 10^-10). The only Ni-bearing molecule with a non-negligible abundance is NiS, which becomes increasingly abundant with increasing radius in oxygen-rich atmospheres (see top panel in Fig. A.10). Finally, for Cu and Zn, no molecule is predicted with a significant abundance. The case of cobalt (see bottom panel in Fig. A.9) deserves some discussion. This is the only transition metal for which a hydride, CoH, is found to be by far the major carrier of the element. Indeed, the Co budget is completely different from that of any other transition metal discussed here.
Chemical equilibrium predicts that CoH is more abundant than atomic Co by orders of magnitude, and this applies to atmospheres with any C/O ratio. For the other transition metals discussed here, neutral atoms are the major reservoir of the metal, or at least an important reservoir in the hottest regions of the atmosphere. This implies that CoH is a rather stable species. The thermochemical data for this molecule are taken from Barklem & Collet (2016). The other Co-containing molecules included in the calculations are the halides CoCl, CoCl2, CoCl3, Co2Cl4, and CoF2, which are all predicted to have negligible abundances compared to CoH. These halides are thus much less stable than CoH. It is clear that for cobalt there is a problem of incompleteness in the set of molecules for which thermochemical data are available. It would be desirable to have such data for potentially abundant molecules such as oxides, hydroxides, sulfides, and carbides. If any of them were found to be especially stable, it could become a major carrier of Co at the expense of CoH.
Appendix B: Thermochemical data of TixCy clusters
For each cluster TixCy, we compute a partition function Z (Kardar 2007) that takes into account electronic (e), translational (t), rotational (r), and vibrational (v) contributions (Ochterski 2000), where N is the number of particles, V the volume, T the absolute temperature (i.e., we work in the canonical ensemble), the different terms U are the internal energy contributions as labelled above, and k is the Boltzmann constant. The main thermodynamic quantities per mole, the enthalpy (H), the entropy (S), and the heat capacity at constant pressure (CP), are derived from Z, where R is the ideal gas constant and P is the pressure. Here we are only interested in values for not too low temperatures, T ≥ 50 K. Accordingly, we write the contributions in the standard form, where h is the Planck constant (ħ = h/2π), M the total mass, m the spin multiplicity of the electronic state (see Table B.1), and w_i the vibrational frequency of mode i (the product is taken over all modes i with positive frequencies). For linear clusters, Θ = h^2/(8π^2 k I) (I is the moment of inertia) and σ = 1 or 2 depending on whether the cluster is heteronuclear or homonuclear. The only linear molecule is TiC, in which case σ = 1 since it is heteronuclear. For non-linear clusters (all TixCy clusters except TiC), Θ = Θx Θy Θz (x, y, and z are the principal axes of the moment of inertia tensor) and σ is the order of the rotational subgroup of the point group associated with the cluster (see Table B.1). For each cluster, self-consistent many-body wavefunctions Ψ and their associated ground-state variational total energies U0 for optimized geometrical configurations have been obtained from ab initio Density Functional Theory calculations (Hohenberg & Kohn 1964; Kohn & Sham 1965). The optimized geometries of the clusters are given in Table B.2. For the sake of efficiency and accuracy we apply two different strategies. For small TixCy clusters, with x + y ≤ 10, we use an all-electron localized basis formed with linear combinations of Gaussians (cc-pVTZ; Frisch et al. 2009; Dunning 1989) and a chemistry model based on the hybrid exchange and correlation functional B3LYP (Becke 1988).
For large TixCy clusters, with x + y > 10, the system is big enough to require the use of pseudo-potentials for Ti, and we favor the use of an extended basis formed with linear combinations of plane waves (Giannozzi et al. 2009) and a chemistry model based on a generalized-gradient approximation for exchange and correlation (Perdew et al. 1996), with a plane-wave cutoff Ec = 490 eV and the Γ point. Both approaches provide a reasonable representation of the equilibrium geometries, but their combined use yields more accurate values of the enthalpy of atomization for all the clusters studied here. In order to work with a minimal set of geometrical parameters, symmetrized models have been preferred whenever possible. Finally, the electric dipole moment p has been obtained from p = <Ψ|r|Ψ> (Snyder 1974). Since this is the expectation value of a one-electron operator, its value should not depend critically on the choice of the exchange-correlation functional, although it should be noted that in practice one cannot expect a precision better than ≈10% in the computed values, because the contribution of the tails of the wave functions requires the use of large basis sets with diffuse functions. The calculated enthalpies of atomization agree well with literature values, either experimental or theoretical, when these are available (see Table B.1). The thermochemical properties of all TixCy clusters, calculated at 1 bar and as a function of temperature, are given in Tables B.3-B.25.
Table B.1 (notes, partial): (Goos, Burcat, & Ruscic). g Experimental value from Sevy et al. (2018). h Experimental value from Stearns & Kohl (1974). i DFT calculation by Muñoz et al. (1999).
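For reference, the short Python sketch below evaluates the harmonic-oscillator/rigid-rotor heat capacity Cp(T) of a non-linear cluster from its vibrational frequencies, i.e., one of the standard statistical-mechanics expressions alluded to in Appendix B. It is our own illustrative sketch, not the authors' code, and the frequencies in the example are made up (they are not TiC cluster data).

import math

R = 8.31446              # ideal gas constant [J mol^-1 K^-1]
H_PLANCK = 6.62607e-34   # Planck constant [J s]
K_BOLTZ = 1.380649e-23   # Boltzmann constant [J K^-1]
C_LIGHT = 2.99792458e10  # speed of light [cm s^-1]

def cp_rrho_nonlinear(frequencies_cm1, T):
    """Cp(T) = translation (5/2 R) + rotation (3/2 R, non-linear molecule)
               + harmonic vibrational contribution summed over all modes."""
    cp = 2.5 * R + 1.5 * R
    for w in frequencies_cm1:
        x = H_PLANCK * C_LIGHT * w / (K_BOLTZ * T)   # h*c*omega / (k*T)
        ex = math.exp(x)
        cp += R * x * x * ex / (ex - 1.0) ** 2       # Einstein function per mode
    return cp

# Example with made-up frequencies (purely illustrative):
print(cp_rrho_nonlinear([250.0, 480.0, 900.0], T=1500.0))  # J mol^-1 K^-1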
Residual stress effects during additive manufacturing of reinforced thin nickel–chromium plates
Additive manufacturing (AM) is a powerful technique for producing metallic components with complex geometry relatively quickly, cheaply and directly from digital representations; however, residual stresses induced during manufacturing can result in distortions of components and reductions in mechanical performance, especially in parts that lack rotational symmetry and/or have cross sections with large aspect ratios. Geometrically reinforced thin plates have been built in nickel–chromium alloy using laser-powder bed fusion (L-PBF) and their shapes measured using stereoscopic digital image correlation before and after release from the baseplate of the AM machine. The results show that residual stresses cause potentially severe out-of-plane deformation that can be alleviated either by an enveloping support structure, which increased the build time substantially, was difficult to remove and wasted material, or by buttress supports to the reinforced edges of the thin plate. The buttresses were quick to build and remove and minimised waste, but needed careful design. Plates built in a landscape orientation required out-of-plane buttresses, while those built in a portrait orientation required both in-plane and out-of-plane buttresses. In both cases, out-of-plane deformation increased on release from the baseplate, but this was mitigated by incremental release, which resulted in out-of-plane deformations of less than 5% of the in-plane dimensions.
Introduction
Additive manufacturing (AM) is a modern technique for producing engineering components in a variety of materials including metals, plastics and composites. Many types of additive manufacturing are available; however, their common feature is the creation of parts in a layer-by-layer, or slice-by-slice, process based on a digital representation of the engineering component or part. A key advantage of additive manufacturing is the capability to produce a part with a complex geometry without part-specific tooling, such as the moulds and dies needed in casting and forging, and without wasting large amounts of material as in machining or subtractive manufacturing. However, stresses induced during additive manufacturing can result in distortion and ultimately reduce the mechanical performance of parts. These stresses, known as residual stresses, have been the subject of investigations since additive manufacturing was first introduced for manufacturing prototypes (for example, Curtis et al. [1]). For many parts with some level of rotational symmetry, the distribution of residual stresses will also possess a degree of symmetry that will prevent significant distortion of the part, although it may still influence mechanical properties such as strength and failure. Parts with substantial cross sections may also not exhibit significant shape distortion due to the stiffness of the part containing the residual stresses. However, parts that lack rotational symmetry and/or have cross-sections with large aspect ratios (i.e. in-plane to out-of-plane dimensions) are likely to be particularly susceptible to distortion caused by the residual stresses generated during additive manufacturing. One class of such parts has been investigated here, namely thin rectangular plates with geometrically reinforced edges made using a metallic alloy with a high resistance to temperature.
This geometry is of interest because it is representative of panels used to contain plasma in fusion reactors and of the skin of hypersonic flight vehicles [2][3][4]. The number of devices manufactured for both of these applications is likely to be small and hence additive manufacturing is a potentially attractive option for producing parts and provides the motivation for this study. The influence of residual stresses may be thought of as occurring at three different length scales in metals [5]. At the macroscale, non-uniform plastic deformation sets up differential strains within a component that create macrostresses that act on the scale of the geometry of the component to cause global distortion. At the microscale, local microstructural effects and phase transformations generate micro-stresses which can influence the material properties of the component. And at the atomic scale, residual stresses can be induced by heterogeneous behaviour [6,7]. Macroscale residual stresses have been the primary focus of attention in additive manufacturing due to their tendency to cause part distortions and the possibility of alleviating their effects through changes in the process parameters. It is difficult to measure residual stresses in metals directly; however, X-ray diffraction [8,9] and neutron diffraction [10] have been used to evaluate residual strains in additively manufactured parts following completion of the build. Magana-Carranza et al. [11,12], who also reviewed residual stresses induced by additive manufacturing, have used a force transducer device incorporated into an AM machine to measure forces induced during the build process in order to enhance understanding of the development of macroscale residual stresses. They studied the laser-powder bed fusion (L-PBF) process [13,14] and concluded that laser scanning strategies with shorter vectors tended to induce higher levels of force which were sensitive to the laser power and point distance when the energy density delivered to the part was below a critical value. These findings resolved some apparently contradictory evidence from simulations and post-build measurements, thus providing some confidence to attempt to build thin plates which are susceptible to deformation due to residual stress even when built using subtractive manufacturing [8]. This confidence was perhaps misplaced since many attempts were required before an acceptable part was built, as outlined below. Nevertheless, the study has resulted in a series of conclusions that should be relevant when building this type of part using additive manufacturing and laser-powder bed fusion in particular.
Geometrically reinforced plate
The specific motivation of the study was to build, using additive manufacturing, thin rectangular plates with reinforced edges that could be used in a study of their response to thermo-acoustic excitation and compare the results to plates of identical geometry subtractively machined from a thick plate stock [4]. As mentioned above, the choice of a reinforced thin-plate geometry was motivated by applications in fusion powerplants and hypersonic flight vehicles. The details of the geometry of the reinforced plate are shown in Fig. 1 and consist of a 1-mm-thick flat plate with in-plane dimensions of 130 mm and 230 mm surrounded by a reinforcing edge, or frame, of 10 mm × 5 mm rectangular cross section.
The frame contained a number of holes that were designed for use in the thermo-acoustic excitation experiments and also had the specification of the plate embossed on the top left corner as shown in Fig. 1. Note the presence of supporting structures around the reinforced plate in Fig. 1. Details of these supports will be provided in subsequent sections.
[Fig. 1 caption: Geometry of edge-reinforced thin plates showing the design of the out-of-plane buttresses that were attached in latter builds to the edges of the geometric reinforcement or frame. All dimensions in mm.]
All of the reinforced plates were built using a nickel-chromium alloy (Inconel 625) in the form of gas-atomised powder (Carpenter Additive, UK) which is widely used for additive manufacturing employing L-PBF. A material datasheet for this powder is available from the supplier [15].
L-PBF
L-PBF uses a laser beam to selectively melt sections of a thin layer of metal powder on a flatbed in an inert atmosphere [9]. A three-dimensional part is built layer-by-layer with a thin layer of powder spread over the previous layer at each stage. The initial layer is built on the baseplate of the machine which is typically pre-heated to reduce residual stresses generated by differential strains in the baseplate and lower layers of the part. Residual stresses are generated during the L-PBF process as a result of the thermal gradient mechanism that occurs around the spot where the laser beam is incident on the part and causes local heating or melting of the upper layer(s) of the part followed by rapid cooling with a steep temperature gradient when the laser beam moves away [10]. Manufacturing of the geometrically reinforced plates was performed using an L-PBF machine (Renishaw AM250, UK) with a maximum laser power of 1 kW. In all builds the laser power was 400 W, the exposure time 40 μs, the point distance 70 μm, and the layer thickness 60 μm. These values for the processing parameters were chosen based on prior experience [7] and to achieve a density in the parts greater than 99% of the cast material. The density was measured in ten cubes with side length of 10 mm using Archimedes' principle and was found to have a mean of 99.64% with a standard deviation of 0.42% relative density. Initially two half-size (65 mm × 115 mm) test plates, with the same thicknesses as shown in Fig. 1, were built to establish whether to use the Meander or Stripes scan strategy (shown in Fig. 2) for these thin section parts. The build times for these plates were 5 h 8 min and 4 h 32 min, respectively, using the Meander and Stripes scan strategies. The resultant out-of-plane measurements across a diagonal for each of the two test plates after release from the baseplate are shown in Fig. 3, with details of how the measurements were made presented in the next section. It can be seen in Fig. 3 that the Stripes scan strategy produced less curvature by a factor of 2, and hence, it was used in all subsequent builds in this study. This result is consistent with the results from our earlier study from which we concluded that scan strategies with shorter scan vectors generate lower residual stresses [7]. The reinforced plates were built on the standard baseplate for the machine. Previous studies suggest that a pre-heated substrate, to 160 °C [16] or to 180 °C [17], reduced the temperature gradients within the parts and decreased residual stresses; therefore, the baseplate was heated to 170 °C, which was the maximum temperature permitted by the machine's control system.
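For readers who want a feel for the process window implied by the parameters quoted above, the short sketch below converts them into an effective scan speed and a volumetric energy density. The hatch spacing is not reported in this section, so the value used here is an assumed placeholder and the resulting number is illustrative only.

```python
# L-PBF parameters reported above, plus one assumed value.
laser_power     = 400.0    # W
exposure_time   = 40e-6    # s per point
point_distance  = 70e-6    # m between exposure points
layer_thickness = 60e-6    # m
hatch_spacing   = 90e-6    # m (assumed for illustration; not reported here)

# Pulsed point exposure behaves like a continuous scan at this effective speed.
effective_speed = point_distance / exposure_time                       # m/s
energy_density = laser_power / (effective_speed * hatch_spacing * layer_thickness)  # J/m^3

print(f"effective scan speed: {effective_speed:.2f} m/s")
print(f"volumetric energy density: {energy_density / 1e9:.1f} J/mm^3")
```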
The dimensions of the baseplate and the reinforced plates required the latter to be orientated across the diagonal of the former when built in landscape orientation, i.e. with the long side horizontal, and the same arrangement across the diagonal was maintained when the parts were built in portrait orientation, i.e. with the short side horizontal. A 3-mm high support structure consisting of a series of tapered rods as shown in Fig. 1 was built to separate the part from the baseplate and allow it to be detached after the build process. On completion of a build cycle, the baseplate was removed from the machine and the shape of the reinforced plate measured using stereoscopic digital image correlation, as described below, before progressively removing the reinforced plate from the baseplate by cutting through the supporting rods. In some cases, this removal proceeded in incremental stages with measurements made of the change in shape for each stage using the digital image correlation system.
Stereoscopic digital image correlation
The shapes of the plates were measured using a stereoscopic digital image correlation (DIC) system (Q400, Dantec Dynamics GmbH, Ulm, Germany). This system is capable of providing data on the shape of the plate relative to a plane or the displacements of the surface of the plate relative to an initial state [18]. In this study, it was used in both modes with the former datasets referred to as shape or z-measurements while the latter as out-of-plane displacements. The system was set up, as shown in the plan view in Fig. 4, to achieve a spatial resolution of 20 pixels/mm using a pair of identical CCD cameras with 1292 × 964 pixels and 50 mm lenses. At the end of each build process, the thin plate was painted black and then sprayed with white paint to form a speckle pattern to support the image correlation process which was performed using the software provided with the system (Istra 4D, Dantec Dynamics GmbH). A typical speckle pattern is shown inset in Fig. 4. The image correlation was performed using square facets or sub-images with sides of length 29 pixels whose centres had a pitch of 15 pixels. Before each set of measurements was acquired, the system was calibrated using a calibration target (Al-08-BMB-9 X 9) supplied by the instrument manufacturer. In addition, the measurement uncertainty was estimated by taking an initial image of the plate before removal from the baseplate and then replacing it at the same position with the aid of positioning guides, capturing another image and correlating the two images. A typical result for out-of-plane displacement resulting from the correlation of two such pairs of images is shown as an inset in Fig. 4. This process was repeated six times for each plate. The resultant correlation maps provide an estimate of the measurement uncertainty which typically had a mean of zero and standard deviation of 0.0014 mm, i.e. there was no bias and a very low value of random noise. On completion of a build process and after spray-painting the reinforced plate, the baseplate from the AM machine with the reinforced plate attached was placed on an optical table for the DIC measurements. The accuracy with which the baseplate could be relocated was within the measurement uncertainty of the digital image correlation system, as explained above.
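As a rough illustration of how the reported correlation settings translate into physical measurement points, and how the remove-and-replace uncertainty check can be summarised, the sketch below uses synthetic displacement maps whose statistics mirror the quoted values (zero mean, 0.0014 mm standard deviation); the array sizes and random data are arbitrary placeholders, not Istra 4D output.

```python
import numpy as np

# Facet geometry implied by the reported settings.
pixels_per_mm = 20.0
facet_size_px, facet_pitch_px = 29, 15
print(f"facet size  ~ {facet_size_px / pixels_per_mm:.2f} mm")
print(f"facet pitch ~ {facet_pitch_px / pixels_per_mm:.2f} mm between measurement points")

# Synthetic out-of-plane displacement maps (mm) standing in for the six repeated
# remove-and-replace correlations; real maps would come from the DIC software.
rng = np.random.default_rng(1)
maps = rng.normal(loc=0.0, scale=0.0014, size=(6, 60, 85))

print(f"bias (mean)        = {maps.mean():.4f} mm")
print(f"random noise (std) = {maps.std(ddof=1):.4f} mm")
```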
Pairs of stereoscopic images were recorded for the reinforced plate attached to the baseplate and subsequently as the supports were released, which allowed the initial and final shapes of the reinforced plate to be evaluated as well as the evolution of out-of-plane displacements as it was released.
Results
After the reduced-scale test plates had been used to confirm that the Stripes scan strategy produced lower levels of deformation, an attempt was made to build a full-scale plate as described by the CAD drawing in Fig. 1, i.e. in landscape orientation with the long sides horizontal. This was not successful because the plate developed an S-shape profile as shown in the photograph of the completed build in Fig. 5. The shape of this plate was measured using the DIC system following its complete removal from the baseplate and the results are shown in Fig. 6. The lower three-quarters of the plate developed an approximately elliptical dome; however, at a height of about 105 mm a sudden change in behaviour occurred with a two-dimensional out-of-plane curve appearing. A repeat of the process used to build the plate shown in Fig. 5 produced an identical plate. When two plates were built without the edge-reinforcement of the frame, a similar result was produced but with a delamination or split appearing horizontally between build layers at the height at which the out-of-plane curvature commenced, i.e. about 100 mm. Hence, it was decided to provide an additional support structure around the reinforced plate to stiffen and strengthen it during the AM build process. The design and completed build for the additional support structure are shown in Fig. 7. The completed build was removed from the baseplate and then the additional support structure was removed. The latter was a substantial task which was undertaken using an electric discharge machine; however, the resultant thin plate had a flatness of only 5.05 mm and its shape is shown in Fig. 6. Metrologically, flatness is defined as the minimum distance between two planes within which all the points on a surface lie [19]. The time required to remove the additional support structure shown in Fig. 7 and the material wasted probably negated the advantage gained from using additive manufacturing; hence, an alternative strategy was sought. The additional support structure was reduced to buttresses supporting the vertical portion of the edge-reinforcements. These buttresses were triangular in their plane perpendicular to the plane of the reinforced plate, with their outside edge subtending an angle of 8.5° to the vertical and their inside edge attached to the edge reinforcement via small rods identical to those used to attach the structure to the baseplate, as shown in Fig. 1. Two plates were built separately with this geometry and the buttresses were removed, before each plate was removed from the baseplate progressively by cutting the rod supports in increments alternating between the left and right side. For one plate the increments were 15 mm and for the other plate the increments were 5 mm. After each incremental cut, the baseplate was relocated on the optical table and a pair of stereoscopic images recorded, which allowed the displacement of the thin plate to be monitored as it was released from the baseplate. The final shapes for these plates are shown in Fig. 8, which shows that the plate released in smaller increments had a slightly lower flatness of 5.5 mm; hence, the out-of-plane displacements during release of this plate are shown in Fig. 9.
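Since flatness is central to the comparisons that follow, the sketch below shows one way to estimate it from a measured point cloud: fit a least-squares plane and take the spread of signed distances to it. This is an approximation of the minimum-zone definition quoted above (a strict evaluation would additionally optimise the orientation of the bounding planes), and the dome-shaped surface used here is synthetic rather than DIC data.

```python
import numpy as np

def flatness(points):
    """Approximate flatness: distance between two parallel planes bounding the surface,
    using the least-squares plane as the reference orientation."""
    pts = np.asarray(points, float)
    centred = pts - pts.mean(axis=0)
    # Normal of the best-fit plane = right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]
    d = centred @ normal                  # signed distances to the best-fit plane
    return d.max() - d.min()

# Synthetic dome-shaped plate surface for illustration (not measured data).
x, y = np.meshgrid(np.linspace(0, 230, 50), np.linspace(0, 130, 30))
z = 2.5 * np.sin(np.pi * x / 230) * np.sin(np.pi * y / 130)   # mm
pts = np.column_stack([x.ravel(), y.ravel(), z.ravel()])
print(f"flatness ~ {flatness(pts):.2f} mm")
```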
The use of buttresses to provide support to the vertical portion of the edge-reinforcement or frame resulted in successful build processes but a flatness that was about 10% worse than using the additional support structure shown in Fig. 7. Therefore, it was decided to investigate building the plates in a portrait orientation, i.e. with their short side horizontal, and with buttresses of the same triangle ratio providing out-of-plane support during the build process. This strategy was unsuccessful and resulted in a delamination appearing between build layers at a height of about 230 mm, or about 20 mm from the end of the build, as shown in Fig. 10 (left). However, the addition of in-plane buttresses solved this problem, resulting in a successful build, also shown in Fig. 10 (centre and right). This successful build was released from the baseplate in alternate increments of 5 mm, which resulted in a flatness of 4.6 mm, as shown in Fig. 11. The out-of-plane displacements during the incremental release from the baseplate are shown in Fig. 12.
Discussion
The formation of residual stresses and the development of the shape of an edge-reinforced plate are history-dependent and time-varying processes in which the equilibrium of forces in the plate changes during the build process due to the addition of mass to the plate and due to energy transfers from the laser and to the surroundings. The equilibrium state also changes during the removal of the built part from the baseplate and when any additional supporting structure is removed. The sequence of out-of-plane displacement fields shown in Figs. 9 and 12 illustrates the changing state of the equilibrium of forces during release from the baseplate, which results in changes to the shape of the part.
[Fig. 5 caption: Example of a build that failed for a thin plate without the additional support shown in Fig. 1; shape data are given in the top of Fig. 6.]
[Fig. 6 caption: Shape of the front face of the edge-reinforced landscape plates built without any supporting structure (top), as shown in Fig. 5, and using an additional support system (bottom), as shown in Fig. 7, following removal from the baseplate. The maximum deviation from a flat plane is 5.05 mm. For the plate built with an additional support system, the maximum deviation from a flat plane is 5.05 mm, which does not occur along the diagonal plotted in the graphs on the right.]
However, it is very difficult to monitor directly the changing balance of forces during the build in the chamber of the AM machine. In previous work, the authors have used a force transducer device fitted into the baseplate of an AM machine to monitor the time-varying response of the forces on the baseplate during the build process [6,7]. It was found that the forces induced in the early stages of the build represented about 80% of the maximum force induced and relatively low levels of relaxation occurred during the scanning by the laser, which implies that significant residual stress was locked into the part. In later stages of the build, the magnitude of forces induced and relaxed as each layer was added were approximately equal, which implies that the level of residual stress locked into the upper layers of the part was substantially less than for the lower layers adjacent to the baseplate. These observations were made on parts that were square in the horizontal plane scanned by the laser, i.e.
had an aspect ratio of one compared to 50 for the reinforced edge of the thin plates, and had a height of the same order of magnitude as the reinforced edge of the plates in this study. Nevertheless, they help to explain both the substantial changes in shape that occurred in this research when the reinforced plates were released from the baseplate and the importance of the rate and pattern of release. In the early stages of the build process, significant residual stresses are induced by the differential thermal expansion both within the part and between the part and the baseplate, during which the mechanical constraint from the baseplate is significant. Previous studies have found compressive residual forces in the centre of parts and tensile forces at the edges [4,6], and this distribution of forces at the interface of the part with the baseplate is probably responsible for the distribution of out-of-plane displacements along the bottom horizontal edge of the reinforced plates following release from the baseplate, as shown in Figs. 9 and 12. It is important to consider that the shapes shown in Figs. 5 and 10 were developed layer-by-layer as the part was built. As the part height is built up, the influence of the baseplate constraint is likely to be diminished, while the reinforcement of the part by its frame may become more important. The influence of the reinforced edge was investigated by building two thin plates (230 × 130 mm) without the reinforced edge and with plate thicknesses of 1 mm and 1.2 mm. A delamination appeared in the thinner of the two plates at 93 mm above the baseplate and the thicker plate formed an out-of-plane S-shaped curve similar to the edge-reinforced plate in Fig. 5 but starting 108 mm above the baseplate compared to 100 mm when the edge-reinforcement was present. The development of a delamination between layers in the 1 mm thick plate when the edge reinforcement was absent suggests that the reinforcement was providing additional strength to the plate during the build process. Similar behaviour was observed in the plates built in portrait orientation without in-plane buttresses when delamination occurred, whereas for those plates with in-plane buttresses there was no delamination, as shown in Fig. 10. Previous work has concluded that residual stresses tend to be compressive in the centre of a layer and tensile at the edges [4,6], which will cause bending of a layer into a catenary with the ends of the layer bending upwards, towards the laser. It is difficult to identify a trigger for the delaminations at a specific height in the build; however, it could be related to the rate of cooling, which will change as the mass of the part increases and alters both the heat capacity and conduction paths of the part. It might also be related to a misalignment of layers causing a new layer to be incompletely bonded to the previous one. It can be seen in Fig. 10 that there are small horizontal discontinuities in the surface of the plate built without in-plane buttresses at heights of 113 and 225 mm above the baseplate. Data on the shape of the plate during the build process are not available; however, data at the completion of the build and prior to removing the edge-reinforced plate from the baseplate are shown as the first step in Figs. 9 and 12 as part of the sequence of changes on release from the baseplate, and on a large scale in Fig. 11 for the plate built in portrait orientation.
[Fig. 8 caption: Shape measured using digital image correlation in edge-reinforced landscape plates built with additional buttresses (shown in Fig. 1) and removed from the baseplate in alternating 5 mm (top color map) and 15 mm (bottom color map) increments, together with z-coordinate profiles (bottom) across diagonals (flatness was 5.5 and 5.69 mm, respectively).]
Hence, it can be postulated that these out-of-plane displacements move the top layer of the plate horizontally away from the path of the laser, which means the next layer is not built directly aligned with previous layers but effectively over-hangs the previous layer, resulting in an incomplete bonding of the layers that allows delamination to occur when cooling of the subsequent layers induces deformation into a catenary. In the absence of sufficient reinforcement, the stresses associated with the deformation into a catenary cause the plate to delaminate.
[Fig. 9 caption: Out-of-plane displacements measured at each increment of 5 mm release from the baseplate (odd numbers on right) for the edge-reinforced landscape plate with additional buttresses (see geometry in Fig. 8). The displacements are relative to the initial shape of the plate on the baseplate at the end of the build cycle and measured on the front face (step 0).]
[Fig. 10 caption: Completed builds of edge-reinforced plates built in portrait orientation without in-plane buttresses (left) leading to delamination and with in-plane buttresses (center and right) resulting in a successful build.]
For the plates built in the landscape orientation, the reinforced edge is sufficient to resist delamination because only the plate without the edge reinforcement was seen to develop a delamination. However, for the plates built in the portrait orientation, both in-plane and out-of-plane buttresses were required to prevent delamination as shown in Fig. 10. The size of the AM machine's build platform prevented in-plane buttresses being used in the landscape orientation. When a delamination does not initiate, it would seem that the top edge of the partially built plate tends to translate out-of-plane due to the differential thermal strains in the plate. This might be a result of the additional constraint of the reinforcement inducing out-of-plane rather than in-plane deflection. The resultant misalignment of the top layer and the next layer to be built results in a slight offset creating the beginning of the curved shape seen in Fig. 5. At a later stage in the build the reinforced plate starts to deflect in the opposite out-of-plane direction, causing the just-built layer to realign under the path of the laser so that the S-shaped curve is formed. The reason for the realignment is difficult to surmise. These out-of-plane displacements during the build were prevented by including additional out-of-plane reinforcement, initially in the form of the total support structure shown in Fig. 7, and then using the buttresses shown in Figs. 1 and 10. The flatness of the edge-reinforced plates built in both landscape and portrait orientation with out-of-plane buttresses is about 2 mm when they are attached to the baseplate of the AM machine, as shown in step 0 in Figs. 9 and 12, respectively. However, significant residual forces are reacted through the baseplate, which are released when the reinforced plate is removed from the baseplate, inducing significant deformation of the plates and approximately doubling their flatness. There is also an accompanying change in the shape as shown in Figs. 9 and 12. For the plate built in the landscape orientation (Fig.
9), the out-of-plane displacements during the incremental release from the baseplate are substantially larger in the bottom half of the plate than in the top half. This is unsurprising because the effect of the residual forces reacted through the baseplate would be expected to be greater adjacent to the baseplate and to dissipate with distance from the baseplate. Hence, the release of these forces as the connections to the baseplate are cut would be expected to cause greater deformation adjacent to the cut edge. The same phenomenon is present in the reinforced plate built in portrait orientation (Fig. 12); however, the magnitude of the difference between the displacements at the top and bottom of the plate is much smaller, probably due to the shorter horizontal dimension and the closer proximity of the vertical portions of the edge-reinforcement.
[Fig. 11 caption: Shapes before (left) and after (right) release from the baseplate for an edge-reinforced plate (130 × 230 × 1 mm) built in portrait orientation with in-plane and out-of-plane buttresses (see Fig. 1) and with alternating incremental release from the baseplate.]
Based on prior work [6], it would be expected that the reinforced plate exerts compressive forces vertically on the baseplate in the centre of its bottom edge and tensile forces at the ends of its bottom edge. Hence, from Newton's third law, the baseplate exerts tensile forces at the centre of the edge and compressive forces at the ends. The release of these forces would tend to make the bottom edge of the reinforced plate bend upwards in the centre to form a catenary orientated like an arch; however, the in-plane stiffness of the thin plate is very high compared to its out-of-plane stiffness due to its geometry and, as a consequence, the plate deforms out-of-plane, forming a domed shape. The deformation is more severe in the bottom half of the plate than in the top half (see Figs. 9 and 12) due to the proximity to the baseplate and the larger influence of the residual forces reacted through the baseplate as described above. For the plate built in the landscape orientation, the end result is a domed shape that is broader at the bottom of the plate than the top, whereas for the plate built in the portrait orientation, the result is a change in shape from a central negative z-direction dimple with positive z-direction dimples above and below (see Fig. 11) to a single negative direction dimple with a single positive direction one above. In the portrait orientation, the release of constraining forces from the baseplate removes the lower dimple, deepens the central one and relaxes the top one. We also investigated whether the resulting shapes in the portrait and landscape orientation builds were connected to buckling characteristics of the plate geometries. However, it is important to note that both the development of residual stresses during the build as material is being added and the development of the final plate shape as baseplate supports are being severed are gradual processes. Thus, a global buckling response after the event (i.e. after build and after support release) seems unlikely. However, the final shape of the plates exhibited snap-back behaviour in most cases, i.e. they were bi-stable structures with two mechanically stable configurations. The smaller area and aspect ratio of the layers built in the portrait orientation compared to the landscape orientation will have resulted in the laser revisiting each region of the part more quickly.
This will have increased the rate of energy input per unit volume. However, previous work has shown that, above a minimum threshold, the rate of energy input has an insignificant effect on the level of residual stresses.
[Fig. 12 caption: Out-of-plane displacements measured at each increment of 5 mm release from the baseplate (odd numbers on right) for the edge-reinforced portrait plate with in-plane and out-of-plane additional buttresses (see geometry in Fig. 11). The displacements are relative to the initial shape of the plate on the baseplate at the end of the build cycle and measured on the front face (step 0).]
Hence, it seems likely that the differences in shape between the plates built in the landscape and portrait orientations result from geometric effects associated with constraining the long and short edges, respectively, via their attachment to the baseplate, as well as from the differential thermal strains between layers in the longitudinal or transverse directions, respectively. The larger number of layers required to build the reinforced plate in the portrait orientation appears to make the plate more susceptible to delamination, as demonstrated by the requirement to use in-plane buttresses to prevent delamination, which were not necessary for the plates built in the landscape orientation. The smaller aspect ratio (i.e. 130:1 compared to 230:1) of the layers built in portrait orientation led to smaller out-of-plane displacements and flatter plates. It is difficult to predict from the data acquired whether the use of buttresses would be effective for other aspect ratios and in other materials, though it seems likely. However, the data presented in Figs. 8, 9, 11 and 12 should be invaluable in future work to develop computational models of the build process and the associated development of residual stresses.
Conclusions
Geometrically reinforced thin plates have been built using laser-powder bed fusion (L-PBF) and their shapes measured using stereoscopic digital image correlation before and after release from the baseplate of the AM machine. The results were used to draw the following conclusions:
Perspectives of socioeconomically disadvantaged parents on their children's coping during COVID-19: Implications for practice
Abstract
Disruptions caused by COVID-19 have the potential to create long-term negative impacts on children's well-being and development, especially among socioeconomically disadvantaged children. However, we know little about how socioeconomically disadvantaged families are coping with the pandemic, nor the types of support needed. This study presents qualitative analysis of responses to an open-ended question asking parents how children are coping with the restrictions associated with COVID-19, to identify areas in which these cohorts can be supported. Four main themes were identified: health concerns, schooling difficulties, social isolation and adjustment to restrictions. Health concerns included exacerbation of pre-existing health conditions, fear about the virus, difficulty getting children to understand the pandemic and increased sedentary behaviour. Schooling difficulties referred to the challenges of home schooling, which were behavioural (e.g. difficulty concentrating) and logistical (e.g. technology). Social isolation, expressed as missing friends, family and/or institutions, was common. Finally, parents expressed that children experienced both positive adjustments to restrictions, such as spending more time with family, and negative adjustments such as increased screen time. Many responses from parents touched on topics across multiple themes, indicating a need for comprehensive, holistic assessment of children's and families' needs in the provision of support services. The content of the themes supports calls for resources to support children and families including increased financial and practical accessibility of social services, physical health and exercise support, mental health support and COVID-19 communication guides.
INTRODUCTION
In addition to the health impacts of COVID-19 infection for children, several concerns have been raised about the indirect impacts of the pandemic on children around the world, particularly those impacts arising from school closures and social distancing requirements (Toros, 2021). As well as their primary function of education provision, schools address many of children's basic needs, including food (Dunn et al., 2020), physical activity (Guan et al., 2020) and health and mental health services (Golberstein et al., 2020).
In addition to school-based services, COVID-19 and its restrictions have reduced accessibility of formal and informal community-based supports for both children and parents. Reduced access to core services for families and children as a result of COVID-19 and its related restrictions places increased pressure on parents as they guide themselves and their children through an unprecedented global health crisis, all the while seeking to maintain family functioning and meet children's social and emotional development needs. The stakes are high for parents, as mid-pandemic events and their immediate effects have potential long-term consequences for their children's development. Even short-term interruption of access to essential services can lead to long-term negative impacts on educational attainment, physical health and psychosocial functioning (Dunn et al., 2020; Rundle et al., 2020). The immediate effects of the pandemic are already playing out: 34.7% of parents of children aged 0-12 in a US study reported that their children's behaviour had changed during the first few weeks of the pandemic, including increased sadness, depression and loneliness (Lee et al., 2021), 41.5% of parents of children aged 7-13 in a Turkish study reported that their children had gained weight (also in the first few weeks of the pandemic) (Adıbelli & Sümen, 2020), and several studies across the world have found that the majority of parents report that their children have had increased screen time and decreased physical activity (Adıbelli & Sümen, 2020; Lee et al., 2021). The impacts of COVID-19 restrictions are expected to be more pronounced among lower socioeconomic status children (Cowie & Myers, 2021). Lower socioeconomic status children are more likely to need support to access and use devices and the internet to facilitate online schooling, and are less likely to have appropriate study spaces in their homes, which may contribute to widening gaps in literacy and numeracy between socioeconomically advantaged and disadvantaged children (Evans & English, 2002; Van Lancker & Parolin, 2020). In addition, in the absence of the structured mealtimes and school nutrition programmes, lower socioeconomic status children are at higher risk of weight gain than their advantaged counterparts (Rundle et al., 2020). The risk of weight gain is further compounded by the lower physical activity arising from shelter-in-place instructions, increased screen time and lack of access to playgrounds and schoolyards (Guan et al., 2020). This study explores the perceptions of a sample of socioeconomically disadvantaged parents of their children's coping during the COVID-19 pandemic. A 'parent' in this context refers to anybody with primary care of children under 18, including grandparent and other family member carers, foster carers, biological parents and other caregiving arrangements. The parents and their children were residing in Perth, Western Australia at the time of the study. During the study period, between May and July 2020, Australia's international border and Western Australia's state border were closed. Though COVID-19 daily cases never exceeded 400 (among a population of 2.6 million), restrictions were in place throughout the study period including purchase limits on grocery items, the closing of pubs, churches and gyms and the shifting of community services online.
These restrictions are relevant to the present study as, although necessary in the face of an unprecedented health crisis, they did limit the support available to both parents and children. While essential services such as emergency food and financial relief remained accessible in modified formats (e.g. delivery of food hampers rather than a publicly accessible food bank) and many non-essential services such as mental health and parenting services implemented virtual service delivery options, these options are not direct substitutes. As noted above, people experiencing socioeconomic disadvantage are less likely to have the privacy and space required for effective service delivery at home, nor adequate internet speed and bandwidth for regular virtual service delivery. In addition, less tangible informal supports that arise from social interaction with peers and workers at services are difficult to replicate in the absence of in-person interaction. Accordingly, families navigated an uncertain and unexpected situation without access to the services that they relied on in 'normal' circumstances. In addition to restrictions on community and social services, schools were fully closed for a brief period (3 weeks), followed by 2 weeks of optional in-person attendance or online schooling at the parent's preference, then full in-person attendance was required. Mask wearing was not mandated or recommended by public health officials for any age groups in 2020 in Western Australia. The decision to close schools was unprecedented in recent Australian history, and meant that children had to rapidly adapt to significant disruptions to their routines, online learning and loss of play and social opportunities. Similarly, parents had to learn how to facilitate online schooling for their children, and for many this occurred at the same time as transitioning to working from home themselves. While schools were closed for a short period of time, the duration was unclear in the midst of the closures and, once they reopened, there was also uncertainty around whether closures would occur again. Australia is a wealthy country with quite strong social security, thus the experience of socioeconomic disadvantage is very different to that in countries that do not have such benefits. In addition, to help mitigate the financial impact of COVID-19 on people, the Australian Government introduced wage subsidies for those who were employed and supplements to welfare payments. In terms of per-capita expenditure and expenditure as a proportion of Gross Domestic Product, these were some of the most extensive economic support measures worldwide (International Monetary Fund [IMF], 2021). Accordingly, Australia may represent a 'mild' case of the possible impacts of COVID-19 on children due to the minimal initial spread of COVID-19 and the rapid implementation of strong economic relief for individuals, and the present study is generally most relevant to high-income countries. However, the impacts of the pandemic arising from the interruption to services and institutions (such as schools) and general instability and uncertainty are likely to be experienced, albeit to varying extents, by children across the world. It is important to note that not all of the impacts of the pandemic have been negative. Many families, in studies conducted across the world, reported in the early stages of the pandemic that they were able to spend more time with their children and enjoy more quality family time (Lateef et al., 2021;Lee et al., 2021). 
Despite this, large proportions of parents met criteria for major depression and anxiety (Cameron et al., 2020;Lee et al., 2021) and parenting stress was common, particularly associated with homeschooling (Lateef et al., 2021;Lee et al., 2021). Further, positive impacts of COVID-19 were more likely to be experienced by two-parent families whose employment and income was unaffected (Lateef et al., 2021). In turn, the negative effects of the pandemic on parents are more likely to be present among lower socioeconomic parents. Mothers with low socioeconomic status are more likely to be younger, unemployed, experience depression and experience high levels of parenting stress, and less likely to have social supports on which to lean in general, let alone during a pandemic (Menon et al., 2020). Further, parents' access to support services was also constrained during COVID-19: only 21.5% of parents of children aged 0-8 in an online, primarily Canadian sample presenting with clinically relevant depression or anxiety during the pandemic had accessed counselling in the month prior (Cameron et al., 2020). COVID-19 presents an array of contextual stressors that impede effective parenting for all parents, but especially those of lower socioeconomic status. In addition to increased parental risk factors and the practical issues around schooling and socialisation, socioeconomically disadvantaged families are more likely to rely on school-and community-based programmes to fulfil basic needs such as food and healthcare (Dunn et al., 2020;Golberstein et al., 2020). Economic stimulus measures during COVID-19 such as increases to welfare payments may alleviate the financial burden of meeting basic needs, however, food shortages, transport difficulties and nongovernment service closures are still likely to impede lower socioeconomic families' ability to meet their needs (Dunn et al., 2020;Services Australia, 2020). Further compounding the practical realities of parenting during COVID-19, lower socioeconomic status parents are subject to more risk factors and have fewer supports to facilitate effective parenting during COVID-19. However, lower socioeconomic status does not universally result in suboptimal parenting and poor children's outcomes. More responsive and stimulating parenting mitigates the impact of social risk factors on children's outcomes, and parents can be supported to develop such parenting skills and styles (Burchinal et al., 2006). Parental functioning is thought to be determined, in general terms, by the psychological resources of parents, characteristics of children and sources of stress and support (Belsky, 1984). In line with this, as a contextual stressor that further strains the resources available to parents at the same time that many external supports are inaccessible, the pandemic is likely to have negative impacts on parental functioning. Parental education (parental skill building programmes) and parental support (initiatives and interventions to address particular concerns of or for parents) can enhance parents' psychological resources and provide support for managing contextual stressors (Miller & Sambell, 2003). 
However, it is extremely important that parenting education and support does not blame or problematise parents, and instead empowers them to utilise their unique skills and manage the particular needs and experiences that affect their parenting, such as employment, financial issues, partner relationships and one's own health and mental health (Wade et al., 2022; Smith, 1997; Cottam & Espie, 2014). In this regard, few studies have examined how lower socioeconomic status children and families are coping with the pandemic, leaving gaps in our understanding of the types of support that are most needed and wanted by these families. Therefore, this study provides data on families' experiences of the pandemic, and thus contributes to the sparse evidence of children's responses during pandemics (Jiao et al., 2020; Lateef et al., 2021). Building this evidence base is crucial, both because of the long-term flow-on effects of COVID-19 and other pandemics on children's development, and because it is anticipated that outbreaks of novel viruses will become a persistent feature in our global future (Scudellari, 2020). Accordingly, identifying risks, needs and opportunities for parents navigating these circumstances is critical to minimise the negative impact on parents and children.
Study context
Data were collected as part of a longitudinal study on entrenched disadvantage in Perth, Western Australia. Potential participants were identified by partner non-government service delivery agencies as having two or more of the following 'eligibility criteria' for hardship and entrenched disadvantage: reliance on welfare payments, unstable housing, unemployment or underemployment, physical or mental disability or mental health issues, inadequate social support and low education. Interested participants were invited to a service convenient to them to complete a Baseline survey (after providing informed consent). Baseline surveys were completed between November 2018 and April 2019. The Baseline surveys revealed that the vast majority of participants were experiencing multiple disadvantages across the abovementioned domains (see Seivwright & Flatau, 2019 for further information about the full study sample). The second annual wave of survey data collection of the study was underway when COVID-19 and its associated restrictions came into play. In response to this, the research team added questions to the survey about the impacts of COVID-19 and re-engaged study participants that had already completed their second wave survey to complete the supplementary COVID-19 questions. Participants were provided with an AUD25 gift card or direct bank deposit to compensate them for their time completing the COVID-19 supplementary survey. Data were collected between May and July 2020. Ethical approval was obtained from The University of Western Australia Human Research Ethics Committee (RA/4/20/4793).
Sample
A total of 158 people completed the COVID-19 survey. Of these, 86 (54.4%) had one or more children under the age of 18 in their care (μ = 2.08, σ = 1.42, range = 1-8), and form the sample of parents in the current study. One parent's response was not categorised as it referred only to adult children. Table 1 provides the demographic characteristics of the sample of parents.
Instrument
Data were collected using a survey instrument on the Qualtrics software platform.
The survey comprised mostly quantitative questions about a range of factors in participants' lives, such as housing, employment, health, mental health and social connections. In addition to the quantitative questions, some open-ended questions were included in the survey instrument to capture selected subjective experiences in greater depth. All surveys were facilitated by an interviewer over the phone or via video call. Interviewers were instructed to type out open-ended responses verbatim. Given the unprecedented nature of the pandemic and the lack of clarity about how it was going to unfold, an exploratory qualitative approach was determined to be appropriate for exploring early impacts on children through the eyes of their parents. Accordingly, the data for this manuscript are participants' responses to the open-ended question: 'We are interested in how COVID-19 is affecting children. How are your children coping with the COVID-19 restrictions?'
[Table 1 fragment: 'Employed, but away from work' 4 (4.7); 'Unemployed and actively seeking work' 8 (9.3).]
Analysis
Each response was subject to line-by-line coding by one author (AS). The coding was inductive, such that each response was analysed line-by-line and labels (open codes) assigned that describe the theme observed. Codes were non-exclusive, as a participant's response could touch on multiple themes. The open codes derived from the line-by-line coding were then grouped thematically into higher level themes. The coding schema, comprising the open codes and their higher level themes, was then provided to another author (ZC) who coded each response according to the schema, noting any open codes that were missing from the schema. A high level of agreement (75.3%) between coders was achieved. ZC and AS then discussed their codes for each response and collaboratively decided upon the final schema (presented in Figure 1, below). All coding was undertaken in Excel.
RESULTS
A total of 26 open codes emerged from analysis of the open-ended responses. These were grouped into four themes: health concerns (k = 40), schooling difficulties (k = 20), social isolation (k = 34) and adjustment to restrictions (k = 67).
Health concerns
Several parents noted health concerns for their children arising from COVID-19 restrictions. For some (k = 3) these pertained to physical health; two parents noted physical health consequences of sedentary behaviour, reflected in comments such as 'Gaining weight because they are not going out doing anything. They are eating me out of house and home'. Related to food, another parent commented 'I had to feed her fast foods because the stores were getting empty of things'. Additionally, children's comorbidities were a concern for some (k = 3) parents. This meant taking extra precautions around social contact, and particularly around school: 'she has asthma so have to be extra careful', 'has a few disabilities… not comfortable going back to school due to health issues', and '… [Son] was off school (haemophiliac) since they started to pull kids out of school. He has a low immune system'. Concerns related directly to COVID-19 were quite common among children, as reported by their parents, presenting as difficulty understanding the virus (k = 13) and fear of the virus (k = 11). Parents reported that they had difficulty getting their children to understand the virus and, in particular, the reasons underpinning the restrictions on their activities.
This was attributed to age by the parents whose children were not yet school aged (k = 2) and had difficulty understanding COVID-19, reflected in statements such as 'Because they're all under 4, they don't fully understand what's going on'. Understanding why the virus meant that they could not see friends and family seemed to be the biggest issue that children had difficulty reconciling: 'She didn't understand why she couldn't go to her friends or school. She was feeling really lonely', 'He was really confused to start with. He couldn't understand why we couldn't go out and hug people', and 'they don't understand why, why we can't spend time with my sister and my mum'. Some difficulty was also reported around adhering to hygiene and social distancing guidelines: 'Very hard for them - they like to go out, they want to touch everything, have to keep them washing their hands, have to keep the house very clean'. In one case, providing clear explanations of the virus and precautions to prevent it led to anxiety when a co-parent did not adhere to precautions: Their father took them four times to the supermarket in 1 day in March and they were distressed as I'd explained why it was risky and I wasn't taking them. They were apprehensive if we did go out and someone came too close, even now they do not like people coming too close (Parent comment). Parents reported that their children '…were scared', '…really worried about it', and 'a bit paranoid'. Fear of the virus presented itself in multiple ways. Parents said their children 'Became afraid to go out', '…didn't want to leave the house', and '…were apprehensive if we did go out'. Some children '…were very worried that they would catch it', whereas others were worried family members would contract the virus.
[Figure 1: Qualitative coding schema]
Parents attributed these fears to the media around COVID-19, noting that news reports contributed greatly to children's fears about the virus: At the beginning it was all over the news and they did not take it seriously until they saw people getting sick and dying, really affected them. Became afraid to go out. When it was time to be able to go out, they refused cos were too frightened (Parent comment). At first, when all there was, was doom and gloom on the news she got quite worried and upset, because it was like 'if you get it, you're going to die'. It got to the point where she was scared to hug her dad when he visited (Parent comment). One parent managed their child's stress by not putting '… the news on when he was home' and making 'a positive game out of it for him', as they had 'discovered it was how [they] conducted [themselves] that bounced off him, so tried to stay fun and stress free'. The mental health consequences of isolation and sedentary behaviour were also apparent: 'because of the isolation it has bought out different mental health issues with my two children' [sic]. Parents reported that the COVID-19 restrictions exacerbated their children's pre-existing mental health conditions (k = 5), such as anxiety, ADHD, autism spectrum disorders and suicidality. One parent of four children mentioned space was an issue, as they have 'no big back yard' for their 'two children [with] high functioning ADHD, so [it has] been hard for them to be active'.
Parents also reported that their children 'Struggled with no exercise or going out cos of his anxiety so out every day walking the dog to help with his mental health', and that they 'had to increase the children's anti-anxiety medication'. Short-term mental health effects associated with COVID-19 restrictions were also evident in the form of stress and anxiety (k = 7), and causing children to act out (k = 8). Stress and anxiety emerged through statements such as 'She was having some anxiety as she doesn't like being told she can't do things such as have play dates with friends', 'children are panicking', and 'getting stressed at home'. Acting out varied in severity, from 'getting snippy', to being 'off the planet. He's been misbehaving. Very disruptive in the class and answering back to the teachers', to becoming 'very intolerable. My son did a runner for 24 hours'. Schooling difficulties The transition to home schooling created difficulties for some children with respect to difficulty focusing (k = 6), struggling with the learning format (k = 7) and falling behind with schoolwork (k = 4). Difficulties focusing during home school were expressed in statements such as 'in school they can really put their mind inside, you know the whole time when they're in school but when you're at home, she said she's always distracted by so many things', and 'Online school was difficult to get them motivated'. Struggles with the learning format were articulated in vague statements such as 'not being ready for online learning' and 'one child couldn't cope', and more specific statements such as 'Struggled with learning at home. Technology didn't work, needed support to get the computer to work', and 'My daughter is in Year 12 so it actually caused a lot of problems for her. Not being able to speak to her teacher, not being able to see her friends, doing everything on line which is hard for her'. With respect to falling behind on schoolwork, one parent said it was their children that were concerned: 'The twins feel they have missed out on work at school and they were worried that they would be far behind when they went back to school', while the remaining three parents simply stated that their children were behind at school. Notably, only one parent reported that their child benefited from home school. Several parents (k = 13) noted positive impacts of remaining in or returning to in-person school or day care. Returning to school alleviated boredom for some: 'He felt bored and the boredom does not last because he is already gone back to school [sic]', 'Son was bored every day, but since being back at daycare it has made it a lot easier', and 'They were not happy with the home school because they were bored and restless. They have been much happier since the parents said they can go back to school'. The benefits of social interaction were also noted, and the academic benefits of returning were also evident: 'now she's at school on the campus, she says she's doing well and able to cope with all the homework and assignments'. Social isolation Many parents made statements that indicated that their children experienced the effects of social isolation. For some, this was about missing friends (k = 15) and missing family (k = 11), evident in statements such as 'She found it really hard -she's a very social kid and she loves her friends', and 'Not liking it, wants to see family and friends'. 
For some (k = 3), the restrictions impacted upon custody arrangements, indicated by statements such as '[daughter] is a bit down because of restrictions on visits (through DCPFS) [Department of Child Protection and Family Services]', and a grandparent carer noted that 'They really felt upset about not seeing their dads, my sons'. The restrictions and their effects also disrupted home routines for six parents, for example, two parents reported disruptions to sleeping routines and others made statements such as 'When lockdown happened routine went out the window', and 'Everything was contradictory, I'd always tried to limit screen time, now they have it a lot more'. Social isolation extended to institutions, with some (k = 10) parents reporting that their children missed going to school: 'Really sad, grandson really missed going to school', and 'My child is frustrated not being able to go to school and see his friends'. Some (k = 5) parents reported that their children's access of non-government services had changed during the COVID-19 pandemic. For two, this was not expressed as a negative change, but the remaining three indicated it was negative through statements such as 'we've tried zoom and she's not keen on it' and 'it's been very hard to meet her needs. She's going to [mental health service] but they only do video calls'. Adjustment to restrictions Over one third (k = 31) parents reported that at least one of their children were coping well, such that they experienced no or minimal apparent ill effects from COVID-19 and its restrictions. Five of them attributed this to the child being too young to understand, three reported that their children were successfully maintaining their social connections online, and 20 reported that they were 'coping well' or 'took it really well'. Some parents reported that their children had had a positive experience, having more family time (k = 3), enjoying it (k = 5), so much so that for some, 'their whole thing was party time, big holiday' and 'thinks they are on holiday, staying in PJ's all day', and enjoying being at home (k = 5). However, some children did not enjoy the restrictions. This was expressed by six parents in statements to the effect of 'not happy'. Many children were not happy about being stuck inside (k = 25). Being stuck inside affected children in several different ways, for some it was simply frustration with being 'cooped up', others struggled with a lack of activity -'they were pretty much housebound apart from going on walks. They are active children and found this difficult', 'It's negatively impacting them all, they are going crazy at home, they need to go out and interact', and 'They miss swimming lessons, missing doing activities outside'. Boredom (k = 7) was expressed quite bluntly with statements such as 'My daughter found it very boring', 'Very bored' and 'She got bored'. Being confined to the home increased screen time (k = 7) for the children. Increased screen time was expressed as a positive for the children of two respondents, indicated by statements 'she did a lot of work on it and [it] kept her busy', and 'still having contact with friends over games on the computer, that helped him'. The remaining five parents expressed concerns about passive screen time: 'watching a lot of TV and spending more time inside the house and as a result doing less physical activity', 'Playing a lot of video games. Watching more TV than usual. Less physical contact'. 
DISCUSSION This study explored parents' perceptions of children's coping with COVID-19 and its restrictions. The themes that emerged provide contemporaneous early evidence for many of the impacts predicted by clinicians and public health experts, and provide further insight into how these impacts manifest. They also point to several opportunities for increased support to mitigate the impact of external stressors brought about by COVID-19 on effective parenting. Health concerns were prominent among our sample. Several parents reported concerns about their children's lack of physical activity and, in some cases, weight gain. Naturally, the focus of schools has been the transfer of core curriculum to online learning modes. However, given the physical and mental health benefits of exercise and the increased need for those benefits in light of the impacts of COVID-19, there is a clear need for physical education and healthy lifestyle resources to facilitate children's health and well-being while adhering to physical distancing and other public health advice (Guan et al., 2020;Rundle et al., 2020). Concerns about exacerbation of pre-existing physical and mental health conditions were evident, reflecting the higher prevalence of such conditions among socioeconomically disadvantaged cohorts (Spencer et al., 2015). In addition, many parents made mention of fear and anxiety about the virus among their children. Studies have found that 'emotion contagion' among families is prevalent during COVID-19, such that children of anxious and/or depressed parents are more likely to experience symptoms of anxiety and depression (Adıbelli & Sümen, 2020;Lateef et al., 2021;Lee et al., 2021). This suggests a strong need for support for both parents' and children's health, particularly mental health. Accessibility is crucial and difficult in light of limitations to in-person contact and the sensitive nature of some supports such as psychological services, as well as the more limited financial means available to lower socioeconomic households for acquiring private services. A range of support services and modes of delivery, such as telehealth services, after-hours phone services, and peer support, are needed to meet the varied needs and accessibility challenges for parents and children, particularly those who are socioeconomically disadvantaged (Golberstein et al., 2020;Lee, 2020;Wang et al., 2020). In addition to mental health-specific support services, there is evidence that online family leisure games can serve to mitigate the anxiety and stress associated with home confinement (Manzano-León et al., 2021). The stress and anxiety that were common in our sample often resulted in acting out, in turn causing distress for parents. Online parenting resources may help support parents by increasing their awareness of particular parenting issues, techniques and methods (Şenol & Üstündağ, 2021). Clear communication would also help to assuage fears about COVID-19 and difficulty understanding it, which were very common among our sample. Dalton and colleagues note that effective communication about COVID-19 must be simple, honest and age-appropriate (Dalton et al., 2020). Age appropriateness refers not just to simplicity of language but consideration of children's comprehension of cause, illness and blame or guilt. They suggest first understanding what children believe so that information can be tailored to be meaningful to them.
However, general guidelines and materials for COVID-19 communication at different ages that are accessible through different channels (e.g. online, television, hard copy brochures) could support parents in communicating with their children. Schooling difficulties were extremely common, with only one participant reporting that their child benefited from home schooling. Many parents reported that their children struggled with the online format, focusing at home and falling behind. They also reported that their children missed going to school. Given the limited evidence on the contribution of school closures to control of coronaviruses, the direct experiences of children and parents support the suggestion that the full spectrum of consequences should be considered in policy decisions relating to school closures (Viner et al., 2020). When school closures are necessitated, it is important to ensure that parents are adequately supported, as taking on the unanticipated additional role of educator can be a significant stressor for parents and therefore children (Lateef et al., 2021). For example, though most parents in the study of Lee et al. (2021) reported receiving educational resources from their children's school, parenting stress was common and parents with depression and anxiety felt less equipped for home schooling. Accordingly, support required will vary from parent to parent, and will not be limited to educational resources, instead including mental health support, parenting support and perhaps even instrumental support, such as meal preparation to relieve stress. Feelings of social isolation, such as missing friends and family, were very common. Given that high-functioning families and social support from peers are thought to be protective factors for adolescent development (Orben et al., 2020) and mitigating factors against the health effects of disadvantage (Braveman & Gottlieb, 2014), more research is required on how to support young people's relationships during physical separation. Services and resources that reduce family stress and increase young people's self-esteem would also facilitate this. Somewhat unsurprisingly given the resilience of children, several parents reported that their children had adapted well, were spending more time with family and enjoyed being at home and the novelty of the situation. However, even among many of those who were coping well, boredom, increased screen time and feelings of being stuck inside were common. The fact that many children and (as these reports came via parents) families enjoyed more time together, even under such stressful circumstances, suggests that families may benefit from more quality time together in general. This may be facilitated by flexible working arrangements and ideas and opportunities to participate in activities as a family, especially those that do not involve screen time in light of parents' concerns about screen time, which are certainly not unique to the pandemic or this study (Wade et al., 2022). These activities may include sports and other games, walks and hikes, cooking classes or art, craft or language classes (in line with individual families' interests and available resources). Overall, the findings point to a need for resources to support children, families and communities in maintaining physical and psychological well-being during COVID-19.
As these findings are derived from a lower socioeconomic sample, they support existing literature suggesting that the need for such resources is particularly pronounced among socioeconomically disadvantaged children and families, who are more likely to experience these ill effects and less likely to have the means to address them without support. Particular areas requiring attention include mental health support for both parents and children, physical activity plans, maintaining social connections and resources for communicating with children at different age levels about COVID-19. In terms of how practitioners and policy-makers may action these findings and their implications, consideration should be given to the range of factors that may be affecting children's wellbeing through holistic, individualised assessment and treatment. For example, a mental health practitioner seeing a child should enquire about indirect factors that may be impacting the child's mental health such as low physical activity and explore the reasons behind it (e.g. anxiety about catching the virus, lack of awareness about the benefits of exercise, embarrassment about proficiency, depression). Similarly, a teacher who notices a child struggling to focus should seek to understand why, and consider that factors such as anxiety may contribute. At the government and service policy level, there are clear, broad suggestions that may benefit parents and children generally and through pandemics and other major stressful events. These include ensuring that there are adequate mental health and parenting supports for parents, and that these supports are flexible and accessible (e.g. can be accessed in different ways, and at different times including outside of business hours). Past literature has been extremely critical of pushes for parental education and support programmes underpinned by rhetoric of parental blame and value judgements (Cottam & Espie, 2014;Cullen et al., 2016;Smith, 1997). Accordingly, parental support options should be strengths-based and person-centred, such that parents identify the support that they need and are helped to identify the internal and external resources available to meet their support needs, as opposed to prescriptive programmes that provide a generic set of skills without consideration of whether those skills are present, wanted or needed. Drawing on the Belsky (1984) model, these supports would increase awareness and utilisation of psychological resources among parents, and help to manage and reduce the impact of contextual stressors. Further, in addition to general government advice about what to do to stay safe and to keep children safe (in a pandemic and in other contexts), governments and public health officials could produce materials that guide parents about how to talk about these measures with their children. This study is exploratory and represents only one of the many pieces of research that need to be undertaken on the short-and long-term impacts of COVID-19 on families. Though examining the impacts on children through the perspectives of their parents is common (e.g. Adฤฑbelli & Sรผmen, 2020;Lee et al., 2021) and valuable as parents are the main providers of support and guidance for children, supplementary research undertaken with children directly would likely uncover additional support needs. 
In addition, future research should endeavour to examine differences in experiences among children by sex and age, and give consideration to other contextual variables such as the depth and nature of disadvantage present and availability of resources such as social support. In terms of further research on parents, while a sample of 85 responses is substantial for qualitative data, generalisability of findings would be enhanced by quantitative studies on larger samples. In addition, it is possible that parents with stronger feelings about the question asked provided more detailed responses, thus future qualitative studies should be designed to allow further probing into participants' responses. It is also important to note that data for this study were collected in the early stages of the pandemic and, though the pandemic persists at the time of writing, it is in quite a different stage. For example, particularly in high-income countries, vaccination rates have reduced the risk of death and serious illness, potentially reducing some of the virus-related anxiety that parents in our study reported in their children. In addition, the social and mental health benefits of schooling have been emphasised in academic, political and public spheres thus school closures are generally now considered to be 'last resort' measures. On the other hand, the longevity of the pandemic and the introduction of protections such as mask-wearing in schools may mean that pandemic-related anxiety is still a prominent issue among children and, indeed, their parents. In terms of how the data collection timeframe affects the implications of our results, a number of the themes that emerged related to COVID-19 exacerbating issues that commonly affect people experiencing socioeconomic disadvantage, such as physical and mental health conditions. In that sense, COVID-19 is merely another factor compounding disadvantage and the recommendations for support for these underlying conditions are still pertinent. Themes specifically related to COVID-19, such as virus anxiety and sedentary behaviour due to mandatory home confinement may still be present, but in different forms (e.g. virus anxiety about long COVID and sedentary behaviour due to voluntary home confinement). Finally, this research was conducted in a high-income country with universalised healthcare and a strong social safety net. Accordingly, needs and support options are appropriate to that context and a separate program of research is recommended to examine impacts in lower income countries.
Longitudinal influence of COVID-19-related stress on sexual compulsivity symptoms in Chinese undergraduates Background The coping theory shows that stressful life events are associated with individualsโ€™ psychology/behaviors; meanwhile, the coronavirus disease of 2019 (COVID-19) pandemic is known to have impacted individualsโ€™ physical and mental health. Prior studies revealed that undergraduates have many sexual behavior and emotion disorders, which may be impacted during an isolation period, such as the one brought by COVID-19. However, few studies have explored the longitudinal associations between COVID-19-related stress and sexual compulsivity symptoms (SCS), and the mediating effect of emotions (i.e., depression and anxiety) on this relationship. This longitudinal study aimed to investigate these associations. Methods We employed a cross-lagged design (2020/2/12: Time 1, 3219 participants; 2020/6/6: Time 2, 2998 participants) and recruited Chinese undergraduates through an online system to respond to a survey. Results Our results showed that COVID-19-related stress at Time 1 directly influenced SCS at Time 1, and there was an indirect influence via depression and anxiety at Time 1. COVID-19-related stress at Time 1 positively correlated with depression, anxiety, and SCS at Time 2, and the first could directly and positively predict SCS at Time 2. Moreover, albeit depression at Time 2 was negatively linked to SCS at Time 2, anxiety at Time 2 enhanced the effect of COVID-19-related stress on SCS. Conclusions Our findings extend the literature on SCS, showing that the higher the COVID-19-related stress, the higher the SCS, and the longer-lasting effect was associated with anxiety in undergraduates. Furthermore, depression does not mediate the relationship between COVID-19-related stress and SCS. Background A growing number of studies have found that undergraduates have high levels of sexual compulsivity symptoms (SCS) and have highlighted their impact on individuals' physical and mental health [1][2][3]. SCS comprise individuals' sexuality and sexting behaviors and sexual behavioral intention. Increasingly, young college students have been experiencing a number of psychological and behavioral disorders that are influenced by many factors, such as life events, negative stress, and coping strategies [4][5][6][7]. Concurring, the ecological systems theory [8] has pointed out that people's SCS are influenced by many factors, which can be summarized into two categories: outside environmental factors and individual coping strategies [9][10][11]. In China, the outbreak of the coronavirus disease of 2019 (COVID-19) has greatly impacted individuals' physical and psychological health. Particularly, most undergraduates became isolated, be it either in their homes or in diminished school environments, leading to a lack of interrelationships with society; this can be designated as a negative life event. According to the coping theory, negative life events influence individuals' psychological states and behaviors [12][13][14]. For example, young people who experience negative life events tend to have a higher SCS [9,10], and undergraduates may present more impulsive sexual behaviors and psychological states owing to negative life events [1,2,4,15]. However, the underlying mechanism behind the effect of negative life events on people's sexual behaviors and psychology have yet to be more thoroughly explored. 
Based on the stress theory, a person's coping ability is influenced by stressful life events [16], with negative or highly stressful life events being able to influence SCS [13,14,[17][18][19]. Therefore, we hypothesized that COVID-19-related stress would correlate with SCS. In view of the aforementioned findings, the literature provides evidence not only on the direct effect of life events on individual behaviors and psychology, but also on the indirect effect of the former on the latter via coping style [20,21] and via depression and anxiety [22]. Moreover, studies have shown that emotion can explain the mechanism behind the association between stress and coping [23]; the effect mechanism of emotion on coping models [24]; that emotion regulation strategies play an important role in shaping the effect of stressful life events on individual behaviors [23,25]; and the effect mechanism of emotions and social culture on individual sexual behavior and emotion [26][27][28]. Additionally, studies have found that depression and anxiety are associated with sexual behavior, and the consumption of sexual content and pornography in undergraduates [29,30]. However, the correlations and effect mechanisms among stressful life events, emotion, and sexual behavior have yet to be fully explored. A longitudinal study showed the importance of experiencing life-changing events, as they were strongly associated with individuals' sexual behaviors [31]; another study noted that longitudinal research is the best method for analyzing people's sexual developmental patterns [32]. Nonetheless, there is a scarcity of literature on the relationship between COVID-19-related stress and SCS, particularly regarding the potential longitudinal effects. Along with the COVID-19 outbreak, the Chinese government enforced a range of quarantine policies; particularly, the isolation policy impacted the mode of living and the mental health of most Chinese people, having a unique social significance within the period. Prior studies have suggested that social policies and cultural factors can impact people's psychology and behavior [15], especially when interrelationships are affected, highlighting that young undergraduates may be eager to communicate with others, peers, or the opposite sex. However, in events that evoke the need for isolation (e.g., the COVID-19 pandemic), they become forced to isolate or remain in a diminished school environment, having to bear the strains evoked by such stressful events. Amid this, undergraduates may also suffer from the stress that this isolation imposes on their sexuality, including both sexual psychology and behavior. The cognition-motivation theory shows that life events influence individuals' emotions and behaviors [33], and based on the coping model, emotion recognition has a relational meaning, which thereby influences individuals' motivations and desires; these, in turn, evoke emotions, which thereafter influence individuals' innate behavioral tendencies, and these tendencies tend to be consistent with how people cope. Hence, the literature shows that individuals' behaviors are consistent with their individual characteristics and with the environments in which they live; namely, SCS may be influenced by both life events and individuals' emotional factors.
Although previous studies have revealed that COVID-19-related stress influences individual psychology and behavior [34], and assumed that COVID-19-related stress was associated with SCS (as were other negative life events), the emotion recognition theory indicates that emotion can also play a mediating role in the relationship between stressful life events and SCS. Meanwhile, fewer studies have examined the longitudinal influence of emotion on the relationship between COVID-19-related stress and SCS [35]. Therefore, based on the perspective of coping and social psychology, the current study aimed to longitudinally investigate the effect of COVID-19-related stress on SCS. We used a cross-lagged analysis to explore the link between COVID-19-related stress (Time 1) and SCS (Time 2), and the mediating role of depression and anxiety (at Time 1 and 2) in this link (Fig. 1); we hypothesized that depression and anxiety would mediate the link. We believe that our findings are significant for both the literature and society, as follows: for the latter, they can help promote undergraduates' physical and mental health; for the former, they can serve as references for the development of theories on positive psychology and for the better understanding of negative life events and their long-term effects on individuals. Participants and procedure Study participants were undergraduate students from five universities in China who were recruited online via a smartphone app-an epidemic network reporting system of the universities. Each one of the five universities was from a different region in China (east, west, north, south, and middle). Both students living at home and on campus were included in the study. The questionnaires were completed on February 12, 2020 (Time 1) and June 6, 2020 (Time 2). In total, 3219 undergraduates participated at Time 1; after 4 months (Time 2), only 2998 of these undergraduates completed the questionnaires. In Time 2, 730 of the participants were male (24.3%) and 2268 were female (75.7%); their age range was 17-24 years (M = 20.5, SD = 1.6); 4 people reported belonging to sexual minorities (e.g., gay/homosexual, bisexual, same gender loving, or men who have sex with men, n = 4; 0.13%), and 2994 were heterosexual (99.87%). All questionnaires were anonymously completed twice, with participants being identified via specific codes that were individually assigned at Time 1. Since this was a longitudinal study, we only included in the analyses the data of the undergraduates who completed the questionnaire twice; we found no significant differences between the participants who were lost to follow-up and participants included in the analyses, in regard to the COVID-19related stress and SCS variables. The design and procedures of this research were concordant with the ethical standards set forth by the research Committee of Beijing Normal University. First, participants read the instructions of the study and its procedures, which included the questionnaire content and informed consent. In the instructions, participants were informed that participation in the study was voluntary, and if they wished to proceed they would have to provide consent. 
COVID-19-related stress It was assessed using the COVID-19-related stress questionnaire, an 8-item, self-rated questionnaire that examines individual psychological behaviors regarding COVID-19-related stress (e.g., "I take my time to be careful and not get infected with COVID-19," "I feel more and more nervous as my body temperature rises," "I fear other people once I know that they were infected with COVID-19," "I feel bored when everyone is talking about COVID-19"). It is answered on a 5-point Likert scale, ranging from 1 (lowest) to 5 (highest), with higher scores indicating higher COVID-19-related stress. In this study, the following were the parameters related to validity, reliability, and goodness-of-fit for this scale: Cronbach's α = 0.772, and χ2 (6.925)/df (4) = 1.731, p = 0.000, CFI = 0.998, TLI = 0.988, RMSEA = 0.025. These indicated that this measure had good reliability and validity. The COVID-19-related stress questionnaire was developed using items derived from the Life Event Scale (LES, 1986). In addition, the original questionnaire was preliminarily administered to a sample of 260 undergraduate students, and the results showed that the COVID-19-related stress questionnaire displayed good overall reliability and validity. Depression It was measured using the Zung Self-rating Depression Scale (SDS), a 20-item scale that assesses individuals' depression in the last 2 weeks. It is rated on a 4-point scale, ranging from 1 (no or very little time) to 4 (most or all the time), and the sum score reflected undergraduates' depression levels, with higher scores denoting higher depression. In this study, the following were the parameters related to validity, reliability, and goodness-of-fit for this scale: Cronbach's α = 0.959, and χ2 (158.72849)/df (34) = 4.668, p = 0.000, CFI = 0.990, TLI = 0.982, RMSEA = 0.057. Anxiety It was measured using the Zung Self-rating Anxiety Scale (SAS), which is a 20-item scale that examines individuals' feelings of anxiety over the last 2 weeks. It is rated on a 4-point scale, ranging from 1 (no or very little time) to 4 (most or all the time), and reverse-worded items were reverse-scored. In this study, the corresponding validity, reliability, and goodness-of-fit parameters were also calculated for this scale. SCS We used the Sexual Compulsivity Scale [36] to assess SCS, which is a 10-item measure used to evaluate sexuality-related problems, including people's sexual thoughts, sexting and sexual behaviors, and sexual behavioral intention (e.g., "My sexual thoughts and behaviors are causing problems in my life," "My desires to have sex have disrupted my daily life," and "I sometimes get so horny I could lose control"). It is rated on a 4-point scale (1 = Not at all like me; 4 = Very much like me), and the sum score reflected SCS levels, with higher scores denoting more SCS. In this study, the following were the parameters related to validity, reliability, and goodness-of-fit for this scale: Cronbach's α = 0.815, and χ2 (82.825)/df (17) = 4.872. Sociodemographic and COVID-19-related data Participants were asked to provide some demographic information, including gender, sexual identity (gay/homosexual, bisexual, straight/heterosexual, same gender loving, men who have sex with men, or other), age, and place of residence; and information on their status regarding the COVID-19 pandemic, including whether they lived either with confirmed or suspected COVID-19 patients, and whether they were in home-isolation or hospital control-isolation.
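Since all of the measures above are scored as summed Likert items, some reverse-worded, with reliability reported as Cronbach's α, the following is a minimal Python sketch of that scoring logic. The item data, the number of items, and the indices of reverse-scored items are invented for illustration and do not come from the study.

```python
import numpy as np

# Hypothetical responses: rows = participants, columns = items on a 1-4 scale.
rng = np.random.default_rng(0)
responses = rng.integers(1, 5, size=(200, 20)).astype(float)

REVERSE_ITEMS = [2, 5, 11]   # invented indices of reverse-worded items
SCALE_MAX, SCALE_MIN = 4, 1

def score_scale(items, reverse_items):
    """Reverse-score the flagged items and return the item matrix plus sum scores."""
    items = items.copy()
    items[:, reverse_items] = SCALE_MAX + SCALE_MIN - items[:, reverse_items]
    return items, items.sum(axis=1)

def cronbach_alpha(items):
    """Classical Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

scored_items, sum_scores = score_scale(responses, REVERSE_ITEMS)
print("Mean sum score:", sum_scores.mean())
print("Cronbach's alpha:", cronbach_alpha(scored_items))
```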
Moreover, since we aimed to assess undergraduates' SCS, we deemed that participants' sexual identities would not greatly influence the variable of interest. Statistical analysis We used SPSS 22.0 and Mplus 7.0 for data analysis. First, we performed descriptive statistics and correlation analyses. Then, while controlling for sex and age, we used structural equation modeling to investigate the mediation of depression and anxiety in the relationship between COVID-19-related stress and SCS. Finally, we selected bias-corrected (BC) bootstrapping, a non-parametric resampling procedure, to test potential indirect effects; this was conducted based on prior research [37]. When zero was not in the 95% confidence interval (CI), the indirect effect was considered significantly different from zero at p < 0.05 (two-tailed). Common method variance To test the potential effect of common method variance, we adopted several procedural remedies (i.e., all questionnaires were completed anonymously; all questionnaires had good reliability and validity, serving to reduce or avoid systematic errors as much as possible; some items were scored in reverse in the questionnaires; and the sample was recruited from different universities) and used Harman's single-factor test. Results showed that there were seven factors with eigenvalues of more than 1, with the first factor having explained 24.84% of the variation, which is less than the critical value of 40% [38]. Thus, common method variance was not a significant concern in this study. Descriptive statistics and correlation analyses The descriptive statistics and correlation analyses are reported in Table 1 (note: *p < 0.05, ***p = 0.001; COVID-19 denotes COVID-19-related stress and SCS denotes sexual compulsivity symptoms). Participants' gender correlated only with depression, anxiety, and SCS at Time 2. Meanwhile, participants' age correlated with COVID-19-related stress only at Time 1. At Time 1, albeit COVID-19-related stress was positively correlated with depression and anxiety, it was negatively correlated with SCS. At Time 2, COVID-19-related stress, depression, and anxiety were all positively related to SCS; however, all variables were negatively related to depression at Time 2. Moreover, COVID-19-related stress, depression, and anxiety at Time 1 were all positively related to SCS at Time 2; however, SCS at Time 1 was negatively related to SCS at Time 2. Namely, COVID-19-related stress, depression, and anxiety affected individuals' SCS at both periods. Meanwhile, individuals with higher COVID-19-related stress and higher anxiety at Time 1 tended to have higher SCS. Mediating effect analyses In Model 1, we tested the mediating effect of depression and anxiety in the studied relationship at Time 1. We conducted confirmatory factor analysis to test the goodness-of-fit of the model (Fig. 2). In Model 2, COVID-19-related stress at Time 1 and depression and anxiety at both time points were modelled as predictors of SCS at Time 1 and Time 2, with depression, anxiety, and Time 1 SCS serving as mediators between Time 1 COVID-19-related stress and Time 2 SCS. By estimating the mediation model with a multigroup analysis, we found no significant difference between Time 1 and Time 2, nor between the male and female groups, for any of the paths and variances that were examined. Thus, we could test the relationship in the final analyses, while controlling for sex and age.
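As a companion to the bias-corrected bootstrap procedure described above, here is a minimal Python sketch of how a bias-corrected bootstrap confidence interval for a simple indirect effect (a × b from two ordinary regressions) can be computed. The simulated data, the two-regression formulation, and the number of bootstrap samples are illustrative assumptions, not the authors' Mplus model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Simulated data standing in for stress (X), anxiety (M), and SCS (Y).
n = 500
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)            # a-path built into the simulation
y = 0.3 * m + 0.2 * x + rng.normal(size=n)  # b-path plus a direct effect

def indirect_effect(x, m, y):
    """a*b from two OLS fits: M ~ X and Y ~ X + M."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

def bc_bootstrap_ci(x, m, y, n_boot=5000, alpha=0.05):
    """Bias-corrected (BC, no acceleration) bootstrap CI for the indirect effect."""
    est = indirect_effect(x, m, y)
    boot = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, len(x), len(x))
        boot[i] = indirect_effect(x[idx], m[idx], y[idx])
    z0 = norm.ppf((boot < est).mean())            # bias-correction term
    lo = norm.cdf(2 * z0 + norm.ppf(alpha / 2))   # adjusted percentile bounds
    hi = norm.cdf(2 * z0 + norm.ppf(1 - alpha / 2))
    return est, np.quantile(boot, [lo, hi])

est, ci = bc_bootstrap_ci(x, m, y)
print(f"indirect effect = {est:.3f}, 95% BC CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```

The interval is interpreted exactly as in the text: if zero lies outside the BC interval, the indirect effect is taken as significant at the two-tailed 0.05 level.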
We also analysed the other demographic and sociological information collected and found that it was not significantly related to stress, depression, anxiety, or SCS, so it was not controlled for in the SEM analysis. Further, we examined the mediating effect by bootstrapping with 5000 samples; since zero was not contained in the CIs, the mediating effects were shown to be significant. The corresponding results are presented in Table 2. Summarizing, our findings revealed that anxiety and depression have different effects in the relationship between COVID-19-related stress and SCS, with only anxiety having a longer-lasting mediating effect on the relationship between the two. Discussion We utilized a cross-lagged design to examine the longitudinal associations between COVID-19-related stress, depression, anxiety, and SCS. Particularly, we examined the mediating role of depression and anxiety in the relationship between COVID-19-related stress and SCS in Chinese undergraduates. Our results revealed the following: that higher COVID-19-related stress was associated with an increase in SCS; that COVID-19-related stress enhances anxiety and SCS; that anxiety mediates the association between COVID-19-related stress and SCS, being a facilitator of the influence of the former on the latter; and that depression did not have a significant mediating effect in the studied relationship. Thus, COVID-19-related stress predicted long-term SCS, anxiety showed an indirect effect on the relationship between COVID-19-related stress and SCS, and depression did not show such an indirect effect. Our results showed that COVID-19-related stress at Time 1 could significantly, directly, and positively predict undergraduates' SCS at Time 2; namely, individuals with higher stress in February, 2020, showed higher SCS 4 months later. This finds consonance in the literature, which has shown that negative life events are associated with SCS [2,13,17,18]. Thus, our study participants, who lived with confirmed or suspected COVID-19 patients, had to bear higher levels of stress, perceived more negative stress, and dealt with negative emotions (i.e., depression and anxiety). Our findings also generally validated the stress theory and the ecological systems theory [16,39]. Furthermore, our results showed that the environment (i.e., living in isolation) can influence people's SCS; this finds consonance in the literature [10]. Specifically, our results suggested that individuals with higher stress have more SCS, concurring with numerous past studies [1][2][3][4]. Moreover, since COVID-19-related stress was associated with longitudinal SCS in our results, they confirmed the assumptions put forth by the psychosexual development theory; in it, individuals' sexual psychology and behavioral intentions are influenced by personal traits and the outer environment [40]. Our results also demonstrated that COVID-19-related stress at Time 1 was positively related to depression and anxiety at Time 1, and to anxiety at Time 2; thus, COVID-19-related stress acted as a negative life event, having a significant effect on individuals' emotions. This finding concurs with a previous study showing that COVID-19-related stress is a negative life event, and thus evokes negative emotions [34].
Furthermore, our findings highlighted that anxious undergraduates could experience an enhanced effect of COVID-19-related stress on SCS, whereas depression was negatively associated with SCS, even if it did not significantly mediate the studied relationship. This finding concurs with the ecological systems theory [8], which states that individuals' SCS can be influenced by outside events and individual emotions [9][10][11]. Namely, anxiety and depression are different emotions, and albeit only the former mediated the relationship we examined, both emotions were shown to significantly affect SCS. Furthermore, as described above, our results showed that anxiety at Time 2 mediated and enhanced the effect of COVID-19-related stress at Time 1 on SCS at Time 2, in that anxious individuals with higher COVID-19-related stress would show higher SCS later on. Consistently, studies have shown that anxiety can mediate the relationship between life events or stress and emotion or behaviors [9,[41][42][43][44], and that depression could not boost individuals' coping ability toward stressful life events [45]. These results are in line with those of other studies that showed that individuals' psychological resources may protect mental health, albeit anxiety symptoms could also decrease individuals' coping efficacy [24] and ability to cope with negative life events [46]. Notwithstanding, our results also showed that both anxiety at Time 1 and depression at both Times 1 and 2 did not significantly mediate the relationship between COVID-19-related stress at Time 1 and SCS at Time 2. Our results and discussions generally confirm that COVID-19-related stress can have a significant, positive, and direct effect on Chinese undergraduates' SCS, and that anxiety acts as a partial mediator in this relationship. Furthermore, we revealed that COVID-19-related stress in February, 2020, was the main factor affecting SCS in June, 2020, with anxiety being an enhancing factor in this association. When coping with outer stress or stressful life events, we are often very likely to experience anxiety symptoms, whether we like it or not; our results showed that, during a stressful period such as the COVID-19 outbreak, individuals who have higher levels of perceived COVID-19-related stress may have higher anxiety, leading to more SCS. Implications First, we revealed the longitudinal effect of COVID-19-related stress on SCS; COVID-19-related stress was regarded as a powerful factor in coping and mental health, and its longitudinal effect was important for individuals, indicating that, as a negative life event, COVID-19-related stress had a delayed effect. Second, our findings highlighted that the roles of anxiety and depression in the studied relationship differed; only anxiety mediated the relationship between COVID-19-related stress and SCS. This serves as background knowledge about coping with stressful life events and about people's psychosexual development, which can be used to inform the development of more suitable public policies. Finally, our study contributes to the scientific understanding and the promotion of health coping models, as we showed the links between negative stress evoked by life events and anxiety. Limitations and directions for future research First, our sample comprised undergraduate students who were recruited online; namely, it did not include other portions of the Chinese population (children, the elderly, and other age groups), raising concerns about the external validity of our results.
Meanwhile, the main objective of this manuscript was to explore the mechanism of COVID-19-related stress on SCS for the vast majority of students, so the participants were heterosexual. Second, although we employed a longitudinal design, it only covered a four-month period (2020/2/12-2020/6/6). Consequently, the long-term mechanism of COVID-19-related stress on SCS still needs to be further explored over longer time spans. Third, the longitudinal effect of COVID-19-related stress could also affect other important individual variables, such as self-concept, resilience, and attitudes toward sexuality; thus, future studies should explore the effect mechanisms in these alternative variables based on the ecological systems theory. Fourth, even though support from family, spouse, friends, and the wider community might be particularly important for individuals' mental health status while coping with a negative life event, such as a pandemic, this was not measured in this study. Therefore, future studies could focus on these factors. Finally, during the isolation period evoked by the COVID-19 outbreak, individual psychology and physical health may have been influenced by many factors; a study, for example, shows the influence of interpersonal communication and news media on psychological and physical health [47]. Thus, future research should explore the effect mechanism of isolation from a biopsychosocial perspective. Conclusions Our major conclusion was that COVID-19-related stress may positively predict long-term SCS, with anxiety being able to facilitate the influence of the former on the latter. Further, we found that undergraduates' perceived COVID-19-related stress was significantly associated with higher levels of SCS and anxiety; thus, stakeholders should pay more attention to the prevention of SCS during isolation periods-such as the one evoked by the COVID-19 pandemic-something that can be done by developing and applying psychological interventions that touch upon undergraduates' emotions.
The effects of genre on the syntactic complexity of argumentative and expository writing by Chinese EFL learners Genre researchers have found that writing in different genres involves different cognitive task loads and requires different linguistic demands. Generally speaking, narratives involve the description of events with a focus on people and their actions within a specific time frame, whereas non-narrative genres focus on making an argument or discussing ideas or beliefs in a logical fashion, thus resulting in distinct language features. However, the vast majority of genre-based studies have either focused on one single genre or made comparisons between narrative and non-narrative writing (mostly argumentative) in academic contexts without examining how EFL writers perform across non-narrative genres. Moreover, the measures used in quantifying the syntactic complexity of writing are varied, leading to inconsistent findings. This study investigated the effects of genre on the syntactic complexity of writing through comparing argumentative and expository compositions written by Chinese English-as-a-foreign-language (EFL) learners over one academic year. Fifty-two participants were asked to write eight compositions (with two genres alternated), four argumentative and four expository, which were parsed via the Syntactic Complexity Analyzer. The results with time as the within-subjects variable showed a significant development of syntactic complexity in both argumentative and expository compositions over one academic year. Meanwhile, the paired-sample t-test with genre as the within-subjects variable exhibited a higher syntactic complexity in argumentative compositions than in expository ones on most of the 14 measures examined at four time points over the year. Additionally, a two-way repeated measures analysis of variance with genre and time as independent variables ascertained an interactional effect of time and genre on some of the 14 measures. The present study tested and verified the impact genre exerts on the syntactic complexity of writing, providing implications for teachers to be more informed in teaching and assessing EFL writing and for students to be more conscious of genre difference in EFL writing. Introduction Over the past decades, research has shown that writing in different genres involves varied cognitive task loads (Kamberelis and Bovino, 1999;Beauvais et al., 2011;Bi, 2020) and requires different linguistic demands (Berman and Nir-Sagiv, 2004;Nippold, 2004;Beers and Nagy, 2011;Biber et al., 2016;Xu, 2020;Atak and Saricaoglu, 2021). Generally speaking, genres can be classified into narratives and non-narratives, and non-narratives can be further divided into expository, argumentative, persuasive, and the like. For the most part, narratives involve the description of events with a focus on people and their actions within a specific time frame, whereas non-narrative genres focus on making an argument or discussing ideas or beliefs in a logical fashion (Tian, 2014). To be specific, narrative essays tend to involve more thirdperson pronouns and use of the past tense; expository essays might contain more use of relative clauses and attributive adjectives in describing the theme; and argumentative essays prefer attributive clauses in demonstrating the statement (Brunner, 1986;Biber and Conrad, 2009;Berman and Slobin, 2013). 
In writing tasks across genres, English-as-a-foreign-language (EFL) writers are generally faced with a task that can be challenging in two aspects: On one hand, they need to identify the special discourse type in the required communicative context, and on the other hand, they need to utilize all their language resources to craft proper expressions to serve the specific context. For instance, Beauvais et al. (2011) found that students used different writing strategies to meet the demands of different writing genres. Compared to narrative writing, argumentative writing required more cognitive effort. Therefore, the students spent more time planning to write an argumentative text because it required a complex and sophisticated knowledge-transforming strategy. Considering the differences in genres, teachers should both attach importance to writing complexity and take account of the effects of genre differentiation when assessing students' proficiency level and determining their developmental trajectories in the target language (L2). Hyland (2007) contended that "effective teaching recognizes the wants, prior learning, and current proficiencies of students, but in a genre-based course, it also means, as far as possible, identifying the kinds of writing that learners will need to do in their target situations and incorporating these into the course" (p. 152). Likewise, when illustrating the criteria for assessing the Chinese "FLTRP Cup" English writing contest, Tian (2014) proposed that due attention should be paid to the differences between expository writing and argumentative writing in that both genres are expected to effectively address the topic and the task, but expository writing should present a clear thesis in a formal style and an objective tone; while argumentative writing should present an insightful position on the issue, and the position should be strongly and substantially supported or argued. To date, the overwhelming majority of previous empirical studies assessing EFL writing have either focused solely on one single genre or made comparisons between narrative and non-narrative writing in academic contexts without examining how EFL writers perform across non-narrative genres (Qin and Uccelli, 2016). Moreover, in the actual teaching of writing, argumentative writing is generally the preferred option for practicing while other genres of writing are more or less ignored. Therefore, in writing instruction and practice, the need arises for students to have the opportunity to be exposed to different genres of writing and for teachers to provide students with specific and targeted guidance and instruction based on different styles of writing. The present study, through comparing the syntactic complexity of students' argumentative and expository compositions, attempts to contribute to the existing literature in two ways. First, by examining how students write in these two genres, we are able to see the changes in syntactic complexity in Chinese EFL learners' English writing over one academic year. Second, by examining the differences in students' argumentative and expository compositions, we are able to explore whether genre exerts any effects on syntactic complexity in Chinese EFL learners' argumentative and expository compositions and, furthermore, if the effects of genre are retained over the course of 1 year. Genre in writing Text genres are primarily divided into narrative and non-narrative in both academic and non-academic contexts (Brunner, 1986). 
Narratives focus on events and actions in settings performed by the characters, whereas non-narratives (e.g., argumentative, expository, descriptive) focus on ideas and concepts and express the unfolding of claims and argumentation in a logical fashion (Berman and Slobin, 2013). Genre has much to do with cognitive task complexity in relation to two competing hypotheses, namely the cognition hypothesis (Robinson, 2001, 2007) and the limited attentional capacity model (Skehan, 1998). Different genres place different levels of cognitive demands on students, with narrative being the least cognitively demanding, expository cognitively more demanding than narrative, and argumentative the most cognitively complex (Genung, 1900;Bain, 1967;Weigle, 2002). Yang (2014) explored four genres, namely narrative, expository, expo-argumentative, and argumentative, as regards different levels of cognitive demands and reasoning on the attentional resources of participants, finding that the argumentative essays were significantly more complex in global syntactic complexity features than the essays on the other rhetorical tasks. She interpreted her findings by fitting genre into the resource-directing dimension of Robinson's cognition hypothesis, where reasoning and perspective-taking are involved. However, the 375 subjects were of varied proficiencies, including both English majors and non-English majors, and they were assigned different writing tasks, thus creating confusion as to the real cause underlying the differences. When it comes to the actual writing, genre differences are reflected in the way words and phrases are selected as well as in the formation and connection of clauses or sentences that best describe their own characteristics (Biber and Conrad, 2009). Since the purpose of this study is to delve into the differences in syntactic complexity between argumentation and exposition, we shall focus on these two genres only and disregard the others herein. Ruiz-Funes (2015) contended that expository and argumentative genres could be operationalized as the reasoning demands of cognitive task complexity and that they both involve a resource-directing feature of task complexity. Argumentation is a genre that invites a writer to give personal opinions and judgment on a debatable issue or statement and to take a stand on the issue/statement based on facts, generalizations, and reasoning. It is, more often than not, topic-oriented, which requires the writer to impose a logical structure to interrelate ideas in a coherent manner and to organize claims and arguments in a stepwise hierarchical format (Grabe, 2002), where clauses are often used to link ideas and enable complex expressions. By contrast, exposition invites the writer to explain and to provide information about something (not to take a side on something debatable or to argue on the topic) based on facts and generalizations of events and states. The ideas for expository production derive from general world knowledge and academic learning, and expository texts often express the unfolding of claims through noun phrases, using nominalizations as clause subjects that condense prior information and present what has already been put forward so that further comment can be made about it (Ravid, 2005).
Syntactic complexity in writing In second language research, syntactic complexity can be viewed "as a valid and basic descriptor of L2 performance, as an indicator of proficiency and as an index of language development and progress" (Bulté and Housen, 2014, p. 43). Nonetheless, no consensus has been reached as to how to define the construct of syntactic complexity. In Gaies (1980), syntactic complexity was defined as the ability to express more ideas and more thoughts with the use of fewer words, while Skehan (1996) considered syntactic complexity as more elaborate language and more varied sentence patterns. In Wolfe-Quintero et al. (1998), syntactic complexity was taken as the range of forms that surface in language production and the degree of sophistication of such forms, whereas Ortega (2015) stated that "syntactic complexity indexes the expansion of the capacity to use the additional language in ever more mature and skillful ways, tapping the full range of linguistic resources offered by the given grammar in order to fulfil various communicative goals successfully" (p. 82). Similarly, Lu (2011) considered syntactic complexity as the range of syntactic structures produced and the degree of sophistication of such structures, which is the way we chose to define syntactic complexity in the present study. Syntactic complexity has been generally considered essential in assessing the performance of foreign language writing (Wolfe-Quintero et al., 1998;Housen and Kuiken, 2009;Lu, 2011;Staples et al., 2016;Yoon, 2017;Ana, 2018;Kyle and Crossley, 2018). For this reason, many researchers have attempted in the past decades to establish effective and reliable measures of syntactic complexity to assess language progression and judge the writing level and proficiency of learners (De Clercq and Housen, 2017;Ansarifar et al., 2018;Casal and Lee, 2019;Bi and Jiang, 2020;Kim, 2021;Huang et al., 2022;Li et al., 2022). Ortega (2003) showed cumulative evidence concerning syntactic complexity by assessing university students' writing performance and overall proficiency in the target language through synthesizing 25 studies, testifying to the use of syntactic complexity as an indicator of writing quality and language proficiency. Guo et al. (2013) contended that we could, to some extent, anticipate language learners' essay scores by looking at their language characteristics and structures in both integrated and independent writing. Syntactic complexity is often conceptualized as a multidimensional construct, with each dimension requiring different appropriate measures (Norris and Ortega, 2009;Lu, 2011;Bulté and Housen, 2014). Norris and Ortega (2009) proposed that to assess the syntactic complexity of L2 production systematically, researchers should incorporate measures for global complexity, complexity by subordination, complexity via sub-clausal or phrasal elaboration, and possibly complexity by coordination. Bulté and Housen (2014) examined the factors of time and genre together with the relationship to writing quality over a time span of one semester based on corpus essays, suggesting that the changes or development detected over time by such measures did not necessarily capture a higher level of language proficiency or better writing quality. Jiang et al.
(2019) examined the syntactic complexity of 410 narrative compositions across four writing proficiency levels with both large-grained and fine-grained measures, aiming to arrive at certain indicators that could best discriminate and predict writing proficiency; they concluded that the mean length of production unit and the number of dependent clauses per clause could better predict writing proficiency than other traditional large-grained measures. Since most of the previous studies on syntactic complexity were conducted with a limited set of measuring indices (Biber et al., 2011; Bulté and Housen, 2012), such as mean words per T-unit (W/T), words per clause (W/C), or dependent clause ratio (DC/C), it is necessary to measure syntactic complexity with a relatively comprehensive range of indices. Therefore, this study has adopted for analysis the 14 measures Lu (2011) proposed in the Syntactic Complexity Analyzer (SCA), including the six measures Ortega (2003) examined, a further five reviewed in Wolfe-Quintero et al. (1998), and another three recommended in Wolfe-Quintero et al. (1998) for further research. With the 14 measures examined on different aspects over one academic year, this study will be both insightful and informative as regards the developmental trajectories of syntactic complexity in argumentative and expository writing and also the effects exerted by the two genres. Previous studies on syntactic complexity in relation to genre differentiation Measures of syntactic complexity have been proven to be sensitive to the genres of narrative and expository writing (Scott and Windsor, 2000; Beers and Nagy, 2009; Jeong, 2017) and those of narrative and argumentative writing (Crowhurst and Piche, 1979; Beers and Nagy, 2009; Lu, 2011; Qin and Uccelli, 2016; Yoon and Polio, 2016, 2017; Abdel-Malek, 2019; Jagaiah et al., 2020; Xu, 2020; Casal et al., 2021; Lu et al., 2021; Yu, 2021). Scott and Windsor (2000) found that the narratives of students aged 9 and 11 years had more clauses per T-unit than their expository summaries, while Crowhurst and Piche (1979) found that the narratives of 12-year-old students had fewer clauses per T-unit than their persuasive (argumentative) essays and were not significantly different from their descriptive texts. Similarly, Ravid (2005) investigated two genres of L1 writing (narrative and expository) by writers ranging from fourth grade to adulthood and argued that expository texts, in which writers focused on abstract concepts and world knowledge, were more challenging to construct than narratives. The results revealed that children were likely to develop more complex language in expository texts despite the greater challenge posed by expository texts compared to narratives, which appeared less cognitively demanding. Yoon and Polio (2017) examined narrative and argumentative essays written by 37 English-as-a-second-language (ESL) students to analyze development over time and genre differences. The results indicated strong genre differences in the area of linguistic complexity, with the syntactic complexity of argumentative essays being distinctly higher than that of narrative ones. In another study, Polio and Yoon (2018) investigated two automated systems, namely the L2 SCA and Coh-Metrix, as a way to capture variation in syntactic complexity across two genres. The results suggested that the syntactic complexity in argumentation was of greater magnitude when compared with that in narratives.
Lu (2011) researched a corpus of college second language learners' writing via the automatic SCA through examining 14 objective syntactic complexity measures to quantify second language writing development and progression. The results showed that the conditions of institution, genre, and time imposed a significant impact on the relationship between syntactic complexity and language performance, and Lu claimed that the clause was a potentially more informative unit of analysis than the T-unit. In the following year, Park (2012) examined the same 14 syntactic complexity measures as indices of language development in writing by college-level learners of English in Korea, also via SCA, revealing that genre made a great difference in regard to syntactic complexity in L2 learners' writing, further confirming what had been found in Lu (2011). With both genre and students' proficiency taken into account, Beers and Nagy (2009) examined how syntactic complexity measures correlated with overall writing quality in two genres, narratives and expository essays. Their findings suggested that a clausal subordination measure (clauses per T-unit, C/T) was positively correlated with the quality of narratives but negatively with that of expository essays; conversely, there was no significant effect of the length of clause (W/C) on narrative writing, but statistical significance was found for expository writing. In another study, Beers and Nagy (2011) investigated the linguistic development of narrative, descriptive, compare/contrast, and persuasive compositions at different grade levels, and their results supported Crowhurst and Piche's (1979) conclusion that narration places the fewest demands and argumentation the greatest demands on writers to make use of their syntactic resources. Similarly, Yan and Zou (2019) examined differences in syntactic complexity in English among writers at two different proficiency levels and explored the relationship between syntactic complexity and compositions of two different genres. Compositions were also evaluated by SCA, gauging syntactic complexity at the global, clausal, and phrasal levels. The results indicated that the difference between the two genres reached a significant level in terms of C/T and complex nominals per clause (CN/C), but there was no significant relationship between syntactic complexity and L2 proficiency levels and no significant interactive effect between the genre factor and the proficiency factor. Bae and Min (2020) investigated how syntactic patterns differed among four genres, namely narrative, comparison, cause-effect, and argumentative, and three English proficiency levels in Korean L2 college students' writing, examining the same 14 syntactic complexity measures as in Lu (2011). They found that syntactic complexity showed significant genre differences, though no significant group differences in syntactic complexity were found among the L2 proficiency levels. The findings yielded in previous studies have proven, to various degrees, the effects of genre on syntactic complexity in English writing, pointing to the different types of syntactic structures expected to be mastered in writing in different genres. However, the findings were mostly derived through comparisons between narratives and non-narratives, and mostly between the narrative and argumentative genres.
To fill the gap, the present study compares argumentative and expository compositions written by Chinese EFL learners over one academic year in an attempt to understand how genre affects syntactic complexity in EFL learners' English writing and whether the genre effects are retained over one year. Tool and measures Lu (2010) designed the SCA, which can automatically analyze the complexity measures of L2 writing. The SCA can yield data on 14 measures concerning length of production unit, amount of subordination, amount of coordination, and degree of phrasal sophistication (see Table 1). These 14 measures were previously employed or recommended in the two large-scale research syntheses by Wolfe-Quintero et al. (1998) and Ortega (2003). In 2011, Lu reported the results of his own corpus-based study with the SCA, in which the 14 syntactic complexity measures served as fair and independent indices of university students' writing. Park (2012) also evaluated the same 14 syntactic complexity measures, in a cross-sectional study, as valid indices of developmental patterns in writing by college learners of English in Korea. With a view to comparing the present study with Lu (2011) and Park (2012), we also adopted the same 14 syntactic complexity measures for analysis, which are further classified into five categories, to gauge the degree of syntactic complexity in Chinese students' L2 argumentative and expository compositions. Participants The current study was positioned in a one-year-long Comprehensive English course, including Comprehensive English III and Comprehensive English IV, which was given to second-year English majors at a renowned university in Jiangsu province, China. The participants came from two parallel classes of the same grade, with 28 and 31 students, respectively. Of the 59 participants, 52 were female and seven were male, aged between 19 and 21, all with about 10 years of formal English learning experience. Moreover, all the courses, both compulsory and optional, for the two classes were given by the same teachers. Because one teacher was absent while working as an exchange scholar in the United States, the participants were put in the same class for the Comprehensive English course over the whole academic year, thus reducing other potential intervening variables. Finally, it should be pointed out that because some participants failed to complete and hand in all of the eight assigned compositions, only 52 (out of 59) participants' compositions were valid for analysis in the present study. Data collection and analysis Over the whole academic year, the participants were asked to write four argumentative and four expository compositions. The compositions were all finished in class, with a length of about 300 words and with no access to electronic devices. The writing topics were determined in relation to the contents of the textbook as well as contemporary issues of the time. The data were collected at a regular interval of 3-4 weeks, with the genres of argumentation and exposition alternated and the order of the topics counterbalanced to avoid a topic effect. To be specific, the topics for the first, third, fifth, and seventh compositions were argumentative, and those for the second, fourth, sixth, and eighth were expository (Table 2).
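For readers who want to see how the length- and ratio-based indices in Table 1 are obtained, the following is a minimal sketch in Python; the function and the example counts are hypothetical illustrations of the standard definitions (mean lengths and ratios over clauses, T-units, coordinate phrases, and complex nominals), not the L2 Syntactic Complexity Analyzer itself.

```python
# Illustrative only: ratio-based syntactic complexity indices computed
# from pre-counted production units. The counts in the example are invented.

def complexity_indices(words, sentences, t_units, clauses,
                       dependent_clauses, coordinate_phrases,
                       complex_nominals, verb_phrases):
    """Return a dictionary of length- and ratio-based indices."""
    return {
        "MLS":  words / sentences,             # mean length of sentence
        "MLT":  words / t_units,               # mean length of T-unit
        "MLC":  words / clauses,               # mean length of clause
        "C/T":  clauses / t_units,             # clauses per T-unit (subordination)
        "DC/C": dependent_clauses / clauses,   # dependent clause ratio
        "T/S":  t_units / sentences,           # T-units per sentence (coordination)
        "CP/T": coordinate_phrases / t_units,  # coordinate phrases per T-unit
        "CP/C": coordinate_phrases / clauses,  # coordinate phrases per clause
        "CN/T": complex_nominals / t_units,    # complex nominals per T-unit
        "CN/C": complex_nominals / clauses,    # complex nominals per clause
        "VP/T": verb_phrases / t_units,        # verb phrases per T-unit
    }

# Hypothetical counts for a single 300-word composition
print(complexity_indices(words=300, sentences=18, t_units=22, clauses=40,
                         dependent_clauses=16, coordinate_phrases=9,
                         complex_nominals=35, verb_phrases=48))
```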
For each writing task, the scoring of the compositions was done by two experienced raters in accordance with the assessment criteria proposed for the Chinese "FLTRP Cup" English writing contest in relation to content, organization, and language (Tian, 2014). However, the feedback given to the participants concerned only lexical and grammatical errors and was not related to the use of the language structures required by different genres. In the process of data collection, we first coded each of the compositions by all the participants into specific syntactic complexity indices via the L2 SCA (as specified in Table 1). For research question 1, we conducted a one-way ANOVA with time as the within-subjects variable, aiming to examine the developmental changes in syntactic complexity in argumentative and expository compositions, respectively. For research question 2, we first conducted a paired-sample t-test with genre as the within-subjects variable to compare the syntactic complexities of the first argumentative compositions and the first expository ones written by the 52 participants at the beginning of the academic year, the purpose of which was to see whether differences existed between argumentative and expository compositions. After that, comparisons were done consecutively between the second argumentative and second expository, the third argumentative and third expository, and the fourth argumentative and fourth expository compositions to see if the effects of genre had been retained over one academic year. Moreover, we conducted a two-way repeated measures analysis of variance with both genre and time as independent variables and the 14 measuring indices of syntactic complexity as dependent variables, aiming to explore the interactional effects between time and genre on developmental changes in syntactic complexity in argumentative and expository compositions. Results and discussion Changes in syntactic complexity in argumentative and expository compositions by Chinese EFL learners over one academic year To see if there were any changes in syntactic complexity in Chinese EFL learners' argumentative and expository compositions, a one-way ANOVA was conducted with time as the within-subjects variable in the two genres, respectively, as can be seen in Tables 3 and 4. Of the 14 measures examined, there was a linear growth in MLC in both argumentative and expository compositions over time, which concurs with the findings yielded in previous studies (Wolfe-Quintero et al., 1998; Xu et al., 2013). The significant and positive increase in MLC and MLT in argumentative compositions and in MLC in expository compositions indicated students' increased ability to produce longer expressions. Moreover, significant increases were also found in CP/C, CP/T, CN/C, and CN/T in both argumentative and expository compositions. Although the production of longer units does not necessarily equal complex and good language, it demonstrates the writer's ability to form them in one sentence or clause by connecting different views and expressions, and the ability to master and use longer clauses and sentences does, to some extent, exhibit a higher level of language proficiency (Beers and Nagy, 2011). On the one hand, we may attribute the longer mean length of production unit to the use of more clauses; on the other hand, the production of longer language units can be the result of phrasal elaboration.
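The statistical comparisons described above (a one-way repeated measures ANOVA over time per genre, paired-sample t-tests by genre at each time point, and a two-way repeated measures ANOVA) can be sketched as follows with common Python statistics libraries; this is an illustrative reconstruction, not the authors' analysis scripts, and the file and column names ('participant', 'time', 'genre', 'MLC') are hypothetical.

```python
# Sketch of the analyses described above, assuming a long-format table with one
# row per composition and columns: participant, time (1-4), genre, and each index.
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("sca_indices_long.csv")  # hypothetical export of the SCA output

# RQ1: one-way repeated measures ANOVA with time as the within-subjects variable,
# run separately per genre (shown here for MLC in the argumentative compositions).
arg = df[df["genre"] == "argumentative"]
print(AnovaRM(data=arg, depvar="MLC", subject="participant", within=["time"]).fit())

# RQ2: paired-sample t-test with genre as the within-subjects variable at Time 1.
t1 = df[df["time"] == 1].pivot(index="participant", columns="genre", values="MLC")
print(stats.ttest_rel(t1["argumentative"], t1["expository"]))

# RQ2 (cont.): two-way repeated measures ANOVA with genre and time as factors.
print(AnovaRM(data=df, depvar="MLC", subject="participant", within=["genre", "time"]).fit())
```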
The increased values of CP/C, CP/T, CN/C, and CN/T in both argumentative and expository compositions suggest that complex phrases, such as attributive phrases, appositive phrases, and adjective phrases, as well as complex nominal expressions, can be used as a good way to increase the length of language production (Ansarifar et al., 2018). It is interesting to see that the first argumentative and first expository compositions both had a relatively high value of C/T, which means that more clauses were incorporated into the sentences in the students' language production. However, the value of C/T experienced a drastic drop in both argumentative and expository writing over the course of the year, which concurs with Ortega's (2015) view that as language learners advance to a higher level, they are more likely to produce language that is typically characterized by the use of phrasal expressions rather than to progress at the clausal level. The development of coordination and particular structures in both argumentative and expository compositions, together with the decreased value of subordination in argumentative writing and the minor increase in expository writing, indicated that the L2 writers moved toward the development of more phrasal components as compared to clausal components as they advanced in writing (Crossley and McNamara, 2014). This finding further corroborates Biber et al. (2011), who claimed that syntactic complexity could be better measured with indices based on noun phrases rather than on clauses embedded in sentences and that in academic essays meaning is mostly expressed through complex noun phrases rather than clause-level expressions. Moreover, the developmental trend found in the present study is in line with the developmental sequence proposed by Norris and Ortega (2009), where more advanced stages of development were defined by the extended use of nominalization. Likewise, Biber et al. (2013) proposed that the developmental stages of L2 learners follow a similar sequence, with specific developmental progressions in which grammatical form develops from finite dependent clauses to non-finite dependent clauses and then to dependent phrases, while learners' syntactic function progresses from the use of clausal constituents (e.g., direct object or adverbial) to the use of noun phrase modifiers (Biber and Conrad, 2009; Biber and Gray, 2010; Biber et al., 2013). Effects of genre on syntactic complexity in Chinese EFL learners' writing To answer the second research question, the paired-sample t-test was first conducted with genre as the within-subjects variable between the first argumentative and the first expository compositions, and then comparisons were made, consecutively, between the second argumentative and expository compositions, the third argumentative and expository compositions, and the fourth argumentative and expository compositions on all of the 14 measures. Table 5 presents the statistically significant differences between syntactic complexities in argumentative and expository compositions at four time points over the year. Table 5 shows that the values of the argumentative compositions were significantly different from those of the expository ones written by the participants at Time 1.
Comparing the syntactic complexities in argumentative and expository compositions at the four time points, we can conclude that in the earlier stage (the first semester in our research) the syntactic complexities of both argumentative and expository compositions were exhibited more on the measures of subordination and length of production unit, which indicated clausal progression, whereas the later period (the second semester in our research) saw a development of syntactic complexity on the measures of coordination and particular structures, both of which showcased the development of phrase-level complexity. Moreover, a two-way repeated measures ANOVA was also conducted with genre and time as independent variables and the 14 measures of syntactic complexity as dependent variables (shown in Table 6). Along with the exact p-values, this study also reported effect sizes as partial eta squared (ηp²), an appropriate measure for research designs involving within-subjects factors. Regarding the role that time played in the developmental pattern of syntactic complexity, the findings reveal that there was a significant main effect for time mainly on the measures of the length of production unit, namely MLC (p = 0.000, ηp² = 0.055), MLT (p = 0.008, ηp² = 0.029), and MLS (p = 0.043, ηp² = 0.020), indicating that L2 writers produced longer expressions in argumentative compositions than in expository compositions over the course of the year. On the coordination level, significance was found on all three measures, namely CP/C (p = 0.000, ηp² = 0.042), CP/T (p = 0.000, ηp² = 0.048), and T/S (p = 0.003, ηp² = 0.031). Similarly, the measures on the particular structures were found to be significantly related to time (p = 0.000, ηp² = 0.047; CN/C: p = 0.003, ηp² = 0.034). While comparing the findings from the present study with those obtained in previous research, we found that the adoption of different measuring indices may lead to different results in different genres. Nonetheless, most previous studies focused only on a few measures in contrasting narrative essays and non-narrative ones, and it is therefore difficult to determine whether the indices adopted are representative enough to capture the differences or whether there are other measures that are equally persuasive in demonstrating syntactic complexity. The findings yielded in the present study are consistent with those obtained in Beers and Nagy (2009), who examined three measures, namely C/T, W/C, and W/T, and ascertained genre differences when using W/T and C/T as measures of syntactic complexity. Moreover, our results were comparable to those of Lu (2011), who found higher complexity in argumentative essays than in narratives on the measure of length of production unit (MLS) when controlling for both time and institution. On the whole, the majority of the statistically significant differences concerning genre effects were detected on the clausal level, whereas the effects of time were exhibited more on the length of production unit, on coordination, and on particular structures, the last two of which are measures featuring the progression of phrase-level expressions.
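For reference, the partial eta squared values quoted above follow the conventional definition from the ANOVA sums of squares (a textbook formula, not a quantity reported separately by the authors):

```latex
\eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}
```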
We may safely conclude that participants' language production was affected by the type of writing task assigned to them and that, between the two non-narrative types of writing, argumentative compositions exhibited higher syntactic complexity than expository compositions, which is in line with previous findings (Bulté and Housen, 2014; Kyle and Crossley, 2018). Put another way, argumentative compositions placed more reasoning demands on participants than expository ones, thus resulting in higher syntactic complexity. Looking at the data in detail, we found that in the first semester (Time 1 and Time 2), the measuring indices discriminated more on subordination and the length of production unit, both of which are on the clausal level, despite the few measures found to be significantly related in coordination and particular structures. At Time 3 and Time 4, the effects of genre were found to be more on coordination and particular structures. This finding was consistent with the results reported in Lu (2011) and Xu et al. (2013) but partly inconsistent with the results that Ortega (2003) drew. Ortega (2003) ascribed the relatively small magnitude of C/T to advanced learners being more likely to simplify clauses to shorter phrases and supported the claim that C/T may be lower at advanced levels as a result of reduction from clauses to phrases. Lu (2011) found that for clauses per T-unit, significant differences were detected within persuasive compositions, in which there were more clausal-level expressions concerning subordination, whereas in our study the significance was found in expository compositions rather than in argumentative ones. As for W/C (the corresponding index in our study is MLC), Lu found higher W/C in descriptive compositions than in persuasive ones, but a statistically significant difference was not detected in the other two genres of writing. This was partly inconsistent with our study, in which the argumentative essays demonstrated longer clauses, with participants producing longer clauses in argumentative than in expository compositions. To probe deeper into the causes underlying the difference, we excerpted some of the argumentative and expository compositions, as follows: Argumentative writing: Student A: The booming catering industry has been capturing the hearts of more and more people especially the young, who like eating at stands or restaurants. Anyway, many others make a point to prepare and eat at home. Those who have good reasons to eat out argue that cooking at home wastes time as well as energy. With exhausting or maybe dull routine work, they do not bother to cook anymore. In addition, it seems that cuisine is more palatable and multiple at diners or restaurants, where they can just wait and soon enjoy the meals. Student B: However, it's not the only access to earn more and get promoted. To new graduates, finding a job earlier means more working experience and larger circle of professional friends, from whom they could seek supports when tackling challenges. They also have an edge over master degree (or up) holders in good energy. People in their mid-twenties would be more concentrated in working for they have little affairs to cope with in terms of family and seldom do they have to take a sick leave, compared with some middle-aged people. Expository writing: Student A: I'm preparing for long relaxing, a solo travel which will last for one year.
I want to feel the romance of European towns, the spectacle of African savannah and the vast of the ocean. However, the luggage is an important matter which will affect my aesthetic experience. It cannot be too much or lacking. Therefore, besides all kinds of clothes and the cellphone used to pay, contact family and friends and take photos, I am to pick out following necessary articles. Student B: Now as a college student, what I have learned from little experience in school is that the life here prepares for the life later. I entered XXX university last year and tried all kinds of activities I had never tried before, such as working as a volunteer in various corners of the city, participating in school organizations and staying up all night in the study room. I learned how to utilize my kindness to benefit people in need and effectively cooperate with partners. I realized that nobody can succeed with no venture. Thus I would push myself harder. In analyzing the excerpts of participants' compositions, we can see that, compared with expository compositions, there are more clauses in participants' argumentative compositions, such as attributive clauses ("…the young, who like eating at stands…"), adverbial clauses ("…they could seek supports when tackling challenges"), and predicative clauses ("it seems that cuisine is…"). The use of more clauses in argumentative compositions, for one thing, differentiates their MLC values from those of the expository ones and, for another, partly explains the high magnitude of C/T observed in our study. With more clauses incorporated into a sentence, it is not surprising that the value of C/T gets higher accordingly. As opposed to the more numerous clauses found in argumentative compositions, in expository writing there were more phrasal-level expressions, such as "the romance of European towns", "the spectacle of African savannah", "the vast of the ocean", "working as a volunteer in various corners of the city", "participating in school organizations", and "staying up all night in the study room". In previous studies, researchers found that expository texts composed by student writers have more complex noun phrases (Ravid and Berman, 2010), more nominalized forms (Schleppegrell, 2001), and more relative and adverbial clauses (Scott and Windsor, 2000) as opposed to narratives, which suggests that the narrative and expository genres impose different cognitive demands. The inherently different nature of narrative and non-narrative compositions, and even of different types of writing within non-narrative ones, indicates that certain types of texts are believed to be linguistically and cognitively more demanding than others (Schleppegrell, 2001), and the frequency of exposure to different communicative contexts differs remarkably across individuals, leading to overall different structures and different levels of syntactic complexity. In the same vein, learners' language production in argumentative and expository writing is characterized syntactically by distinctive forms and expressions. As Beers and Nagy (2009) stated, "writers need to be aware of stylistic options that will produce the most desirable impression on their readers. Among these stylistic options are a variety of sentence-level syntactic structures through which meaning can be conveyed" (p. 196). Moreover, in comparing the changes in syntactic complexity of both argumentative and expository writing, we found a positive progression in both genres of writing over the course of the year in most of the measures.
But the developmental pattern and mean values of the syntactic complexity measures of argumentative and expository compositions are different. As for the mean length of production unit, all three measures showed a generally upward trend with minor fluctuation. The mean value of MLC in argumentative compositions was consistently higher than that in expository ones despite the progression of both genres, and the same held for MLS and MLT. As for other clause-level measures, like subordination, we see a generally decreased value of syntactic complexity in both genres of writing (although no significance was detected), which confirms previous findings that as learners advance to higher levels, they learn to use complex phrases rather than progress in clausal expressions (Ortega, 2003). Unlike the previous measuring indices, the four measuring indices on coordination behave differently in that students achieved more notable progress on CP/T and CP/C in expository writing than in argumentative writing and the magnitude of C/T experienced a drastic decrease over the academic year. The relatively higher magnitude of C/T at the beginning was probably because, when the students were trying to express the relationships among ideas, they were more likely to use more clauses to make their expressions smooth and coherent, such that more clauses were contained in each T-unit (Ortega, 2003). The progression pattern was corroborated by the last set of indices, the particular structures, where we found that the values of all three measures increased and that two of the measures (CN/C and VP/T) distinguished argumentative writing from expository writing positively and significantly. To conclude, of the 14 measures we used to quantify syntactic complexity, it is clear that argumentative compositions and expository compositions are distinctively and significantly different from each other and that learners generally achieved more complex language production in both genres of writing over one academic year. Looking at the interactive effects of both time and genre, in spite of the progression of students' writing in both genres, we found that the syntactic complexity of argumentative compositions was still higher than that of expository ones for the majority of measures and at most of the time points. On Biber and Conrad's (2009) explanation, this is because different types of writing entail distinct functional requirements, which can result in different language production by learners. Moreover, argumentative writing involves more complex functional demands, and therefore the resources needed to complete it can exceed those required for expository writing. By looking at the developmental trend of the 14 measures over one year, we found that the linguistic progression or development proceeded in a similar, if not identical, way across the two genres, which is in line with our hypothesis concerning the syntactic complexity difference. In light of Robinson's (2007) framework, we can attribute the difference between the two types of writing to cognitive demand, since increased cognitive complexity (in our study, argumentative writing) imposes strong effects on language production.
While performing argumentative writing, which is a more complex task with greater demands of logical and causal reasoning, participants tended to draw on their attention and memory resources to meet these cognitive demands and therefore produced language of relatively higher complexity. More importantly, time and instruction exerted essentially little influence on the linguistic progression in argumentative and expository writing. Therefore, we may safely attribute the different levels and progressions of syntactic complexity in argumentative and expository compositions to the inherently different natures and demands of the two genres of writing, which verifies our hypothesis that the distinctive cognitive resources required by the two types of writing genres play a crucial role in the syntactic complexity of students' writing. There is one more point worthy of special note in our study. In reading through all the argumentative and expository compositions, we found that when participants were required to produce expository compositions, some of them wrote by explaining their reasons and relating those reasons to instances for discussion. They imposed a logical structure to interrelate ideas in a coherent manner and organized claims and arguments in a stepwise hierarchical format, which is a typical way of organizing argumentative writing. We are not sure whether it was the organization of their expository compositions in an argumentative way that led to the relatively high value of C/T in the expository compositions, but we do know that some students were not quite aware of the requirements of the different structures pertinent to different genres. Therefore, teaching EFL should not be understood as promoting a global English proficiency expected to be equally functional across different genres; rather, specific and genre-targeted instruction should be taken into consideration in actual learning and teaching. Conclusion The findings of the present study are twofold. On one hand, we found that there was a statistically significant development of syntactic complexity in both argumentative and expository compositions and that the development was found more in phrasal-level measures (coordination and particular structures) than in clausal ones. This finding concurs with Biber and Gray's (2010) view that complex noun phrases can be much more appropriate measures of syntactic complexity than embedded clauses. It also echoes Biber et al. (2013), who highlighted the criticality of considering the distinctive grammatical features of the academic register, i.e., phrasal modifiers, when conducting syntactic complexity analysis in L2 writing research. On the other hand, genre was found to have affected the development of syntactic complexity in argumentative and expository compositions. To be precise, the syntactic complexity of argumentative essays was significantly higher than that of expository essays on some of the measuring indices, and this finding corroborates the findings of previous studies conducted between narrative and non-narrative genres (Beers and Nagy, 2011; Yoon and Polio, 2017), indicating that even within non-narrative types of writing, significant differences still exist.
Moreover, the effects of genre were retained over one academic year, thus confirming the importance of controlling for the effects of relevant learner-, task-, and context-related factors in interpreting between-proficiency differences in syntactic complexity (Lu, 2011); conflating non-narratives (argumentative and expository in the present study) may therefore overlook these differences. The implications of the study are also twofold. First, the present study may help EFL learners and writing instructors gain a more in-depth understanding of how and in what forms the features of syntactic complexity are demonstrated. The lack of statistically significant clause-level development in both argumentative and expository essays may indicate that academic writing is characteristically dense with non-clausal phrases and complex noun phrases and that phrases may therefore be taken as more appropriate measures of syntactic complexity than embedded clauses. In academic prose, meaning is, for the most part, condensed into complex noun phrases rather than being expressed in clause forms, and noun phrase modification, such as attributive adjectives and post-noun-modifying prepositional phrases, therefore tends to contribute to essay quality. This requires researchers and teachers, for one thing, to re-examine the traditional focus on clausal embedding and subordination measures, which rests on the assumption that academic writing derives its complexity from the elaborate use of clausal construction, and, for another, it points to ways of placing more emphasis on the learning and application of learners' phrasal-level knowledge. Second, learning a language can be taken as learning a range of distinct types of discourses, which are context-specific (Qin and Uccelli, 2016), and the knowledge and ability acquired in one particular genre practice do not necessarily mean that students can transfer their successful performance from one genre to another. Therefore, English teaching cannot be taken just as a way to promote students' overall language level and performance, which is generally believed to serve the same function in different contexts and which students are expected to transfer from one genre to another. Both teachers and researchers should reflect on the different needs students will encounter in authentic communicative contexts in order to make informed decisions concerning which discourse practice to select for instruction. In the meantime, when assessing students' language performance, especially in different genres of writing tasks, the effects of genre on students' language production should also be taken into consideration, since different genres of writing may entail specific syntactic expressions and the progression of each genre can be distinct. As with most studies, ours was not without its limitations. In the first place, the participants in this study were all from the same class and the same grade, leading to relatively high homogeneity, and so whether the findings of this research are generalizable needs to be further verified. Future research could involve subjects from different language proficiency levels, such as participants from different grades, institutions, and even different L1 backgrounds, to investigate the syntactic development in students' writing with language level and genre as interactive factors. Also, comparing results from EFL learners with those of native speakers may generate more robust findings concerning the effects of genre on syntactic complexity.
Moreover, although one academic year is not considered short in regard to collecting data, it is more desirable to conduct research over a longer time span. Additionally, we adopted the 14 measures that Lu (2011) used for SCA, and it is undeniable that syntactic complexity includes various sorts of linguistic features, such as discourse features, passive sentences, and other linguistic resources used to distinguish one's own position from that of others. Therefore, it would also be useful to employ more comprehensive and fine-grained measures for comparison. However, it is worth noting that in quantifying syntactic complexity, it is not the case that the more, the better, and therefore choosing indicators that represent a genuine and comprehensive measure of syntactic complexity matters most. Data availability statement The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author. Ethics statement The studies involving human participants were reviewed and approved by School of Foreign Languages, Soochow University. The patients/participants provided their written informed consent to participate in this study.
The relation between plasma tyrosine concentration and fatigue in primary biliary cirrhosis and primary sclerosing cholangitis Background In primary biliary cirrhosis (PBC) and primary sclerosing cholangitis (PSC) fatigue is a major clinical problem. Abnormal amino acid (AA) patterns have been implicated in the development of fatigue in several non-hepatological conditions but for PBC and PSC no data are available. This study aimed to identify abnormalities in AA patterns and to define their relation with fatigue. Methods Plasma concentrations of tyrosine, tryptophan, phenylalanine, valine, leucine and isoleucine were determined in plasma of patients with PBC (n = 45), PSC (n = 27), chronic hepatitis C (n = 22) and healthy controls (n = 73). Fatigue and quality of life were quantified using the Fisk fatigue severity scale, a visual analogue scale and the SF-36. Results Valine, isoleucine, leucine were significantly decreased in PBC and PSC. Tyrosine and phenylalanine were increased (p < 0.0002) and tryptophan decreased (p < 0.0001) in PBC. In PBC, but not in PSC, a significant inverse relation between tyrosine concentrations and fatigue and quality of life was found. Patients without fatigue and with good quality of life had increased tyrosine concentrations compared to fatigued patients. Multivariate analysis indicated that this relation was independent from disease activity or severity or presence of cirrhosis. Conclusion In patients with PBC and PSC, marked abnormalities in plasma AA patterns occur. Normal tyrosine concentrations, compared to increased concentrations, may be associated with fatigue and diminished quality of life. Background Primary biliary cirrhosis (PBC) and primary sclerosing cholangitis (PSC) are chronic cholestatic liver diseases characterized by a usually slowly progressive course [1,2]. Many patients remain in good clinical condition for many years but may suffer from fatigue interfering with normal activities and general quality of life during a significant part of their life [3][4][5]. Fatigue is not related to the severity or activity of the liver disease, and its pathophysiology remains unknown [3,4,6]. In several non-hepatological conditions amino acids, in particular tryptophan and tyrosine, have been reported to be involved in the pathophysiology of fatigue [7,8]. Plasma amino acid abnormalities have been studied extensively in patients with liver failure and hepatic encephalopathy [9]. In patients with less advanced liver disease of various etiologies, significant differences with respect to plasma amino acid concentrations and tyrosine metabolism have been reported in comparison with control individuals. These studies were performed more than two decades ago, at a time when fatigue had not been identified as a significant problem in cholestatic liver disease. Thus far, the potential role of abnormalities in amino acid metabolism in fatigue associated with cholestatic liver disease has not been evaluated and relevant data in PSC are completely lacking. The present study aimed to identify abnormalities in plasma concentrations of several amino acids and their relation to fatigue and quality of life in patients with PBC and PSC. Methods The study was approved by our institution's medical ethics committee and informed consent was obtained from each patient. Patients with a diagnosis of PBC (45) or PSC (27) visiting the hepatology outpatient clinic of the Erasmus Medical Center between October 2001 and June 2002 were invited to participate. 
Exclusion criteria were an age of less than 18 years and incomplete understanding of the Dutch language. As controls with respect to amino acid concentrations, a group of 22 patients with untreated chronic hepatitis C virus infection (HCV) and a group of 73 healthy individuals were included. Fatigue in patients with PBC and PSC was quantified using a visual analogue scale (VAS) and the Fisk fatigue severity scale (FFSS). The FFSS has been validated for use in PBC and quantifies fatigue in a physical, social and cognitive domain [6,10]. Quality of life was quantified using the SF-36, a widely used quality of life questionnaire [11]. These questionnaires were also obtained from a separate group of 18 age- and sex-matched controls, because no questionnaires could be obtained from the previous group of 73 healthy individuals in whom amino acid concentrations were determined. Total serum bilirubin, serum albumin, prothrombin time and serum activities of alkaline phosphatase (AP) and aspartate aminotransferase (AST) were obtained as markers of disease activity and severity. The presence of cirrhosis was determined on the basis of histological and, if not available, clinical criteria (ultrasound findings compatible with cirrhosis if supported by the presence of thrombocytopenia or esophageal varices). Amino acid measurement Immediately after venipuncture, plasma was prepared by a 20 min centrifugation step at 2650 g and stored at -80°C. The amino acids phenylalanine, tyrosine, tryptophan, isoleucine, leucine and valine were measured by means of high-performance liquid chromatography as described elsewhere [12]. The tryptophan ratio, which is the ratio of tryptophan to the summed concentrations of phenylalanine, tyrosine, isoleucine, leucine and valine, was determined as a measure of the central availability of tryptophan for serotonin synthesis. The tyrosine ratio was determined as a measure of the central availability of tyrosine for dopamine and norepinephrine synthesis and was calculated as the concentration of tyrosine divided by the sum of the concentrations of phenylalanine, tryptophan, isoleucine, leucine and valine. Statistics Testing for differences between groups was performed using Student's t-test and the χ² test. Correlations were tested using Pearson's correlation method. The normality of amino acid distributions was assessed visually using histograms, and non-parametric tests were used where appropriate. The relations between amino acid concentrations and fatigue scores were tested by calculating correlation coefficients for VAS and FFSS domain scores and plasma amino acid concentrations. In these tests, a p-value <0.01 was considered to be statistically significant. In order to quantify the impact of the differences in amino acids on fatigue, for those amino acids which significantly correlated with fatigue, patients were divided into a group with amino acid concentrations within the 95% confidence interval for healthy controls and a group with concentrations outside this range. Testing for differences in fatigue, quality of life and laboratory parameters between these two groups was performed using Student's t-test. Multivariate regression analysis including the biochemical tests of disease activity and severity and the presence of histological or clinical cirrhosis was performed in order to assess the independent association of amino acid abnormalities and fatigue. In all tests other than the correlation tests, a two-sided p-value <0.05 was considered statistically significant.
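A minimal sketch of the amino acid ratios and the grouping by the healthy-control 95% interval described above is given below; the variable names and numbers are invented, the interval is assumed here to be computed as mean ± 1.96 SD of the control values, and the study's actual analyses were run in SPSS rather than Python.

```python
# Illustrative sketch only; concentrations (µmol/l) and fatigue scores are invented.
import numpy as np
from scipy import stats

def tryptophan_ratio(trp, phe, tyr, ile, leu, val):
    # tryptophan relative to the summed concentrations listed in the Methods
    return trp / (phe + tyr + ile + leu + val)

def tyrosine_ratio(tyr, phe, trp, ile, leu, val):
    return tyr / (phe + trp + ile + leu + val)

# Hypothetical plasma tyrosine values and VAS fatigue scores
tyr_controls = np.array([55.0, 60.0, 62.0, 58.0, 65.0, 70.0, 59.0])
tyr_patients = np.array([88.0, 64.0, 95.0, 70.0, 102.0, 61.0])
fatigue_vas  = np.array([3.0, 6.5, 2.5, 5.0, 2.0, 7.0])

# Correlation between tyrosine and fatigue (the study used p < 0.01 as threshold)
print(stats.pearsonr(tyr_patients, fatigue_vas))

# Grouping by the healthy-control 95% interval (assumed mean ± 1.96 SD) and t-test
upper = tyr_controls.mean() + 1.96 * tyr_controls.std(ddof=1)
increased = tyr_patients > upper
print(stats.ttest_ind(fatigue_vas[increased], fatigue_vas[~increased]))
```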
Statistical analyses were performed using SPSS (Version 9.0, SPSS Inc, Chicago, IL, USA). Patient characteristics Patient characteristics for patients with PBC and PSC are shown in Table 1. As was expected because of the unbalanced sex distribution in these diseases, the majority of patients with PBC were female and the majority of patients with PSC were male. The frequency of cirrhosis, serum bilirubin and albumin, and serum activities of alkaline phosphatase, AST and ALT did not differ significantly between patients with PSC and PBC. Amino acids in patients and controls Table 2 shows the plasma concentrations of amino acids and the tryptophan and tyrosine ratios for patients with PBC, PSC, HCV and healthy controls. Plasma concentrations of the aromatic amino acids tyrosine and phenylalanine were increased in patients with PBC, whereas in HCV only the tyrosine concentration was increased compared to controls. In PSC, neither of the aromatic amino acids was increased. The tryptophan concentration was decreased in patients with PBC and HCV. Plasma concentrations of the branched chain amino acids valine, isoleucine and leucine were significantly lower in both patients with PBC and patients with PSC. The tryptophan ratio was significantly decreased in patients with PBC and HCV. The tyrosine ratio was significantly increased in all three patient groups. Within the group of healthy controls, no differences in amino acid concentrations were found for different age groups or sexes. Amino acids and markers of disease activity and severity In patients with PBC, significant inverse correlations were present between the branched chain amino acids valine (p = 0.002), isoleucine (p = 0.006) and leucine (p = 0.007) and total serum bilirubin concentrations. Plasma concentrations of the aromatic amino acids tyrosine (p < 0.001) and phenylalanine (p = 0.003) correlated inversely with serum albumin concentrations. There was a significant inverse correlation between plasma valine and the serum activity of AST (p = 0.005). Patients with cirrhosis had significantly increased tyrosine concentrations (p = 0.004). (Normal values: bilirubin < 18 µmol/l, albumin 35-50 g/l, ALT < 41 U/l (male) and < 31 U/l (female), AST < 37 U/l (male) and < 31 U/l (female), alkaline phosphatase < 117 U/l, prothrombin time < 13 sec.) However, all differences in amino acid concentrations retained their significance when only patients without cirrhosis and with normal bilirubin and albumin were compared to healthy controls. In patients with PSC, no significant correlations were found between any of the markers of disease activity or severity and fatigue or quality of life. Patients with PSC and inflammatory bowel disease had significantly decreased concentrations of valine, isoleucine and leucine compared to patients with PSC alone (p = 0.02). The concentrations of tyrosine, phenylalanine and tryptophan were not significantly different. Amino acids, fatigue and quality of life In patients with PBC, a significant negative correlation was found between tyrosine concentrations and all fatigue tests. In addition, in these patients a significant negative correlation between tryptophan concentrations and the cognitive domain of the FFSS was found, whereas trends towards significant correlations were found for the other FFSS domains. For the other amino acids, no correlations with fatigue were found (Table 3). In patients with PSC, no significant correlations between amino acids and fatigue were found.
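The multivariate adjustment described in the Methods (and reported in the next passage) can be sketched as an ordinary least squares regression of a fatigue score on tyrosine plus the disease-severity markers; this is an illustrative reconstruction with hypothetical column names, not the study's SPSS model.

```python
# Sketch of a multivariate model adjusting the tyrosine-fatigue association for
# disease activity/severity markers and cirrhosis; column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

pbc = pd.read_csv("pbc_patients.csv")  # hypothetical table, one row per PBC patient
model = smf.ols("fatigue_vas ~ tyrosine + bilirubin + albumin + ast + cirrhosis",
                data=pbc).fit()
print(model.summary())  # per the reported result, only tyrosine is expected to remain
                        # independently associated with fatigue
```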
Comparing PBC patients with normal tyrosine concentrations with patients with increased concentrations resulted in significant differences in VAS (p = 0.03), all domains of the FFSS (p = 0.03, p < 0.001 and p = 0.01 for the physical, cognitive and social domains, respectively) and the role functioning physical (the extent to which physical health interferes with work or other daily activities) (p = 0.001), bodily pain (p = 0.001), general health (p = 0.03), vitality (p = 0.004), social functioning (p = 0.005), role functioning emotional (the extent to which emotional problems interfere with work or other daily activities) (p = 0.008) and mental health (p < 0.001) domains of the SF-36 (Figures 1 and 2). In order to assess confounding by disease severity or activity, we performed multivariate analyses for the measurements of fatigue in PBC including plasma tyrosine concentrations and those laboratory tests which correlated with the amino acid, as well as the presence of cirrhosis, although these laboratory tests and the presence of cirrhosis themselves did not correlate with fatigue or quality of life. These analyses showed that only the plasma tyrosine concentration, and not the laboratory tests or the presence of cirrhosis, was significantly and independently associated with fatigue. Comparing patients with normal tyrosine concentrations with healthy controls resulted in the following significant differences: VAS (p < 0.001), the physical (p < 0.001) and social (p = 0.004) domains of the FFSS and the physical functioning (p < 0.001), role functioning physical (p < 0.001), bodily pain (p = 0.004), general health (p < 0.001), vitality (p < 0.001), social functioning (p = 0.001), role functioning emotional (p = 0.05) and mental health (p = 0.04) domains of the SF-36. There was no significant difference in the cognitive domain of the FFSS. Comparing patients with increased tyrosine concentrations with healthy controls showed no significant differences in any of the tests except for worse scores in the general health domain (p = 0.03) and better scores in the mental health domain. The mean VAS scores were 6.1 and 3.3 for patients with normal and increased tyrosine concentrations, respectively (p = 0.01). Patients with a VAS score > 5 had a mean tyrosine concentration of 68 µmol/l, whereas patients with a score < 5 had a mean concentration of 86 µmol/l (p = 0.02). Tests for differences in fatigue for patients with normal or decreased tryptophan concentrations did not show significant differences between the two groups. Discussion The present study confirms previous findings that significant differences in plasma amino acid concentrations between patients with PBC and healthy controls do exist [13,14]. We found increased concentrations of the aromatic amino acids tyrosine and phenylalanine and decreased concentrations of tryptophan and the branched chain amino acids valine, isoleucine and leucine. Tyrosine concentration correlated with all measurements of fatigue, whereas tryptophan concentrations correlated only with the cognitive FFSS domain. PBC patients with increased tyrosine concentrations reported less fatigue and better quality of life compared to patients with normal concentrations. For PSC, no previous studies on amino acid patterns are available for comparison. We found significant decreases in the plasma concentrations of the branched chain amino acids, and trends towards decreased tryptophan and increased tyrosine and phenylalanine concentrations.
However, in contrast to PBC, no relationship with fatigue was found. In addition, we found that valine, isoleucine and leucine concentrations were even lower in patients with PSC and inflammatory bowel disease than in patients with PSC alone. To our knowledge, no previous data on amino acid concentrations in inflammatory bowel disease are available for comparison. In several previous studies, mostly on hepatic encephalopathy in patients with advanced cirrhosis, plasma concentrations of amino acids have been studied [9,15]. However, we could identify only two studies including patients with non-cirrhotic PBC. Given the supposedly normal liver function in these patients, these studies somewhat surprisingly found marked differences between patients and controls comparable to those observed in the present study [13,14]. In addition, although the differences appeared to be somewhat smaller, comparable results were obtained in patients with PSC. It remains unclear which mechanisms are responsible for these differences. Although correlations with the markers of disease severity were found, these do not adequately explain the differences in amino acid concentrations, since only a small proportion of the variation in amino acid concentrations could be explained by differences in these markers, and significant differences existed in the majority of patients without cirrhosis and with normal albumin and bilirubin concentrations. Therefore, we suggest that other mechanisms, rather than inflammation of the liver or an overall decreased liver function, may be responsible for the noted abnormalities. The nature of these mechanisms, however, remains unknown. Tyrosine and phenylalanine are mainly metabolized in the liver, suggesting that decreased liver function might result in increased plasma levels. The decreased tryptophan concentrations found in our study might be explained by increased use of tryptophan as a result of immune activation [15,16]. We did not analyze dietary factors that could supposedly influence amino acid concentrations. Previous studies found no evidence to suggest that this is a factor of importance [13,14]. Nearly all patients in the present study were being treated with ursodeoxycholic acid (UDCA), while previous studies reporting comparable plasma amino acid patterns in PBC were performed in the pre-UDCA era [13,14]. Therefore, a role for UDCA in causing these altered patterns seems unlikely. Fatigue is a significant problem in many patients with PBC and PSC, and has been studied extensively in recent years [3,4,6,17]. However, so far, no specific etiological or pathogenic factors have been identified. In particular, no relation has been found with laboratory parameters for the activity or severity of the disease or with histological stage. An effective medical treatment for fatigue associated with PBC and PSC is not available. Two recent studies specifically addressing PBC-associated fatigue indicate that treatment with antioxidants is ineffective [18,19]. The present study suggests an association between fatigue and normal tyrosine concentrations in PBC. Concentrations above the 95% confidence interval for healthy controls corresponded with statistically significantly less fatigue and better quality of life scores.
Although this suggests that increased tyrosine concentrations may 'protect' against fatigue and normal concentrations may 'cause' fatigue, it may well be that tyrosine plasma concentration alterations are an epiphenomenon and that both these and fatigue are caused by a so far unknown confounding factor or mechanism. Tyrosine is a precursor in the synthesis of dopa, dopamine, epinephrine and norepinephrine, all of which are important neurotransmitters that might play a role in fatigue. Experimental catecholamine depletion has been reported to worsen fatigue, suggesting that a (relative) lack of tyrosine might be associated with fatigue [20,21]. Further, beneficial effects of tyrosine administration in the prevention of exhaustion and fatigue after physical activity in both animals and humans have been reported [22][23][24]. Since the tyrosine concentration, and not the tyrosine ratio, was significantly associated with fatigue, a peripheral rather than a central role for tyrosine in the development of fatigue is suggested, which is in line with previous findings supporting peripheral mechanisms in the development of PBC-associated fatigue [17]. In addition, other mechanisms, which we did not study, such as abnormalities in the hypothalamo-pituitary-adrenal axis, for example abnormal CRH release, or manganese homeostasis, might be involved in the development of fatigue in these diseases [25]. In addition, cytokine release as a result of an inflammatory response might also play a role, although studies supporting this hypothesis are lacking. Studies into these mechanisms might therefore be of interest. It remains unclear why we found a relation between fatigue and tyrosine only in patients with PBC and not in patients with PSC. Although it is likely that fatigue would have a similar etiology in these diseases, the cause of fatigue remains unknown and therefore different mechanisms may occur in these related diseases. Another explanation might be a lack of power to detect a difference in patients with PSC, because fewer patients with PSC than with PBC were included. On the other hand, although the relation between tyrosine and fatigue was highly significant and occurred for all measures of fatigue, the current finding could be the result of chance or be an epiphenomenon not involved in the pathogenesis of fatigue. Further studies are therefore required to confirm the present findings and to evaluate the effect of tyrosine supplementation in PBC patients with fatigue. Conclusion In conclusion, we showed that in patients with PBC and PSC, marked abnormalities in plasma amino acids occur. In addition, in patients with PBC, normal tyrosine concentrations, compared to increased tyrosine concentrations, may be associated with fatigue and diminished quality of life. This association was independent of the activity and severity of the disease.
Accuracy of patient setup positioning using surface-guided radiotherapy with deformable registration in cases of surface deformation Abstract The Catalyst™ HD (C-RAD Positioning AB, Uppsala, Sweden) is surface-guided radiotherapy (SGRT) equipment that adopts a deformable model. The challenge in applying the SGRT system is accurately correcting the setup error using a deformable model when the body of the patient is deformed. This study evaluated the effect of breast deformation on the accuracy of the setup correction of the SGRT system. Physical breast phantoms were used to investigate the relationship between the mean deviation setup error obtained from the SGRT system and the breast deformation. Physical breast phantoms were used to simulate extension and shrinkage deformation (−30 to 30 mm) by changing breast pieces. Three-dimensional (3D) Slicer software was used to evaluate the deformation. The maximum deformations in the X, Y, and Z directions were obtained as the differences between the original and deformed breasts. We collected the mean deviation setup error from the SGRT system by replacing the original breast part with the deformed breast part. The mean absolute difference in lateral, longitudinal, vertical, pitch, roll, and yaw between the rigid and deformable registrations was 2.4 ± 1.7 mm, 1.3 ± 1.2 mm, 6.4 ± 5.2 mm, 2.5° ± 2.5°, 2.2° ± 2.4°, and 1.0° ± 1.0°, respectively. Deformation in the Y direction had the best correlation with the mean deviation translation error (R = 0.949) and rotation error (R = 0.832). As the magnitude of breast deformation increased, both mean deviation setup errors increased, and there was greater error in translation than in rotation. Large deformation of the breast surface affects the setup correction. Deformation in the Y direction most affects translation and rotation errors. Meanwhile, the technique of tattooing involves injecting a needle into the patient's skin at a few points. This invasive technique of marking the skin is convenient for the radiographer but painful to the patient. 2 Nowadays, in-room imaging technology has the potential to provide an accurate positioning setup. Image-guided radiotherapy (IGRT) is a modern radiotherapy technique that uses ionizing and non-ionizing radiation systems. The standard method of IGRT is cone-beam computed tomography (CBCT) using anatomical landmarks. A non-ionizing camera-based or optical tracking system is used to identify the setup point without additional radiation and thus reduce the radiation dose in the setup positioning and in online motion monitoring during treatment. [1][2][3] Such surface-guided radiotherapy (SGRT) systems verify and correct the positioning using the skin surface of the patient. An example of commercial equipment used in SGRT is the Catalyst™ HD (C-RAD Positioning AB, Uppsala, Sweden), which adopts a deformable model. 4 The Catalyst™ HD system can correct the setup positioning and detect the deviation of the position of the patient before treatment from the position in the computed tomography (CT) simulation. Reference data are obtained by a treatment planning system and imported to the Catalyst™ HD system for comparison with the actual position. The Catalyst scanner scans and creates a live image. The surface-matching operation adopts a deformable algorithm to match the reference image and the live image. The results of correction are then calculated as absolute and relative corrections.
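To make the surface-matching idea more concrete, the sketch below implements a minimal rigid iterative closest point (ICP) loop with NumPy and SciPy. It only illustrates the generic point-by-point correspondence and transform-fitting principle mentioned above; it is not the Catalyst™ HD deformable algorithm, which additionally uses a deformable node graph and stiffness control described later in the text.

```python
# Minimal rigid ICP sketch; NOT the vendor's non-rigid registration.
import numpy as np
from scipy.spatial import cKDTree

def best_fit_rigid(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(reference, live, n_iter=30):
    """Iteratively match the reference surface to the live surface."""
    tree = cKDTree(live)            # the live surface does not change
    ref = reference.copy()
    for _ in range(n_iter):
        _, idx = tree.query(ref)                  # 1. point-by-point correspondence
        R, t = best_fit_rigid(ref, live[idx])     # 2. best-fit rigid transform
        ref = ref @ R.T + t                       # 3. move the reference surface
    return ref

# Example: re-align a point cloud after a small known shift
rng = np.random.default_rng(0)
pts = rng.random((500, 3))
shifted = pts + np.array([0.02, 0.05, 0.0])
print(np.abs(icp(pts, shifted) - shifted).max())  # small residual after alignment
```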
Additionally, the Catalyst™ HD system corrects the posture error to adjust the extremity and chin positions relating to the treatment area; for example, the shoulder, arm, and chin positions for breast cancer. The system adopts a modified deformable iterative closest point (ICP) algorithm to create a deformable node graph from the reference surface. The algorithm finds point-by-point correspondence between the target and reference surfaces and applies a transformation to the reference surface. [5][6][7] The Catalyst™ HD system is used in the treatment of all cancer regions, including the head, thorax, abdomen, and extremities. 4 In particular, the optical surface system is widely used in radiotherapy for breast cancer patients. Breast-conserving surgery (BCS) followed by radiation therapy is the standard treatment for early breast cancer. However, the clinical effect of a course of radiotherapy includes the acute or late toxicity of a high dose to normal tissue; examples of effects are changes in texture and color of the irradiated area, fibrosis, breast shrinkage, osteoporosis, and pulmonary problems. 8 Breast edema affects the breast shape in cancer patients. Additionally, breast deformation contributes to rotational errors in the setup positioning in all three directions. Breast deformation reportedly occurs for women having undergone BCS and affects the dose distribution, with large deformations potentially resulting in the underdosing of the target volume. [9][10][11] In the case of surface deformation, the deformable image registration algorithm of the Catalyst™ HD system is used by adopting a deformable reference mesh based on the ICP algorithm for each node, and the mesh is fitted to a deformable model and the surface shape is reconstructed. 7,12 The challenge in using the Catalyst™ HD system is accurately correcting the positioning setup when the body of the patient has deformed. This study evaluates the effect of breast deformation on the accuracy of a surface imaging system. SGRT system The Catalyst™ HD is used for SGRT. In clinical use, this system assists in adjusting the positioning of the patient during setup, monitors the positioning of the patient, and assists with gating and deep inspiration breath-hold (DIBH) during treatment. The SGRT system has three cameras oriented at intervals of 120°. Near-invisible violet patterns are projected onto the surface of the patient as a color map representing posture error and measured. The patterns are captured by the three cameras and a model of the external surface of the patient is reconstructed. The software interfaces with the linear accelerator software and gives the direction of positioning correction toward the reference setup. This system does not require temporary marks or permanent tattoos on the patient's skin. 4,13,14 The algorithm of the SGRT system adopting non-rigid registration assumes a correspondence between the original and deformed surfaces and conducts matching through a geometrical transformation. If the distance between the source and target point sets is greater than 2 cm, the stiffness of the object is reduced and the iterative improvement algorithm restarts optimization to obtain improved correspondence that provides better results. 5,6 Physical breast phantom The correlation between the mean setup error as determined by the SGRT system and breast deformation was analyzed using physical breast phantoms.
The shapes of the physical breast phantoms were designed using data from six patients from the radiotherapy department at Kanazawa University Hospital. All data in this study were from breast cancer patients treated after lumpectomy, because radiotherapy treatment can cause skin deformation. 9,10 The Institutional Review Board approved this retrospective study. Three patient cases were used for investigation and three for validation. We knew the breast volumes, as delineated by radiation oncologists, from the treatment plans of the six patients. The investigated breast volumes of 300, 445, and 1315 cm³ were, respectively, considered to be representative of small, medium, and large breasts. Likewise, the validation breast volumes of 340, 435, and 750 cm³ were, respectively, considered to be representative of small, medium, and large breasts. The assignment of breast size was based on the median breast tissue volume in Japanese mammography examinations. 15 To make the physical breast phantoms, we imported a CT DICOM file of the patient data into 3D Slicer (version 4.11.2), which is open-source software for medical image processing. We created a three-dimensional (3D) model of the patient outline from a raw CT image using the segment editor module and exported it into Blender (version 2.8.3). We then separated the body part and the breast part. The breast volume was not exactly the volume from the treatment plan because the junction between the breast part and the body part had to lie in the same plane for close alignment when the original breast part is replaced with a deformed breast part (Figure 1a). The magnitude of breast deformation ranged from −30 to 30 mm. In the investigated cases, we deformed the ipsilateral side by −30, −15, −6, −3, 0, 3, 6, 9, 15, and 30 mm using software. The investigated small breast was not large enough to shrink and was thus only extended, up to 30 mm. The medium breast was shrunk by 15 mm and extended to 30 mm. The large breast was both shrunk and extended to 30 mm. We therefore collected 25 datasets for investigation, including six datasets for the small breast, nine datasets for the medium breast, and 10 datasets for the large breast. In validation, both sides of the breasts were deformed in the range of ±30 mm; that is, by 0, 10, and 30 mm for the small and medium breasts and by −20, −10, 0, 10, and 30 mm for the large breast. There were thus 20 validation datasets, including six datasets for the small breast, six datasets for the medium breast, and eight datasets for the large breast. The extension and shrinking range of breasts is from −6 to 15 mm in a clinical situation 10 but was widened to ±30 mm for the range test. We designed the direction of breast deformation based on research about breast deformation in patients during radiotherapy. 9,10 Breasts were deformed using the smooth proportional edit mode in Blender. We selected an area for vertical deformation. However, it is noted that there was also a small deformation effect in the surrounding area (Figure 1b). The data for all parts were exported to a da Vinci 1.0 Pro 3D printer (XYZ Printing, Taiwan). The printing material was a filament of polylactic acid. The physical breast phantoms for investigation (Figure 2a) and validation (Figure 2b) were sprayed with paint to give a skin-tone color. The deformation difference of each breast piece between the Blender model and the actual printed size was within 3 mm.
Lead balls used as CT imaging markers were placed at six points, namely two points in the midsection and four points on the two lateral sides at the top and bottom. Data acquisition We quantified the deformation of the physical breast phantoms by comparing the surfaces of the original and deformed breasts. Each combination of the body part and breast parts was scanned with an Aquilion LB CT scanner (Canon Medical, Tokyo, Japan). The scan parameter settings were a slice thickness of 2 mm, a tube voltage of 120 kV, and a tube current-exposure time product of 300 mAs. The size of the physical breast phantom was slightly reduced by printing with the 3D printer. Therefore, the contour of each original physical breast phantom from the 3D printer in the CT images was used as a reference surface for positioning correction with the SGRT system. After we contoured each original physical breast phantom, the CT images with the phantom contour were transferred to a Monaco treatment planning system (Elekta AB, Stockholm, Sweden). The isocenter was placed on the chest wall, near the center of the base of the breast, below the junction of the body and breast part, to achieve a similar isocenter for each test (Figure 2 shows the physical breast phantoms with deformable breast parts: (a) three breast phantoms for investigation and (b) three breast phantoms for validation). We then transferred the planning data and reference surface to the linear accelerator and SGRT system. The CT data for all physical breast phantoms, including the original and deformed phantoms, were imported into 3D Slicer for evaluation of the deformation. The segment editor module was used to manually define the region of interest as the breast region only. We then converted the region of interest to a label map volume using the segmentation module. The model-to-model distance module of 3D Slicer computed the distance between the reference and deformed surfaces. The source model (reference surface) was deformed to match the target model (deformed surface). If the deformed surface is the same as the reference surface, all vector lengths of the displacement vector field are zero and the displacement magnitude approaches zero. In contrast, the similarity between the two images decreases as the vector lengths increase from zero. 16,17 The mesh statistics module of 3D Slicer was used to obtain the maximum value of the point-to-point field along the X, Y, and Z directions, to analyze in which direction deformation most affects the mean setup error. The shape population viewer module was used to visualize and display the 3D surface deformation with scalars and vectors (Figure 3). The deformation of the breast part was evaluated using the maximum value in each direction according to Equation (1), where D breast is the deformation of the breast part and X max , Y max , and Z max are, respectively, the maximum deformations in the X, Y, and Z directions. To assess the positioning accuracy of the SGRT system, the physical breast phantom was set up on a 6D treatment couch by matching the isocenter defined in the radiotherapy treatment plan to the isocenter defined in the linac room. In registration, an XVI CBCT system (Elekta AB) mounted on the linac acquired images to verify the phantom positioning setup. Additionally, the CT imaging markers on the phantom were used to check the registration of the CBCT image with the CT images of the treatment plan using an intensity-based method with automatic and manual matching.
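Before continuing with the setup-correction workflow, the directional deformation summary just described can be sketched as follows, assuming corresponding point arrays for the reference and deformed surfaces are available (for example, exported from 3D Slicer's model-to-model distance output). The per-axis maxima follow the text directly; the Euclidean combination used for D breast is an assumption, since the exact form of Equation (1) is not reproduced here.

```python
# Sketch of the directional deformation summary; the D_breast combination is assumed.
import numpy as np

def directional_deformation(reference_pts, deformed_pts):
    """Return (X_max, Y_max, Z_max, D_breast) from corresponding surface points."""
    disp = deformed_pts - reference_pts             # point-to-point displacement field
    x_max, y_max, z_max = np.abs(disp).max(axis=0)  # maximum deformation per axis
    d_breast = np.sqrt(x_max**2 + y_max**2 + z_max**2)  # assumed form of Equation (1)
    return x_max, y_max, z_max, d_breast
```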
The 6D treatment couch physically corrected the phantom positioning according to the result of the registration. The workflow for collecting the data from the SGRT system is shown in Figure 4. We used masking tape to attach the body part of the physical breast phantom to the treatment couch for stability while changing the breast piece. The optical camera settings of the SGRT system were an integration time of 4000 µs, a gain of 350%, and surface averaging of 6 s, ensuring the same setup setting as for the simulation of the patient. The surface tolerance setting was 10 mm, and the position correction was displayed in submillimeter units. We set up the boundary of the scan volume to fit the physical breast phantom for the small/standard/large size, ensuring the same setting for consistency while collecting the datasets. The original breast part was replaced with each of the deformed breast parts in assessing the cases of breast deformation. In the position correction, the contour of the physical breast phantom from the radiotherapy treatment planning was used as a reference, and the surface image of the physical breast phantom on the treatment couch was taken as the actual image. The SGRT system calculated and displayed the six-degree-of-freedom errors for correction. There were 25 datasets for investigation and 20 datasets for validation. We conducted the measurement three times at intervals of 10 s to account for instability and realized continual real-time guidance using the SGRT system. The positioning setup error was represented as lateral, longitudinal, and vertical translational errors and pitch, roll, and yaw rotational errors. After we obtained the setup error for each dataset, we averaged the data to obtain the translation and rotation errors in each direction. Data analysis We compared the deformable and rigid registrations using the analysis tool of the Catalyst™ HD system. The mean absolute difference (MAD) was calculated as MAD = (1/n) Σ |X D − X R |, where X D is the deformable registration value, X R is the rigid registration value, and n is the amount of data. The MAD was compared for the lateral, longitudinal, and vertical translational errors and the pitch, roll, and yaw rotational errors. The paired t-test was used to determine statistically significant differences, and a p-value less than 0.05 was considered to show statistical significance, using SPSS version 20.0 (SPSS, Chicago, IL, USA). The mean translation error obtained from measurement by the SGRT system (MT measured ) and the mean rotation error obtained from measurement by the SGRT system (MR measured ) were calculated from the lateral (lat), longitudinal (lng), and vertical (vrt) translation errors and the pitch, roll, and yaw angular errors, respectively. We constructed a graph of the calculated mean translation error (MT cal ), the calculated mean rotation error (MR cal ), and D breast obtained using Equation (1). X max /Y max /Z max and MT measured /MR measured were used to fit a linear regression model (Figure 5 shows the mean absolute difference in the lateral/longitudinal/vertical translational errors and the pitch/roll/yaw rotational errors). The regression coefficients of the linear regression model were adopted to find the weight factors of X max /Y max /Z max for the calculation of MT cal /MR cal , using SPSS version 20.0. The correlation between the X max /Y max /Z max deformations and MT measured /MR measured was expressed as a coefficient of regression (R), and a p-value less than 0.05 was considered to show statistical significance.
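The two analysis steps just described can be sketched in a few lines, under the assumption that the per-dataset registration results are stored as NumPy arrays. The MAD follows the verbal definition above (mean absolute deformable-versus-rigid difference); the weight factors are obtained here with an ordinary least-squares fit, which is one plausible reading of the linear regression performed in SPSS, not a reproduction of it.

```python
# Sketch of the MAD computation and the weight-factor fit described above.
import numpy as np

def mean_absolute_difference(deformable, rigid):
    """MAD = (1/n) * sum |X_D - X_R| over the n datasets, per axis/angle."""
    return np.mean(np.abs(np.asarray(deformable) - np.asarray(rigid)), axis=0)

def fit_weight_factors(x_max, y_max, z_max, mt_measured):
    """Least-squares weights (w_x, w_y, w_z, intercept) for the MT_cal model."""
    A = np.column_stack([x_max, y_max, z_max, np.ones_like(x_max)])
    coeffs, *_ = np.linalg.lstsq(A, mt_measured, rcond=None)
    return coeffs
```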
In testing the accuracy of the calculation of MT cal /MR cal , the correlations of MT measured /MR measured and MT cal /MR cal were analyzed using the coefficient of determination (R²) from the scatter plot. We then used the linear trend of the calculations of MT cal /MR cal to construct the graph of D breast and MT cal /MR cal . We validated the equations using data obtained for the phantoms. Comparison of MAD The mean absolute differences between the rigid and deformable registrations are shown in Figure 5. Analysis shows that MT measured had the best correlation with Y max . The R values of the correlation between MT measured and X max /Y max /Z max were, respectively, 0.643, 0.949, and 0.719, showing statistical significance for all directions of deformation (p ≤ 0.05). MR measured also had the best correlation with Y max . The R values of the correlation between MR measured and X max /Y max /Z max were, respectively, 0.586, 0.832, and 0.711, showing statistical significance for all directions of deformation (p ≤ 0.05) (Figure 6). Equations for MT cal /MR cal are obtained using the weight factors of X max /Y max /Z max obtained from the linear regression analysis; for the rotation error, MR cal = −0.029X max + 0.282Y max − 0.076Z max + 0.319. (6) The R² values of the correlation of MT measured /MR measured and MT cal /MR cal were 0.978 and 0.934, respectively (Figure 7). These R² values were close to 1, and the calculated data thus correlated strongly with the measurement data of the SGRT system. These results confirm that it is possible to predict MT cal /MR cal using the above equations. We use the linear trends of Equations (5) and (6) to obtain the relations between MT cal /MR cal and D breast , where D breast is the deformation of the breast part from Equation (1). In validation, the values of MT measured /MR measured obtained for the validation cases were close to MT cal /MR cal . Among the 20 validation datasets, the difference between MT measured and MT cal was within ±5 mm for 16 datasets and more than ±5 mm for four datasets. Among the four datasets having a difference between MT measured and MT cal greater than ±5 mm, one dataset was for the original breast size and the others were for D breast greater than 29 mm. The difference between MR measured and MR cal was within ±5° for 17 datasets and more than ±5° for three datasets, all of which had D breast greater than 30 mm. DISCUSSION Among the 45 datasets for the investigation and validation cases, the difference between MT measured and MT cal was within ±5 mm for 91% of the datasets and that between MR measured and MR cal was within ±5° for 93% of the datasets (Figure 8). The R² values of the correlation showed that the calculated data were strongly correlated with the measurement data and had good accuracy within ±5 mm/±5°. Hence, our equations can estimate the mean translation and rotation errors in clinical situations. There were four validation datasets for which the difference between MT measured and MT cal exceeded ±5 mm. The one such dataset for the original breast size had MT measured of 5.4 mm. The correction of the setup positioning of the original breast size should ideally be near zero. However, many factors, such as the uncertainty in the manual setup and registration due to the flexing of the physical breast phantom in the junction area, contributed to positioning setup error in this study.
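Since only the rotation-error model (Equation (6)) is quoted explicitly above, the short sketch below evaluates it and applies the ±5° agreement check used in the validation. The translation-error model (Equation (5)) would be applied in the same way with its own weight factors, which are not reproduced in the text; the example deformations are illustrative values, not measured data.

```python
# Evaluation of the reported rotation-error model (Equation (6)) and the
# +/-5 degree validation check described above.
def mr_cal(x_max, y_max, z_max):
    """Mean rotation error (deg) predicted from the maximum deformations (mm)."""
    return -0.029 * x_max + 0.282 * y_max - 0.076 * z_max + 0.319

def within_tolerance(mr_measured, x_max, y_max, z_max, tol_deg=5.0):
    """True if the measured mean rotation error agrees with the model within tol."""
    return abs(mr_measured - mr_cal(x_max, y_max, z_max)) <= tol_deg

# Example with illustrative (not measured) deformations of 10 mm in each axis:
print(round(mr_cal(10.0, 10.0, 10.0), 2))   # about 2.09 degrees
```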
There were three datasets for which the difference between MT measured and MT cal exceeded ±5 mm and three datasets for which the difference between MR measured and MR cal exceeded ±5° when the deformation of the large breast exceeded 29 mm. The deformable registration restarted optimization when the distance between the source and the target point sets exceeded 2 cm. 5,6 This could be due to uncertainty arising in the Catalyst™ HD system. The uncertainty in the processing system used in this study was similar to that in the study of Walter et al., 3 who analyzed thoracic, abdominal, and pelvic regions with the Catalyst™ HD system. They found that the registration includes uncertainty because of the difficulty in detecting different shapes. 18 The deformable algorithm of the Catalyst™ HD system used with a simple physical breast phantom may show different magnitudes of the effect of breast deformation on the mean translation and rotation errors. D breast has linear correlations with MT measured /MR measured ; that is, an increase in the magnitude of breast deformation increases MT measured /MR measured . The breast deformation relates to deformation in multiple directions, but both MT measured and MR measured were most correlated with Y max . The performance of the deformable registration shows that the MAD was largest in the vertical direction when the breast was deformed. This could be because breast deformation, which includes swelling and shrinkage, occurs largely in the vertical direction 9,10 (Figure 9). A comparison of our results with previously published results shows that we obtained the same result as Meyer et al., 13 who characterized the Catalyst™ HD system for breast size. A change in breast size is usually seen in the vertical direction because of the geometry of breast deformation. However, our results are slightly greater than their results because of the adoption of different methods of simulating breast deformation and different populations. We derived appropriate equations for estimating the accuracy of the SGRT system for breast deformation. The equations were introduced to find the mean setup error when the maximum deformations in the X, Y, and Z directions are known. The weight factors for constructing the equations were derived using the physical breast phantom. However, the weight factors that would be obtained from the real breast deformation of a patient may be different. Ono et al. 19 calculated the setup margin for left-sided breast radiotherapy during DIBH. They found an optimal planning target volume (PTV) margin of 3.59 mm from an analysis of the systematic and random errors. This margin was useful for the setup of breast cancer patients. Hence, the overall displacement tolerance should be within ±3.59 mm. In this study, we found that a breast deformation of 7 mm affected the accuracy of the SGRT system when a tolerance of 3.59 mm is applied. In practice, it is difficult to analyze the maximum deformations in the X, Y, and Z directions. However, the breast deformation in the Y direction can be estimated from measurements on the CBCT image. The 2D vector magnitude in the Y direction was defined as the maximum distance difference between the two registrations of the breast surface contour. An increase in the magnitude in the Y direction leads to a larger positioning setup error. During a course of treatment, staff should take care when setting up a breast cancer patient because changes in the breast shape affect the mean setup error.
If, after the patient setup has been verified by CBCT imaging, the mean setup error exceeds 3.59 mm, the breast of the patient may have deformed. This can be confirmed from the magnitude of deformation in the Y direction on the CBCT images. We represented breast deformation in terms of X max , Y max , and Z max . The minimum and average values in the X, Y, and Z directions are not close to the magnitudes of deformation created with the software in Section 2.2, whereas the maximum values are closer. However, there are various factors relating breast deformation to the mean setup error, such as the appropriate selection of a reference surface, 13 breast volume, posture of the patient, body mass index, and location of the breast treatment. 10,20 It is noted that there were limitations to the physical breast phantom because we designed the direction of deformation using software. The direction of deformation is more complex for a real patient than for the physical breast phantom and cannot be predicted. Therefore, the six cases examined in this study may not represent all clinical situations. In addition, we selected the area of deformation manually, which may introduce uncertainty into the evaluation of deformation when comparing the original and other breast sizes. One point to consider is that the maximum deformations X max , Y max , and Z max cannot be known exactly for each direction separately. However, the concept of this study can be adopted in testing the performance of other SGRT systems that can detect a deformed surface using a deformable model. CONCLUSIONS This study described the effects of the deformation of the breast surface on the positioning accuracy of radiotherapy and found that the resulting error is greater in translation than in rotation. We found that the magnitude of breast deformation affects the positioning accuracy of the Catalyst™ HD system. We derived an equation for estimating the error of the Catalyst™ HD system and found that the accuracy of the SGRT system is within 3.59 mm when the breast deformation is less than 7 mm. Additionally, when the breast of a breast cancer patient is deformed, the non-rigid registration of the Catalyst™ HD system has to handle a large surface deformation, which affects the setup.
The distribution of the quasispecies for the Wright-Fisher model on the sharp peak landscape We consider the classical Wright-Fisher model with mutation and selection. Mutations occur independently in each locus, and selection is performed according to the sharp peak landscape. In the asymptotic regime studied in [3], a quasispecies is formed. We find explicitly the distribution of this quasispecies, which turns out to be the same distribution as for the Moran model. Introduction The concept of quasispecies first appeared in 1971, in Manfred Eigen's celebrated paper [8]. Eigen studied the evolution of a population of macromolecules, subject to both selection and mutation effects. The selection mechanism is coded in a fitness landscape; while many interesting landscapes might be considered, some have been given more attention than others. One of the most studied landscapes is the sharp peak landscape: one particular sequence-the master sequence-replicates faster than the rest, all the other sequences having the same replication rate. A major discovery made by Eigen is the existence of an error threshold for the mutation rate on the sharp peak landscape: there is a critical mutation rate q c such that, if q > q c then the population evolves towards a disordered state, while if q < q c then the population evolves so as to form a quasispecies, i.e., a population consisting of a positive concentration of the master sequence, along with a cloud of mutants which highly resemble the master sequence. Eigen's model is a deterministic model, the population of macromolecules is considered to be infinite and the evolution of the concentrations of the different genotypes is driven by a system of differential equations. Therefore, when trying to apply the concepts of error threshold and quasispecies to other areas of biology (e.g. population genetics or virology), Eigen's model is not particularly well suited; a model for a finite population, which incorporates stochastic effects, is the most natural mathematical approach to the matter. Several works have tackled the issue of creating a finite and stochastic version of Eigen's model [1], [5], [6], [10], [11], [12], [13], [14], [15]. Some of these works have recovered the error threshold phenomenon in the case of finite populations: Alves and Fontantari [1] find a relation between the error threshold and the population size by considering a finite version of Eigen's model on the sharp peak landscape. Demetrius, Schuster and Sigmund [5] generalise the error threshold criteria by modelling the evolution of a population via branching processes. Nowak and Schuster [13] also find the error threshold phenomenon in finite populations by making use of a birth and death chain. Some other works have tried to prove the validity of Eigen's model in finite populations by designing algorithms that give similar results to Eigen's theoretical calculations [10], while others have focused on proposing finite population models that converge to Eigen's model in the infinite population limit [6], [12]. The Wright-Fisher model is one of the most classical models in mathematical evolutionary theory, it is also used to understand the evolution of DNA sequences (see [7]). In [3], some counterparts of the results on Eigen's model were derived in the context of the Wright-Fisher model. The Wright-Fisher model describes the evolution of a population of m chromosomes of length โ„“ over an alphabet with ฮบ letters. Mutations occur independently at each locus with probability q. 
The sharp peak landscape is considered: the master sequence replicates at rate ฯƒ > 1, while all the other sequences replicate at rate 1. The following asymptotic regime is studied: In this asymptotic regime the error threshold phenomenon present in Eigen's model is recovered, in the form of a critical curve ฮฑฯˆ(a) = ln ฮบ in the parameter space (a, ฮฑ). If ฮฑฯˆ(a) < ln ฮบ, then the equilibrium population is totally random, whereas a quasispecies is formed when ฮฑฯˆ(a) > ln ฮบ. In the regime where a quasispecies is formed, the concentration of the master sequence in the equilibrium population is also found. The aim of this paper is to continue with the study of the Wright-Fisher model in the above asymptotic regime in order to find the distribution of the whole quasispecies. It turns out that the resulting distribution is the same as the one found for the Moran model in [4]. Nevertheless, the techniques we use to prove our result are very different from those of [4]. The study of the Moran model relied strongly on monotonicity arguments, and the result was proved inductively. The initial case and the inductive step boiled down to the study of birth and death Markov chains, for which explicit formulas could be found. The Wright-Fisher model is a model with no overlapping generations, for which this approach is no longer suitable. In order to find a more robust approach, we rely on the ideas developed by Freidlin and Wentzell to investigate random perturbations of dynamical systems [9], as well as some techniques already used in [3]. Our setting is essentially the same as the one in [3], the biggest difference being that we work in several dimensions instead of having one dimensional processes. The main challenge is therefore to extend the arguments from [3] to the multidimensional case. This is achieved by replacing the monotonicity arguments employed in [3] by uniform estimates. We present the main result in the next section. The rest of the paper is devoted to the proof. Main Result We present the main result of the article here. We start by describing the Wright-Fisher model, we state the result next, and we give a sketch of the proof at the end of the section. The Wright-Fisher model Let A be a finite alphabet and let ฮบ be its cardinality. Let โ„“, m โ‰ฅ 1. Elements of A โ„“ represent the chromosome of an individual, and we consider a population of m such chromosomes. Two main forces drive the evolution of the population: selection and mutation. The selection mechanism is controlled by a fitness function A : For a given population x, the value F (u, x) is the probability that the individual u is chosen when sampling from x. Throughout the replication process, mutations occur independently on each allele with probability q โˆˆ ]0, 1โˆ’1/ฮบ[ . When a mutation occurs, the letter is replaced by a new letter, chosen uniformly at random among the remaining ฮบ โˆ’ 1 letters of the alphabet. The mutation mechanism is encoded in a mutation matrix M(u, v), u, v โˆˆ A โ„“ . The analytical formula for the mutation matrix is as follows: We consider the classical Wright-Fisher model. The transition mechanism from one generation to the next one is divided in two steps. Firstly, we sample with replacement m chromosomes from the current population, according to the selection function F given above. Secondly, each of the sampled chromosomes mutates according to the law given by the mutation matrix. Finally, the whole old generation is replaced with the new one, so generations do not overlap. 
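Because the transition mechanism just described (fitness-proportional sampling with replacement, followed by independent per-locus mutation to a uniformly chosen different letter) is fully specified, it can be illustrated with a short simulation sketch. The parameter values below are illustrative only, and the population is started at the master sequence to sidestep the exponentially long discovery time discussed later in the paper; this is a toy illustration of the model, not code from the paper.

```python
# Toy simulation of the Wright-Fisher model on the sharp peak landscape.
import numpy as np

rng = np.random.default_rng(1)
kappa, ell, m, sigma, q = 4, 30, 200, 4.0, 0.02   # illustrative values; a = ell*q = 0.6
master = np.zeros(ell, dtype=int)                 # the master sequence w*

def one_generation(pop):
    # selection: sample m chromosomes with probability proportional to fitness
    fitness = np.where((pop == master).all(axis=1), sigma, 1.0)
    idx = rng.choice(m, size=m, p=fitness / fitness.sum())
    new_pop = pop[idx].copy()
    # mutation: each locus mutates independently with probability q
    mutate = rng.random(new_pop.shape) < q
    shifts = rng.integers(1, kappa, size=new_pop.shape)  # shift selects a *different* letter
    new_pop[mutate] = (new_pop[mutate] + shifts[mutate]) % kappa
    return new_pop

pop = np.tile(master, (m, 1))                     # start at the master sequence
for _ in range(500):
    pop = one_generation(pop)

hamming = (pop != master).sum(axis=1)             # Hamming class of each chromosome
print("fraction of master sequences:", np.mean(hamming == 0))
print("occupancy of classes 0..5:", [int((hamming == k).sum()) for k in range(6)])
```

The Hamming-class counts printed at the end correspond to the occupancy numbers N_k(x) introduced in the next paragraphs.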
For n โ‰ฅ 0, we denote by X n the population at time n, or equivalently, the n-th generation. The Wright-Fisher model is the Markov chain (X n ) nโ‰ฅ0 with state space (A โ„“ ) m , having the following transition matrix: Main result We will work only with the sharp peak landscape: there exists a sequence w * โˆˆ A โ„“ , called master sequence, whose fitness is A(w * ) = ฯƒ > 1, whereas for all u = w * in A โ„“ the fitness A(u) is 1. We introduce Hamming classes in the space A โ„“ . The Hamming distance between two chromosomes u, v โˆˆ A โ„“ is defined as follows: For k โˆˆ { 1, . . . , โ„“ } and a population x โˆˆ (A โ„“ ) m , we denote by N k (x) the number of sequences in the population x which are at distance k from the master sequence, i.e., Let us denote by I(p, t) the rate function governing the large deviations of a binomial law of parameter p โˆˆ [0, 1]: We define, for a โˆˆ ]0, +โˆž[ , Theorem 2.1. We suppose that in such a way that We have the following dichotomy: โ€ข if ฮฑฯˆ(a) < ln ฮบ, then โ€ข if ฮฑฯˆ(a) > ln ฮบ, then Moreover, in both cases, Sketch of proof The Wright-Fisher process (X n ) nโ‰ฅ0 is hard to handle, mainly due to the huge size of the state space and the lack of a natural ordering in it. Instead of directly working with the Wright-Fisher process, we work with the occupancy process (O n ) nโ‰ฅ0 . The occupancy process is a simpler process which derives directly from the original process (X n ) nโ‰ฅ0 , but only keeps the information we are interested in, namely, the number of chromosomes in each of the โ„“ + 1 Hamming classes. The state space of the occupancy process is much simpler than that of the Wright-Fisher process, and it is endowed with a partial ordering. The occupancy process will be the main subject of our study. We fix next K โ‰ฅ 0 and we focus on finding the concentration of the individuals in the K-th Hamming class. We compare the time that the occupancy process spends having at least one individual in one of the Hamming classes 0, . . . , K (persistence time), with the time the process spends having no sequences in any of the classes 0, . . . , K (discovery time). Asymptotically, when ฮฑฯˆ(a) < ln ฮบ, the persistence time becomes negligible with respect to the discovery time, whereas when ฮฑฯˆ(a) > ln ฮบ, it is the discovery time that becomes negligible with respect to the persistence time. This fact, which already proves the first assertion of theorem 2.1, is shown in [3] for the case K = 0; the more general case K โ‰ฅ 1 is dealt with in the same way as the case K = 0, and the proof does not make any new contributions to the understanding of the model. Therefore, we will admit this fact and focus on the interesting case ฮฑฯˆ(a) > ln ฮบ. The occupancy process The occupancy process (O n ) nโ‰ฅ0 will be the starting point of our study. It is obtained from the original Wright-Fisher process (X n ) nโ‰ฅ0 by using a technique known as lumping (section 4 of [3]). Let P m โ„“+1 be the set of the ordered partitions of the integer m in at most โ„“ + 1 parts: is interpreted as an occupancy distribution, which corresponds to a population with o(l) individuals in the Hamming class l, for 0 โ‰ค l โ‰ค โ„“. The occupancy process (O n ) nโ‰ฅ0 is a Markov chain with values in P m โ„“+1 and transition matrix given by: where A H is the lumped fitness function, defined as follows and M H is the lumped mutation matrix: for b, c โˆˆ { 0, . . . , โ„“ } the coefficient The state space P m โ„“+1 of the occupancy process is endowed with a partial order. 
Let o, o โ€ฒ โˆˆ P m โ„“+1 , we say that o is lower than or equal to o โ€ฒ , and we Stochastic bounds In this section we build simpler processes in order to bound stochastically the occupancy process (O n ) nโ‰ฅ0 . We will couple the simpler processes with the original occupancy process and we will compare their invariant probability measures. Lower and upper processes We begin by constructing a lower process (O โ„“ n ) nโ‰ฅ0 and an upper process (O K+1 n ) nโ‰ฅ0 in order to bound stochastically the original occupancy process (O n ) nโ‰ฅ0 . In other words, the lower and upper processes will be built so that for every occupancy distribution o โˆˆ P m โ„“+1 , if the three processes start from o, then The new processes will have simpler dynamics than the original occupancy process. Let us describe loosely the dynamics of the lower process. As long as there are no master sequences present in the population, the lower process evolves exactly as the original occupancy process. As soon as a master sequence appears, all the chromosomes in the Hamming classes K +1, . . . , โ„“ are directly sent to the class โ„“. Moreover, as long as the master sequence remains present in the population, all mutations towards the classes K + 1, . . . , โ„“ are also sent to the Hamming class โ„“. The dynamics of the upper process is similar, this time with the Hamming class โ„“ replaced by the class K + 1. The rest of the section is devoted to formalising this construction. Let ฮจ O be the coupling map defined in section 5.1 of [3]. We modify this map in order to obtain a lower map ฮจ โ„“ O and an upper map ฮจ K+1 O . The coupling map ฮจ O takes two arguments, an occupancy distribution o โˆˆ P m โ„“+1 and a matrix r โˆˆ R, where R is the set of matrices of size m ร— (โ„“ + 1) with coefficients in [0, 1]. The Markov chain (O n ) nโ‰ฅ0 is built with the help of the map ฮจ O and a sequence (R n ) nโ‰ฅ1 of independent random matrices with values in R, the entrances of the same random matrix R n being independent and identically distributed, with uniform law over the interval [0, 1]. Let us define two maps ฯ€ โ„“ , ฯ€ K+1 : P m โ„“+1 โ†’ P m โ„“+1 by setting, for every o โˆˆ P m โ„“+1 , We denote by W * the set of occupancy distributions having at least one master sequence, i.e., and we denote by N the set of occupancy distributions having no master sequences, i.e., . The occupancy distributions o โ„“ enter and o โ„“ exit are the absolute minima of the sets W * and N . We define the lower map ฮจ โ„“ O by setting, for o โˆˆ P m โ„“+1 and r โˆˆ R, Likewise, we define the occupancy distributions , which are the absolute maxima of the sets W * and N . We define an upper map ฮจ K+1 O by setting, for o โˆˆ P m โ„“+1 and r โˆˆ R, The coupling map ฮจ O is monotone -lemma 5.5 of [3]-i.e., for every pair of occupancy distributions o, o โ€ฒ and for every r โˆˆ R, We use the lower and upper maps, along with the i.i.d. sequence of random matrices (R n ) nโ‰ฅ0 , in order to build a lower occupancy process (O โ„“ n ) nโ‰ฅ0 and an upper occupancy process The proof is similar to the proof of proposition 8.1 in [2]. Dynamics of the bounding processes We study now the dynamics of the lower and upper processes in W * . Since the calculations are the same for both processes, we take ฮธ to be either K + 1 or โ„“, and we denote by (O ฮธ n ) nโ‰ฅ0 the corresponding process. For the process are transient, and the states in N โˆช (W * \ T ฮธ ) form a recurrence class. 
Let us take a look at the transition mechanism restricted to N โˆช (W * \ T ฮธ ). Since We define the projection ฯ€ : P m โ„“+1 โ†’ D by setting, for o โˆˆ P m โ„“+1 , ฯ€(o) = (o(0), . . . , o(K)) . The remaining non-diagonal coefficients of the transition matrix are null. The diagonal coefficients are chosen so that the matrix is stochastic, i.e., each row adds up to 1. Let us denote by p ฮธ (z, z โ€ฒ ) the above transition matrix and let us compute its value for z, z โ€ฒ โˆˆ D such that z 0 , z โ€ฒ 0 โ‰ฅ 1. We introduce some notation first. For d โ‰ฅ 1 and a vector v โˆˆ R d , we denote by |v| 1 the L 1 norm of v: We say that a vector s โˆˆ D is compatible with another vector z โˆˆ D, and we write s โˆผ z, if We say that a matrix b โˆˆ N (K+1) 2 is compatible with the vectors s, z โ€ฒ โˆˆ D, We now use the transition mechanism of (O ฮธ n ) nโ‰ฅ0 in order to compute the value of p ฮธ (z, z โ€ฒ ): where p ฮธ (z, s, b, z โ€ฒ ) is the probability that, given Z ฮธ n = z: โ€ข for i โˆˆ { 0, . . . , K }, s i individuals from the class i are selected, and m โˆ’ |s| 1 individuals from the class ฮธ are selected. The probability of this event is โ€ข for i, j โˆˆ { 0, . . . , K }, b ij individuals from the class i mutate to the class j, and s i โˆ’ |b(i, ยท)| 1 individuals from the class i mutate to the class ฮธ. For i โˆˆ { 0, . . . , K }, the probability of this event is โ€ข for j โˆˆ { 0, . . . , K }, z โ€ฒ j โˆ’ |b(ยท, j)| 1 individuals from the class ฮธ mutate to the class j, and m โˆ’ |s| 1 โˆ’ |z โ€ฒ | 1 + |b| 1 individuals from the class ฮธ do not mutate to any of the classes { 0, . . . , K }. The probability of this event is Finally, Bounds on the invariant measure For every function g : [0, 1] โ†’ R, Let now g : [0, 1] โ†’ R be an increasing function such that g(0) = 0. Thanks to proposition 3.1, the following inequalities hold: for all n โ‰ฅ 0, Taking the expectation and sending n to โˆž we deduce that Next, we seek to estimate the above integrals. The strategy is the same for the lower and upper integrals; we set ฮธ to be either K + 1 or โ„“ and we study the invariant probability measure ยต ฮธ O . We will rely on the following renewal result. Let E be a finite set and let (X n ) nโ‰ฅ0 be an ergodic Markov chain with state space E and invariant probability measure ยต. Let W * be a subset of E and let e โˆˆ E be a state outside W * . We define ฯ„ * = inf{ n โ‰ฅ 0 : X n โˆˆ W * } , ฯ„ = inf{ n โ‰ฅ ฯ„ * : X n = e } . Proposition 3.2. For every function f : E โ†’ R, we have The proof is standard and similar to that of proposition 9.2 of [2]. We apply the renewal result to the process (O ฮธ n ) nโ‰ฅ0 restricted to N โˆช (W * \ T ฮธ ), the set W * \ T ฮธ , the occupancy distribution o ฮธ exit and the function o โ†’ g |ฯ€(o)| 1 /m . We set Applying the renewal theorem we get . Whenever the process (O ฮธ n ) nโ‰ฅ0 is in W * \ T ฮธ , the dynamics of the first K + 1 Hamming classes, ฯ€(O ฮธ n ) nโ‰ฅ0 , is that of the Markov chain (Z ฮธ n ) nโ‰ฅ0 defined at the end of the previous section. Let us suppose that (Z ฮธ n ) nโ‰ฅ0 starts from z ฮธ enter โˆˆ D, where z โ„“ enter = (1, 0, . . . , 0) and z K+1 enter = (m, 0, . . . , 0). Let ฯ„ 0 be the first time that Z ฮธ n (0) becomes null: ฯ„ 0 = inf{ n โ‰ฅ 0 : Z ฮธ n (0) = 0 } . Since the process (O ฮธ n ) nโ‰ฅ0 always enters the set W * \ T ฮธ at the state o ฮธ enter , the law of ฯ„ 0 is the same as the law of ฯ„ โˆ’ ฯ„ * for the process (O ฮธ n ) nโ‰ฅ0 starting from o ฮธ exit . 
We conclude that the trajectories ฯ€(O ฮธ n ) ฯ„ * โ‰คnโ‰คฯ„ and Z ฮธ n 0โ‰คnโ‰คฯ„ 0 have the same law. Therefore, Thus, we can rewrite the formula for the invariant probability measure ยต ฮธ O as follows: . The objective of the following sections is to estimate each of the terms appearing in the right hand side of this formula. Replicating Markov chains We study now the Markov chains (Z โ„“ n ) nโ‰ฅ0 and (Z K+1 n ) nโ‰ฅ0 . The computations are the same for both processes, we take ฮธ to be either K + 1 or โ„“ and we study the Markov chain (Z ฮธ n ) nโ‰ฅ0 . We will carry out all of our estimates in the asymptotic regime We will say that a property holds asymptotically, if it holds for โ„“, m large enough, q small enough and โ„“q close enough to a. For p, t โˆˆ D, we define the quantity I K (p, t) as follows: We make the convention that a ln(a/b) = 0 if a = b = 0. The function I K (p, ยท) is the rate function governing the large deviations of a multinomial distribution with parameters n and p 0 , . . . , p K , 1โˆ’|p| 1 . We have the following estimate for the multinomial coefficients: The proof is similar to that of lemma 7.1 of [3]. For r โˆˆ R K+1 , we denote by โŒŠrโŒ‹ the vector โŒŠrโŒ‹ = (โŒŠr 0 โŒ‹, . . . , โŒŠr K โŒ‹). โ€ข For any subset U of D and for any ฯ โˆˆ D, we have, for n โ‰ฅ 0, โ€ข For any subsets U, U โ€ฒ of D, we have, for n โ‰ฅ 0, lim sup โ„“,mโ†’โˆž, qโ†’0 โ„“qโ†’a Proof. We begin by showing the large deviations upper bound. Let U, U โ€ฒ be two subsets of D and take z โˆˆ mU. For n โ‰ฅ 0, Thanks to the estimates on p ฮธ , we have, for m โ‰ฅ 1, where C(K) is a constant depending on K but not on m. For each m โ‰ฅ 1, let z m , s m , z โ€ฒ m โˆˆ D, b m โˆˆ { 0, . . . , m } (K+1) 2 be four terms that realise the above minimum. We observe next the expression lim sup โ„“,mโ†’โˆž, qโ†’0 โ„“qโ†’a Since D and [0, 1] (K+1) 2 are compact sets, up to the extraction of a subsequence, we can suppose that when m โ†’ โˆž, If ฮฒ is not an upper triangular matrix, or if, for some j โˆˆ { 0, . . . , K }, |ฮฒ(ยท, j)| = t j , the limit is โˆ’โˆž. Thus, the only case we need to take care of is when ฮฒ โˆˆ B(t). In this case, we have lim sup โ„“,mโ†’โˆž, qโ†’0 โ„“qโ†’a Optimising with respect to ฯ, ฮพ, ฮฒ, t, we obtain the upper bound of the large deviations principle. We show next the lower bound. Let ฮพ, t โˆˆ D and ฮฒ โˆˆ B(t). We have We take the logarithm and we send m, โ„“ to โˆž and q to 0. We obtain then Moreover, if t โˆˆ U o , for m large enough, โŒŠtmโŒ‹ belongs to mU. Therefore, lim inf โ„“,mโ†’โˆž, qโ†’0 โ„“qโ†’a 1 m ln P Z ฮธ n+1 โˆˆ mU Z ฮธ n = โŒŠmฯโŒ‹ โ‰ฅ โˆ’I(ฯ, ฮพ, ฮฒ, t) . We optimise over ฮพ, ฮฒ, t and we obtain the large deviations lower bound. A similar proof shows that the l-step transition probabilities of (Z ฮธ n ) nโ‰ฅ0 also satisfy a large deviations principle. For l โ‰ฅ 1, we define a function V l on D ร— D as follows: โ€ข For any subset U of D and for any ฯ โˆˆ D, we have, for n โ‰ฅ 0, โ€ข For any subsets U, U โ€ฒ of D, we have, for n โ‰ฅ 0, lim sup โ„“,mโ†’โˆž, qโ†’0 โ„“qโ†’a Perturbed dynamical system We look next for the zeros of the rate function I(r, ฮพ, ฮฒ, t). We see that We define a function F = (F 0 , . . . , F K ) : D โ†’ D by setting, for r โˆˆ D and k โˆˆ { 0, . . . , K }, Replacing f by its value in the above formula, we can rewrite, for 0 โ‰ค k โ‰ค K, The Markov chain (Z ฮธ n ) nโ‰ฅ0 can be seen as a perturbation of the dynamical system associated to the map F : Let ฯ * be the point of D given by: Proposition 4.4. 
We have the following dichotomy: โ€ข if ฯƒe โˆ’a โ‰ค 1, the function F has a single fixed point, 0, and (z n ) nโ‰ฅ0 converges to 0. Proof. For k โˆˆ { 0, . . . , K }, the function F k (r) is a function of r 0 , . . . , r k only; we can inductively solve the system of equations For k = 0, we have The equation F 0 (r) = r 0 has two solutions: r 0 = 0 and r 0 = ฯ * 0 . For k in { 1, . . . , K }, we have F k (r) = r k if and only if We end up with a recurrence relation. If the initial condition is r 0 = 0, the only solution is r k = 0 for all k โˆˆ { 0, . . . , K }, whereas if the initial condition is r 0 = ฯ * 0 , the only solution is r k = ฯ * k for all k โˆˆ { 0, . . . , K }, this last assertion is shown in section 2.2 of [4]. It remains to show the convergence. We will show the convergence in the case ฯƒe โˆ’a > 1, z 0 0 > 0. The other cases are dealt with in a similar fashion, or are even simpler. We will prove the convergence by induction on the coordinates. Since the function F 0 (r) = ฯƒe โˆ’a r 0 (ฯƒ โˆ’ 1)r 0 + 1 is increasing, concave, and satisfies F 0 (ฯ * ) = ฯ * 0 , the sequence (z n 0 ) nโ‰ฅ0 is monotone and converges to ฯ * 0 . Let k โˆˆ { 1, . . . , K } and let us suppose that the following limit holds: By the induction hypothesis, there exists N โˆˆ N such that for all n โ‰ฅ N and i โˆˆ { 0, . . . , k โˆ’ 1 }, |z n i โˆ’ ฯ * i | < ฮต. We have then, for all n โ‰ฅ N and for all ฯ โˆˆ [0, 1], F (ฯ) โ‰ค F k (z n 0 , . . . , z n kโˆ’1 , ฯ) โ‰ค F (ฯ) . We define two sequences, (z n ) nโ‰ฅN and (z n ) nโ‰ฅN , by setting z N = z N = z N k and for n > N z n = F (z nโˆ’1 ) , z n = F (z nโˆ’1 ) . Thus, for all n โ‰ฅ N, we have z n โ‰ค z n k โ‰ค z n . Since F (ฯ) and F (ฯ) are affine functions, and for ฮต small enough their main coefficient is strictly smaller than 1, the sequences (z n ) nโ‰ฅN and (z n ) nโ‰ฅN converge to the fixed points of the functions F et F , which are given by: We let ฮต go to 0 and we see that which finishes the inductive step. Comparison with the master sequence In section 3, in order to build the bounding occupancy processes, we have fixed an integer K โ‰ฅ 0 and we have kept the relevant information about the dynamics of the occupancy numbers of the Hamming classes 0, . . . , K. Let us call (ฮ˜ โ„“ n ) nโ‰ฅ0 and (ฮ˜ 1 n ) nโ‰ฅ0 the lower and upper occupancy processes that are obtained for K = 0, and let us call, as before, (O โ„“ n ) nโ‰ฅ0 and (O K+1 n ) nโ‰ฅ0 the lower and upper occupancy processes corresponding to K > 0. Let us define the following stopping times: We have constructed the processes in such a way that they are all coupled and the following relations hold: if the four processes start from the same occupancy distribution o โˆˆ W * , then These inequalities are naturally inherited by the Markov chains derived from the occupancy processes; let (Z โ„“ n ) nโ‰ฅ0 and (Z K+1 n ) nโ‰ฅ0 be the Markov chains associated to the processes (O โ„“ n ) nโ‰ฅ0 and (O K+1 n ) nโ‰ฅ0 , as in the end of section 3.2. Likewise, let (Y โ„“ n ) nโ‰ฅ0 and (Y 1 n ) nโ‰ฅ0 be the Markov chains associated to the processes (ฮ˜ โ„“ n ) nโ‰ฅ0 and (ฮ˜ 1 n ) nโ‰ฅ0 . The state space of the Markov chains Let us define the following stopping times: Let z โˆˆ D be such that z 0 โ‰ฅ 1, let the Markov chains (Z โ„“ n ) nโ‰ฅ0 , (Z K+1 n ) nโ‰ฅ0 start from z, and let z 0 be the starting point of the Markov chains (Y โ„“ n ) nโ‰ฅ0 , (Y 1 n ) nโ‰ฅ0 . 
The inequalities between the occupancy processes translate to the associated Markov chains as follows: The occupancy processes (ฮ˜ โ„“ n ) nโ‰ฅ0 , (ฮ˜ 1 n ) nโ‰ฅ0 , along with the associated Markov chains (Y โ„“ n ) nโ‰ฅ0 , (Y 1 n ) nโ‰ฅ0 , have been studied in detail in [3]. Thanks to the relations just stated, we will be able to make use of many of the estimates derived in [3]. Let ฮธ be K + 1 or โ„“ and let us call V the cost function associated to the Markov chain (Y ฮธ n ) nโ‰ฅ0 . We will make use of the following results from [3]: โ€ข either s = t = 0, โ€ข or there exists l โ‰ฅ 1 such that t = F l (s), โ€ข or s = 0, t = ฯ * . Let ฯ„ (Y ฮธ ) be the first time that the Markov chain (Y ฮธ n ) nโ‰ฅ0 becomes null: Concentration near ฯ * We show next that, when ฯƒe โˆ’a > 1, asymptotically, the Markov chain (Z ฮธ n ) nโ‰ฅ0 concentrates in a neighbourhood of ฯ * . Let us loosely describe the strategy we will follow. The Markov chain (Z ฮธ n ) nโ‰ฅ0 is a perturbation of the dynamical system associated to the map F . The map F has two fixed points: 0 and ฯ * . The fixed point 0 is unstable, while ฯ * is a stable fixed point. The proof relies mainly on two different kind of estimates. We estimate first the typical time the process (Z ฮธ n ) nโ‰ฅ0 needs to leave a neighbourhood of the region { z โˆˆ D : z 0 = 0 }; since the instability at 0 concerns principally the dynamics of the master sequence, we will be able to make use of the estimates developed in [3] by means of the inequalities stated in section 4.3. We estimate then the time the process (Z ฮธ n ) nโ‰ฅ0 spends outside a neighbourhood of the region { z โˆˆ D : z 0 = 0 } and ฯ * . Since (Z ฮธ n ) nโ‰ฅ0 tends to follow the discrete trajectories given by the dynamical system associated to F , it cannot stay a long time outside such a neighbourhood. This fact will be proved with the help of the large deviations principle stated in the previous section. This estimate will help us to bound the number of excursions outside a neighbourhood of ฯ * , as well as the length of these excursions. We formalise these ideas in the rest of the section. In order to simplify the notation, from now on we omit the superscript ฮธ and we denote by P z and E z the probabilities and expectations for the Markov chain (Z n ) nโ‰ฅ0 starting from z โˆˆ D. Lemma 4.7. For all ฮด > 0, there exists c > 0, depending on ฮด, such that, asymptotically, for all z โˆˆ D such that z 0 โ‰ฅ 1, we have Proof. Let (Y โ„“ n ) nโ‰ฅ0 be the Markov chain defined in section 4.3. Let ฯ„ (Y โ„“ ) be the first time that the process (Y โ„“ n ) nโ‰ฅ0 becomes null: By the remarks in section 4.3 we can see that As shown in lemma 7.7 of [3], this last probability is bounded from below by 1/m c ln m , which gives the desired result. Lemma 4.8. For all ฮด > 0, there exist h โ‰ฅ 1 and c > 0, depending on ฮด, such that, asymptotically, for all r โˆˆ D such that r 0 โ‰ฅ ฮด, we have Proof. Let ฮด > 0 and let us define the set For each r โˆˆ K there exists an integer h r โ‰ฅ 0 such that F hr (r) โˆˆ U(ฯ * , ฮด/4). By continuity of the map F , for each r โˆˆ K there exist also positive numbers ฮด r 0 , . . . , ฮด r hr such that ฮด r 0 , . . . , ฮด r hr < ฮด/2 and The family { U(r, ฮด r 0 ) : r โˆˆ K } is an open cover of the set K; since K is a compact set, we can extract a finite subcover, i.e., there exist N โˆˆ N and r 1 , . . . , r N โˆˆ K such that Let us set h = max{ h r i : 1 โ‰ค i โ‰ค N }, For n โˆˆ { 1, . . . , N } we take ฮด rn hr n +1 , . . . 
, ฮด rn h to be positive numbers such that, as before, Let us define We have then, for any r โˆˆ K, Passing to the complementary event, We For all r โˆˆ U kโˆ’1 , we have F (r) โˆˆ U k , the previous infimum is thus strictly positive. Since h is fixed, we conclude that lim sup โ„“,mโ†’โˆž, qโ†’0 โ„“qโ†’a which finishes the proof of the lemma. Corollary 4.9. Let ฮด > 0. There exist h โ‰ฅ 1, c โ‰ฅ 0, depending on ฮด, such that, asymptotically, for all r โˆˆ D \ (D ฮด โˆช U(ฯ * , ฮด)) and for all n โ‰ฅ 0, we have The proof is carried out by dividing the interval { 0, . . . , n } in subintervals of length h and using the estimate of lemma 4.8 on each of the subintervals. We will not write the details, which can be found in the proof of corollary 7.10 of [3]. We define next a sequence of stopping times in order to control the excursions of the Markov chain (Z n ) nโ‰ฅ0 outside U(ฯ * , ฮด). We take T 0 = 0 and 2ฮด) . . . . . . We have then Taking the absolute value, we obtain We need to control the sum on the right hand side. Let us define, for n โ‰ฅ 0, We can now rewrite the sum as follows: Let ฮท > 0 and let us take t ฮท m as in proposition 7.11 of [3]: where V is the cost function governing the dynamics of the master sequence, as defined in section 4.3. We decompose the previous sum as follows: Let z 0 โˆˆ D such that z 0 0 โ‰ฅ 1. Since the estimates are the same for every starting point, we do not write the starting point in what follows: the probabilities and expectation are all taken with respect to the initial condition Z 0 = z 0 , unless otherwise stated. We take the expectation in the previous inequalities and we obtain Thanks to the estimates developed in section 7.3 of [3], we know that lim mโ†’โˆž E(1 ฯ„ 0 >t ฮท m ฯ„ 0 ) = 0 . We estimate next the term We have, by the Cauchy-Schwarz inequality, If 1 โ‰ค k โ‰ค N(ฯ„ 0 ), then T kโˆ’1 < ฯ„ 0 and Z T kโˆ’1 (0) > 0. Thanks to the Markov property, Again, the proof is done by dividing the interval { 0, . . . , n(โŒŠc ln mโŒ‹ + h) } in subintervals of length โŒŠc ln mโŒ‹ + h and using the estimates of lemma 4.12 on each subinterval, as in corollary 7.10 of [3]. Thanks to corollary 4.13, asymptotically, for all z โˆˆ D K such that z 0 โ‰ฅ 1, Let us set ฮฑ = 1 โˆ’ 1 2m c ln m , t = โŒŠc ln mโŒ‹ + h . We have: Therefore, asymptotically, for all z โˆˆ D K such that z 0 โ‰ฅ 1, The last inequality holds since k! โ‰ฅ (k/e) k . We choose ฮท such that 0 < ฮท < c/3. Thanks to the preceding inequality, lim sup โ„“,mโ†’โˆž, qโ†’0 โ„“qโ†’a Theses estimates, along with the result of proposition 4.6, imply that which concludes the proof of proposition 4.10. Synthesis The first statement of theorem 2.1 is proved in [3] for the case of the master sequence, K = 0. The proof for the case K โ‰ฅ 1 does not involve any new arguments or ideas for a better understanding of the model; it is a straightforward generalisation of the proof for the case K = 0. Thus we deal only with the second statement of theorem 2.1. Let us suppose that ฮฑฯˆ(a) > ln ฮบ. As shown in [3], the following estimates hold: Thus, since we are studying the case ฮฑฯˆ(a) > ln ฮบ, lim โ„“,mโ†’โˆž, qโ†’0 โ„“qโ†’a, m โ„“ โ†’ฮฑ On one hand, g being a bounded function, the above identity readily implies that lim โ„“,mโ†’โˆž, qโ†’0 โ„“qโ†’a, m โ„“ โ†’ฮฑ On the other hand, using proposition 4.10, we see that = g(ฯ * 0 + ยท ยท ยท + ฯ * K ) . Reporting back in the formula at the very end of section 3.3, we conclude the proof of theorem 2.1.
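As a small numerical companion to Proposition 4.4, the sketch below iterates the first coordinate of the map F, whose explicit form F_0(r_0) = σe^(−a) r_0 / ((σ−1) r_0 + 1) appears in the proof above, and compares the limit with the nonzero fixed point obtained by solving F_0(r_0) = r_0, namely ρ*_0 = (σe^(−a) − 1)/(σ − 1). The closed form for ρ*_0 is derived here from that fixed-point equation and should be read as a consequence of the stated map, not as a formula quoted from the text.

```python
# Fixed-point check for the one-dimensional map F_0 from the proof of Proposition 4.4.
import math

def F0(r0, sigma, a):
    return sigma * math.exp(-a) * r0 / ((sigma - 1.0) * r0 + 1.0)

def iterate(sigma, a, z0=0.5, n=200):
    z = z0
    for _ in range(n):
        z = F0(z, sigma, a)
    return z

for sigma, a in [(4.0, 0.6), (1.2, 1.0)]:        # sigma*e^(-a) > 1 and <= 1, respectively
    limit = iterate(sigma, a)
    rho0 = max((sigma * math.exp(-a) - 1.0) / (sigma - 1.0), 0.0)
    print(f"sigma={sigma}, a={a}: iterate -> {limit:.4f}, predicted rho*_0 = {rho0:.4f}")
```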
Detailed Assessment of Modulation Strategies for Hexverter-Based Modular Multilevel Converters: Modular multilevel converters are playing a key role in the present and future development of topologies for medium-to-high-power applications. Within this category of power converters there is a direct AC-AC modular multilevel converter called the "Hexverter", which is well suited to connecting three-phase AC systems operating at different frequencies. This topology is the subject of study in this manuscript. The complete Hexverter system is composed of a Hexverter power converter and several control layers, namely a "virtual V²C controller", a branch current controller in a two-frequency dq reference frame, a modulator, and a voltage balancing algorithm. The paper presents a thorough description and analysis of the entire Hexverter system, providing research contributions in three key aspects: (i) modeling and control in a unified two-frequency dq framework; (ii) the development of a "virtual V²C controller" that dynamically accounts for the Hexverter's active power losses, allowing active power balance to be achieved on the fly; and (iii) a comparative evaluation of modulation strategies (nearest level control and phase disposition-sinusoidal pulse width modulation). To this end, a detailed switched simulation was implemented in the PSCAD/EMTDC software platform. The proposed "virtual V²C controller" is evaluated through the measurement of its settling time and the calculation of active power losses. Each modulation technique is assessed through the total harmonic distortion and frequency spectrum of the synthesized three-phase voltages and currents. The results obtained suggest that the control scheme is able to properly regulate the Hexverter system under both modulation strategies. Furthermore, the "virtual V²C controller" is able to accurately determine the active power loss, which allows the efficiency of the modulation strategies to be assessed. The nearest level control technique yielded superior efficiency.

Introduction

Modular multilevel converters (MMCs) have been a trending research topic in recent years and will continue to be so in the near future. MMCs can be used to process electrical power where two- or three-level power converters are used today. This is essentially due to multiple advantages, such as (i) inherent fault tolerance, sometimes called redundancy: a faulty module can be bypassed without affecting the converter operation; (ii) applicability at medium and high power levels; (iii) high scalability: the maximum/minimum voltage can easily be modified by increasing/reducing the number of power modules; (iv) better quality of the output power; and (v) a comparatively low switching frequency. Conversely, a drawback of these topologies is that the design of the controller system becomes increasingly challenging as the number of levels grows. Among the multilevel topologies available in the literature, there is one suitable for connecting two different three-phase AC systems, in particular when these AC systems run at two different frequencies. It is called the Hexverter, and it was first introduced in 2010 [1].
Since then, a number of control approaches, including its current control, have been reported. The contributions of this manuscript are the following:
• Hexverter modeling and control in a unified two-frequency dq framework;
• The proposal and evaluation of a "virtual V²C controller" to dynamically account for the Hexverter's active power losses, allowing one to achieve active power balance on the fly;
• A detailed assessment of modulation strategies through the total harmonic distortion of the synthesized voltages and currents.
This manuscript is organized as follows. The Hexverter principle of operation is presented in Section 2. The modeling and control approach in a unified two-frequency dq framework is described in depth in Section 3. The modulation strategies NLC and PD-SPWM are thoroughly described in Section 4. The proposed "virtual V²C controller" is presented and derived in Section 5. The integration of the Hexverter-based system is shown in Section 6. Simulation results for the synthesized voltages and currents and the performance of the voltage balancing algorithms are discussed in Section 7. Similarly, the active power losses obtained by the "virtual V²C controller" are discussed in Section 8. In addition, a detailed assessment of the spectrum and harmonic content of the synthesized voltages and currents is presented in Section 9. Finally, conclusions are summarized in Section 10.

Hexverter Topology

Typically, the Hexverter is the interface used to connect two different AC three-phase systems operating at two different frequencies, e.g., the supply three-phase grid and a three-phase electrical machine. Its topology is depicted in Figure 1. In contrast to a back-to-back configuration of an AC-DC-AC modular multilevel power converter, the Hexverter has no central DC link. Furthermore, it features six identical branches forming a hexagonal ring. Since this topology requires the SMs to synthesize both positive and negative voltages, each branch consists of n identical H-bridge (full-bridge) power modules connected in series with a branch inductor L_b and a branch resistor R_b. Two phases of system {abc} are connected by two branches to a single phase of system {123}. As depicted in Figure 1, all connected branches form a loop that allows a circulating current i_cir to flow; this current flows through all branches m and is determined by (1). As depicted in Figure 1, the AC phase voltages of the Hexverter are not referenced to ground: the phase voltages of system {abc} are referenced to the phase voltages of system {123}, and vice versa. In addition, a voltage difference v_og between both star-point potentials is set. On the one hand, considering positive sequence for the supply and the load of balanced AC three-phase systems, respectively, the single-phase voltages {a} and {1} are fully described by (2)-(4), with v_a = v̂_abc cos(ω_abc t). The voltages of system {123} have a phase difference of θ_v123−abc with respect to system {abc} at t = 0 and p times its voltage magnitude. On the other hand, the sinusoidal currents of phases {a} and {1} are characterized by (5) and (6). Taking into account Equations (1)-(6), the voltages and currents of branch m (m ∈ {1, 2, . . . , 6}) are characterized by (7) and (8).

Modeling and Control Approach in a Unified Two-Frequency dq Framework

Recall that, by controlling the six branch voltages, i_cir can be adjusted either (i) in dynamic operation, to accomplish a given energy adjustment in all branches, or (ii) in steady-state operation, by keeping i_cir at its minimum value [4].
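As a quick illustration of the two-frequency character of the branch quantities noted in (7) and (8), the sketch below builds one phase voltage of each system and an illustrative branch waveform, then checks its spectrum. The magnitudes, frequencies, ratio p and phase shift are hypothetical placeholders, and the 50/50 combination is not Equation (7) of the paper, only a stand-in to show that both frequency components appear.

```python
import numpy as np

f_abc, f_123 = 50.0, 60.0          # Hz, hypothetical frequencies of {abc} and {123}
V_abc = 1000.0                     # peak phase voltage of system {abc} (V), hypothetical
p = 0.8                            # magnitude ratio of system {123}, hypothetical
theta = np.deg2rad(20.0)           # phase shift of {123} w.r.t. {abc} at t = 0

t = np.arange(0.0, 0.1, 1.0e-5)    # 0.1 s window sampled at 100 kHz
v_a = V_abc * np.cos(2 * np.pi * f_abc * t)                 # phase a of {abc}
v_1 = p * V_abc * np.cos(2 * np.pi * f_123 * t + theta)     # phase 1 of {123}

# A branch links one phase of each system, so any branch quantity carries both
# frequency components; this weighting is illustrative only.
v_branch = 0.5 * v_a - 0.5 * v_1

amps = np.abs(np.fft.rfft(v_branch)) / len(t)
freqs = np.fft.rfftfreq(len(t), d=1.0e-5)
print("dominant components (Hz):", freqs[amps > 0.1 * amps.max()])  # 50 and 60 Hz
```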
Reference [3] reported that branch power transfer between adjacent branches P adj is function of the difference between reactive power of both AC three-phase systems. Two options to deal with this issue are listed below: โ€ข (i) Using the "adjacent power" adjustment approach (9): (Q abc โˆ’ Q 123 )or; โ€ข (ii) Adjusting both reactive powers to the exact same value. In this manuscript, both reactive power references are set equal to zero and the Hexverter-based system is operating in steady state. Hexverter Frequency Components {abc} From the circuit depicted in Figure 1, considering the superposition principle, only frequency components of the AC three-phase system {abc} are evaluated. State-Space Equations {abc} Side Performing some mathematical derivations, the next two differential equations are obtained: Moreover, definition of the well-known cosine-based Park transformation matrix T dq xyz is as follows: (12): Following similar steps, i abc,dq b246 is determined by (13): Hexverter Frequency Components {123} In this case, only frequency components of AC three-phase system {123} are considered. State-Space Equations {123} Side Next, by performing some mathematical manipulations, two differential equations are derived: Control Approach From (12), the next differential equations are obtained: making use of a change of variable as follows: two independent and decoupled equations are obtained, which stand for dq components of branch currents {135} at frequency {abc}; those are described by (21): Similar mathematical manipulations can be conducted with Equations (13), (15) and (16) }. This set of equations is an equivalent and decoupled representation of the former set of differential equations that can be managed and transformed into the Laplace domain. Afterwards, by applying techniques from [16], a suitable control scheme in a unified two-frequency dq framework for a Hexverter-based system is elaborated. indicates dq components of currents flowing through branches {135} at frequency {abc}. The same notation applies for the rest of the state variables. Each branch current controller outputs a three-phase modulating signal labeled as m abc b135 , m abc b246 , m 123 b135 , and m 123 b246 . These signals are then de-multiplexed and recombined as follows: m b1 = m abc b1 + m 123 b1 , m b2 = m abc b2 + m 123 b2 , . . . , m b6 = m abc b6 + m 123 b6 . Afterwards, these signals are augmented by the reference branch voltage nV * C , generating reference branch voltages v * bsm , which are suitable inputs for the modulator. A general schematic of a single branch current controller is depicted in Figure 2. From it, {xyz} stands for frequency {abc} or {123}, respectively. Figure 2 shows two main subsystems marked as "power-to-current" and "branch current control". As depicted, the branch current's error is driven to zero through a decoupled PI compensator. Modulation Strategies This section is devoted to describe in depth two modulation techniques that will further be assessed when implemented into the Hexverter-based system. Nearest Level Control Some of the features that make NLC an attractive option to modulate a modular multilevel converter are its (i) comparatively low switching frequency, (ii) it is simple to implement, and (iii) it is remarkably suitable for a power converter that require a large number of levels [18][19][20]. Notice the objective of NLC is to determine "how many" SMs per branch n m are going to be connected/bypassed at any given time. 
A detailed diagram depicting the implementation of NLC is shown in Figure 3. First, branch reference voltage v * bsm , containing two frequency components ( f abc , f 123 ), is the input. Right after, it is divided by a SM reference voltage V * C and rounded. Then, variable n m indicating the number of SMs to be inserted/bypass for each branch m is obtained. In the end, in regard to positive or negative values of n m , variables n upm and n downm are calculated. Recalling, each full-bridge submodule contains a capacitor that is set to a reference voltage denoted as V * C . Since each submodule will switch to synthesize AC voltage on its terminals, voltage variations of V C will occur. Therefore, with the objective to minimize V C fluctuations of each submodule, a voltage-balancing algorithm (VBA) utilized by NLC is shown in Figure 4. The sorting process is performed by the use of the merge-sort algorithm, which is an efficient, general-purpose and comparison-based sorting algorithm. It was proposed in 1945 by John von Neumann [21]. As illustrated, the inputs are (i) the number of SMs n upm , (ii) measurements of capacitors' voltage comprising each branch V Cim , and (iii) measurements of currents flowing through each branch i bm . Thus, "which" of the submodules required to be inserted/bypassed for each Hexverter branch when synthesizing positive semi-cycles can be determined. When variable n upm is substituted by n downm in Figure 4,"which" of the SMs required to be connected/bypassed for each Hexverter branch are known. The performance evaluation of NLC VBA is discussed in Section 7.1. Phase Disposition-Sinusoidal Pulse Width Modulation PD-SPWM is an extended version of the standard pulse width modulation strategy. In this case, n number of triangular waveforms [v k =carriers] are employed, shown in Figure 5. Each carrier has an amplitude of |v k | = โˆ’1 + 2kโˆ’1 n , where k โˆˆ {1, 2, . . . , n}. As it can be seen, each carrier features a symmetrical offset with respect to the horizontal zero-axis. These carriers, when compared to a provided sinusoidal reference ยฑv * bsm , are employed to specifically compute the number of series connected H-bridge SMs to be connected/bypassed at any given time. Variable n upm stores values when +v * bsm is used, whereas variable n downm stores values when โˆ’v * bsm is utilized. A flowchart describing the process to determine n upm is shown in Figure 6. The next step is to determine "which" of the SMs will be connected at any given time; to this end, a PD-SPWM VBA is implemented. It is depicted in Figure 7. The sorting process is performed by the use of the merge-sort algorithm. As shown, the inputs are (i) the number of SMs n upm , (ii) the measurements of capacitors' voltage of each branch V Cim , (iii) the measurements of currents flowing through each branch i bm , and (iv) the trigger signals of each branch PD โˆ’ SPWM upm . The internal process of the PD-SPWM VBA is illustrated in Figure 7, where the output is a set of switching signals for each SM comprising any of the Hexverter' branches. If variable +v * bsm is replaced by โˆ’v * bsm in Figure 6, variable n downm and trigger signals PD โˆ’ SPWM downm are calculated. Additionally, by plugging those in Figure 7, instead of n upm and PD โˆ’ SPWM upm , switching signals for each submodule forming a Hexverter branch m are obtained. The performance assessment of PD-SPWM VBA is presented in Each carrier features a switching frequency of f sw = 5 kHz. 
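The following sketch condenses the NLC step and the sorting-based balancing step described above into a few lines of Python. The number of SMs, the capacitor voltages and the branch current are invented values, and Python's built-in sort stands in for the merge sort used by the authors; the set of selected SMs is the same either way.

```python
import numpy as np

def nlc_level(v_ref: float, v_c_ref: float) -> int:
    """Nearest level control: how many SMs of one branch to insert
    (the sign gives the polarity synthesized by the full-bridge SMs)."""
    return int(round(v_ref / v_c_ref))

def balance(n_insert: int, v_caps: np.ndarray, i_branch: float) -> np.ndarray:
    """Sorting-based voltage balancing: choose which SMs to insert."""
    gates = np.zeros(len(v_caps), dtype=int)
    if n_insert <= 0:
        return gates
    order = np.argsort(v_caps)            # capacitor voltages, ascending
    if i_branch > 0:                      # current charges the inserted SMs
        chosen = order[:n_insert]         # insert the least charged ones
    else:                                 # current discharges them
        chosen = order[-n_insert:]        # insert the most charged ones
    gates[chosen] = 1
    return gates

# Hypothetical branch: 8 SMs, V*_C = 180 V, instantaneous reference of 560 V.
v_caps = np.array([178.0, 181.5, 179.2, 182.0, 180.3, 177.6, 180.9, 179.8])
n_m = nlc_level(560.0, 180.0)             # -> 3 SMs, positive polarity
print("SMs to insert:", n_m)
print("gating pattern:", balance(abs(n_m), v_caps, i_branch=12.0))
```

Running the sketch inserts the three least-charged submodules while the branch current is positive, which is the mechanism that keeps the capacitor voltages clustered around V*_C.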
However, on average, each SM will switch at f sw n = 334 Hz per hyper-period. Be aware that a hyper-period is defined as T h = 1/gcd( f abc , f 123 ). Proposed "Virtual V 2 C Controller" In order for the Hexverter-based system to perform properly, active and reactive power references (P * abc , Q * abc , P * 123 , and Q * 123 ) must be provided. However, as depicted in Figure 10, active power reference P * 123 is dependent of the Hexverter's active power losses (โˆ†P). In general, โˆ†P is composed of (i) energy variations in the elements storing energy (P C and P L ), (ii) active power losses due to the switching (P sw ) and conduction (P cond ) of semiconductors, and (iii) active power losses due to parasitic effects of the Hexverter' elements which are typically modeled as resistors dissipating power (P R ). In this research, an approach to determine active power losses "โˆ†P" is studied and proposed. The main objectives are (i) to achieve active power balance on the fly of the Hexverter-based system and (ii) to keep the submodules' capacitor voltage as close as possible to the given reference, so that almost all the incoming power can be transferred into the load. As shown earlier in this document, the Hexverter topology does not feature a real DC link between the connection of two AC three-phase systems; however, a "virtual DC link" can be modeled by calculating an average DC voltage per submodule of each Hexverter' branch. The DC voltage provided as the reference V * C , which in turn is the initial voltage over each full-bridge submodule before starting the operation of the Hexverter system, can be calculated as follows: Considering only elements storing energy inside the Hexverter system and ideal behavior of the power converter (P sw = 0 ) and (P cond = 0), a general figure of the Hexverter system is shown in Figure 8. By the use of Poynting's theorem, Equation (23) is derived: Hexverter system P abc Since the energy stored over the branch inductors is relatively low in comparison to the energy stored in the capacitors of each submodule, and the active power dissipated by the P R term will add a DC offset, the active power losses "โˆ†P" can be estimated by considering the rate of change in the energy stored in the capacitors only. In other words, Equation (23) becomes: Specifically, an approximation to determine "โˆ†P" is described by: In this work, this fact is used in order to compute the Hexverter's active power losses "โˆ†P". Furthermore, this will achieve an active power balance of the Hexverter-based system on the fly. A general scheme of the so-called "virtual V 2 C controller" is shown in Figure 9. To validate the performance of the proposed "Virtual V 2 C controller" under different scenarios, Test Case I and Test Case II are developed. In Test Case I, the Hexverter-based system is considered to function keeping ideal behavior, in the sense that P sw and P cond are both equal to zero. By contrast, in Test Case II, a more realistic scenario of the Hexverterbased system is assessed when P T sw and P T cond of the IGBT's and P D cond of diodes are taken into account. As described earlier, the calculation of โˆ†P is a necessary condition to compute the active power (P * 123 ) reference value for the {123} side. Once P * 123 and Q * 123 are entered into the "power-to-current" subsystem depicted in Figure 2a Hexverter-Based System Integration A general schematic of the Hexverter-based system is portrayed in Figure 10. 
It shows the integration of the subsystems' "virtual V 2 C controller" Figure 9; the branch current controllers in a unified dq framework in Figure 2a Figure 1. Initially, the active and reactive power references of the {abc} side (P * abc and Q * abc ) are necessary operational inputs to the "power-to-current" subsystem depicted in Figure 2a, which, in turn, output reference values of branch currents i abc,dq * b135 and i abc,dq * b246 , respectively. Then, โˆ†P obtained from the "virtual V 2 C controller" is subtracted to P * abc to determine active power reference P * 123 . Based on operational conditions reactive power reference (Q * 123 ) is set. These power references are fed into the "power-to-current" subsystem depicted in Figure 2a, outputting reference values of branch currents i 123,dq * b135 and i 123,dq * b246 , respectively. Once x * is complete, it is compared against proper measurements, and its error is fed into the branch current controller shown in Figure 2b. Modulation indices m bm , which are outputs of the branch current controllers, become inputs for a modulator, either NLC or PD-SPWM, see Figure 3 or Figure 6. According to the selected modulation strategy, the modulator outputs the number of submodules to be connected (n m ) at any given time. This is the input for the VBA that in turn generates switching signals for each power submodule comprising each Hexverter branch. Simulation Results Detailed simulations are implemented into the software platform PSCAD/EMTDC [22]. The objective is to verify the operation and performance of the Hexverter power converter under the application of modulations techniques NLC and PD-SPWM. The reader is referred to Table 1, where simulation parameters are listed. Meanwhile, an experimental prototype is being built in the author's laboratory. NLC Simulation Results Recalling that both three-phase systems are labeled as {abc} or {123}, Figure 11 shows the top two sub-figures depicting waveforms corresponding to AC voltages v abc and v 123 . Comparing the provided simulation parameters, both AC voltages show good match in magnitude, frequency, and phase. In addition, at the bottom of Figure 11, two more waveforms are presented. In one hand, v bs1 corresponds to synthesized branch voltage utilizing NLC modulation technique. As observed, it features typical "discrete steps or levels", indicating NLC has been precisely implemented. Moreover, a voltage magnitude nearly of 2 kV can be measured. On the other hand, i b1 depicts current that flows through branch one. A current magnitude close to 10 Amperes is shown. Furthermore, by carefully observing traces of voltage v bs1 and current i b1 , it can be realized that they feature both frequency components ( f abc , f 123 ) of the connected AC three-phase systems. In summary, it can be mentioned that both v bs1 and i b1 are fully compliant to Equations (7) and (8). With regard to the performance of the so-called NLC VBA, Figure 12 illustrates n traces that correspond to measurements of controlled capacitor's voltage. Based on the reported results, it can be stated that NLC VBA is controlling n voltages between a reasonable range of V Ci1 = ยฑ2.5 V. This variation is approximately equal to 1.5% average error in comparison to V * C . NLC VBA achieves a steady-state in about 200 ms. PD-SPWM Simulation Results The top two sub-figures of Figure 13 depict waveforms corresponding to AC voltages v abc and v 123 , respectively. 
These are compliant with the provided simulation parameters due to the fact that a good match in magnitude, frequencies, and phase is observed. Moreover, the bottom two sub-figures depict waveforms of v bs1 and i b1 , respectively. With respect to v bs1 , it shows a peak voltage of approximately 2 kV. Its trace shows typical on and off switching over the "levels" indicating PD-SPWM has been adequately implemented into the simulation. Current flowing through branch one is shown by trace i b1 . As expected, a peak value of about 10 Amperes can be measured. Consistent with Equations (7) and (8), v bs1 and i b1 contain both frequency components ( f abc , f 123 ) of the connected AC threephase systems. NLC and PD-SPWM Discussion of Results Regarding the implementation of NLC and PD-SPWM into the Hexverter-based multilevel converter and based on simulation results shown from Figures 11-14, three main points can be mentioned. (i) Since, with the naked eye, almost no difference can be observed in both synthesized AC voltages and currents, it becomes necessary to analyze these waveforms in depth. Thus, in order to determine which modulation technique outperform the other in terms of its harmonic spectrum and total harmonic distortion, the reader is referred to Section 9. (ii) On one hand, branch v bs1 voltages show a small difference in the number of levels to synthesize the same Hexverter' terminal voltages; on the other hand, both branch currents i b1 are clearly different, it can be mentioned that the branch current out of PD-SPWM is more distorted than the one measured when the NLC modulation technique is utilized. (iii) A small difference of a 0.3% average error is measured when comparing both voltage balancing algorithms. Performance of "Virtual V 2 C Controller" In order to validate the performance of the "virtual V 2 C controller" two different scenarios are considered. Test Case I In this scenario, the Hexverter power converter is considered a lossless system. In other words, P sw and P cond are both equal to zero. Be aware that the parasitic effects of the Hexverter' reactive elements are modeled into the branch resistor R b . The performance of "virtual V 2 C control" under the NLC modulation technique is depicted in Figure 15. At the beginning of the trace, a transient behavior appears due to the rate of change in energy into the submodule's capacitor and branch inductors; nevertheless, under these transient conditions, the controller is able to achieve and provide a correct active power balance reference for the AC system {123}. As observed, the controller takes approximately 1.25 s to reach steady-state with a โˆ†P value of 184 W. By analyzing Equation (23), this value of โˆ†P parameter corresponds to a DC offset due to the embedded calculation of the P R term. In order to verify the correctness of the calculated โˆ†P value, the reader is referred to Table 1, where the active power reference of the AC system {abc} P * abc = 15 kVA is provided. In the same fashion, performance of "virtual V 2 C control" under PD-SPWM modulation technique is depicted in Figure 16. As expected, a transient behavior of โˆ†P trace appears at the beginning. However, one more time, the controller is able to achieve and provide correct active power balance reference for the AC system {123}. As it is depicted, the controller takes about 1.5 s to achieve steady state with a value of 202 W, that corresponds to a DC offset due to the embedded calculation of P R term. 
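A back-of-the-envelope version of the energy-based ΔP estimate of Section 5 can be written as below. The capacitance, the number of SMs and the capacitor voltages are invented placeholders; only the hyper-period definition T_h = 1/gcd(f_abc, f_123), the average switching frequency f_sw/n and the ΔE_C/T_h arithmetic mirror the text.

```python
from math import gcd

f_abc, f_123 = 50, 60                 # Hz (integers, so math.gcd applies); hypothetical
T_h = 1 / gcd(f_abc, f_123)           # hyper-period
print(f"T_h = {T_h:.2f} s")

f_sw, n = 5000, 15                    # carrier frequency and SMs per branch (assumed n)
print(f"average SM switching frequency ~ {f_sw / n:.0f} Hz")   # close to the quoted 334 Hz

C_sm = 4.0e-3                         # submodule capacitance (F), placeholder
n_sm_total = 6 * n                    # six branches
v_start = [179.5] * n_sm_total        # capacitor voltages at t (placeholders)
v_end   = [180.4] * n_sm_total        # capacitor voltages at t + T_h (placeholders)

energy = lambda vs: sum(0.5 * C_sm * v * v for v in vs)        # stored capacitor energy
delta_p = (energy(v_end) - energy(v_start)) / T_h              # rate of change over T_h
print(f"estimated delta-P ~ {delta_p:.0f} W")
```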
By comparing both โˆ†P parameter values out of both modulation techniques, an active power difference of 18 W is observed. It seems that by utilizing this simulation setup, active power losses are 18 W higher when PD-SPWM is utilized. Test Case II In this scenario, the Hexverter power converter is no longer considered a lossless system. The IGBT transistor part number IRG4BC30KDPbF is selected. Its main parameters are listed in Table 2, and it includes an ultrafast soft recovery diode connected in antiparallel. Since all branch reference voltages coming out of the implemented current controllers, submodules' DC link voltages, and branch current directions are known, the duty cycles for the IGBTs and diodes defined with variables (d T b,mi and d D b,mi ) can be determined [3]. Switching and conduction losses of a single transistor are defined by variables P T sw and P T cond , respectively. These can be estimated by Equations (26) and (27), respectively: Similarly, the conduction loss of a single diode is defined by variable P D cond and can be approximated by Equation (28): Bear in mind,ฤซ T andฤซ D are both equal to the magnitude of branch current |i b,m | ,and V Throw is equal to the given reference voltage for each submodule V * C . Furthermore, in order to determine the mean values for the active power losses, all calculations are performed over a hyper-period defined earlier as T h = 1/gcd( f abc , f 123 ). The functioning of the proposed "virtual V 2 C controller" under NLC modulation technique is depicted in Figure 17. It shows a transient behavior of the โˆ†P trace of approximately 1.25 s; nonetheless, the controller is able to reach steady state conditions with a value of 1.303 kW, while providing active power reference for the AC system {123}. As shown in Figure 18, this value of โˆ†P agrees with an active power reference P * 123 and its measurement. Furthermore, it can be stated that both traces of P * 123 and P 123 are in practice on top of each other. This indicates the performance of the "virtual V 2 C controller". Considering the efficiency equation defined by ฮท = P abc โˆ’โˆ†P P abc ร— 100, the Hexverter power converter seems to be performing at ฮท = 91.31 efficiency. This value obtained agrees with the studies regarding efficiency developed in [3]. Correspondingly, the performance assessment of "virtual V 2 C controller" under the PD-SPWM modulation strategy is shown in Figure 19. After the transient behavior of โˆ†P trace, the controller reaches steady-state in about 1.4 s, with a value of 1.317kW. At the same time, it provides active power reference for the three-phase AC system {123}. This value of โˆ†P agrees with active power reference P * 123 and its measurement, as it is illustrated in Figure 20. Furthermore, it can be stated that both traces of P * 123 and P 123 are practically attached to each other. This indicates the performance of the "virtual V 2 C controller". Under the above circumstances, the Hexverter-based system seems to perform with a value of ฮท = 91.22 efficiency. In summary, by comparing the results obtained from Test Cases I and II, the Hexverterbased system seems to be more efficient when the NLC modulation technique is utilized. 
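To show how a ΔP figure translates into the efficiency η = (P_abc − ΔP)/P_abc × 100 used above, here is a first-order loss tally. Equations (26)-(28) are not reproduced in this excerpt, so generic conduction (voltage drop × current × duty cycle) and switching (frequency × switching energy) approximations are used as stand-ins, and none of the device numbers below are taken from Table 2 or the IRG4BC30KDPbF datasheet.

```python
# All device data and duty cycles are placeholders; only the tally structure
# and the efficiency formula reflect the text above.
f_sw_avg = 334.0        # average switching frequency per SM (Hz)
e_sw = 1.2e-3           # IGBT turn-on + turn-off energy per switching event (J)
v_ce = 2.1              # IGBT on-state voltage drop (V)
v_f = 1.5               # diode forward voltage drop (V)
i_avg = 6.4             # average branch current magnitude (A)
d_T, d_D = 0.20, 0.10   # average conduction duty cycles of IGBT and diode
n_devices = 6 * 15 * 4  # branches x SMs per branch x switch positions per SM

p_T_sw = f_sw_avg * e_sw            # switching loss of one IGBT
p_T_cond = v_ce * i_avg * d_T       # conduction loss of one IGBT
p_D_cond = v_f * i_avg * d_D        # conduction loss of one diode
delta_p = n_devices * (p_T_sw + p_T_cond + p_D_cond)

p_abc = 15_000.0                    # active power of side {abc} (W)
eta = (p_abc - delta_p) / p_abc * 100.0
print(f"delta-P ~ {delta_p / 1000:.2f} kW, efficiency ~ {eta:.1f} %")
```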
Detailed Assessment of Harmonic Spectrum and Total Harmonic Distortion of Voltages and Currents The total harmonic distortion (THD) of any single phase waveform can be estimated by Equation (29): where h 1 accounts for the amplitude of the fundamental frequency, and h k stands for any harmonic's amplitude multiple of the fundamental frequency. Single-Phase Voltage THD Assessment under NLC Five cycles of single-phase voltage v a are depicted in the top-left section of Figure 21. This voltage is measured at Hexverter's terminals labeled as PCC abc (see Figure 1). Its frequency, magnitude, and phase are compliant with simulation parameters. Moreover, the harmonic content of v a is assessed and illustrated in the bottom-left of Figure 21. A set of 160 harmonics labeled as h a,k are shown. As expected, its magnitudes are monotonically decreasing as its harmonic order increases. Its THD is then calculated and equal to 2.21%. Harmonic's number 29th = 1450 Hz and 31th = 1550 Hz are the most representative, featuring a magnitude of approximately 0.30%. In the same fashion, five cycles of single-phase voltage v 1 are shown at the top-right section of Figure 21. This voltage is being measured at Hexverter's terminals labeled as PCC 123 (see Figure 1). By a simple inspection of Figure 21, v 1 looks more distorted compared to v a . This claim is consistent with the evaluation of v 1 harmonic content, which generates a value of 3.97% THD. A set of 160 harmonics labeled as h 1,k are shown in the bottom-right section of Figure 21. Notice that its magnitudes are monotonically decreasing as its harmonic order increases. In this case, harmonic's number 15th = 150 Hz, 79th = 790 Hz, and 129th = 1290 Hz are the most representative, they feature magnitudes of 0.7%, 0.5%, and 0.55%, respectively. Single-Phase Voltage THD Assessment under PD-SPWM Five cycles of single-phase voltage v a are shown in the top-left section of Figure 22. Correspondingly, the top-right section of the same figure shows five cycles of single-phase voltage v 1 . Both voltage waveforms were measured at each PCC, respectively. The harmonic distortion measurement of single phase voltage v a indicates a THD value = 2.50%. It is 13.2% higher in comparison to the THD value obtained by NLC modulation. Similarly, a THD value of v 1 equal to 3.98% is calculated. This former number indicates that, independently of the modulation technique, almost no difference regarding the THD value of v 1 can be observed. Be aware that a harmonic number 15th = 150 Hz, feature the highest magnitude (0.7% and 0.6%, respectively) under both modulation techniques. Single-Phase Current THD Assessment under NLC Single-phase current i a is depicted in the top-left section of Figure 23. This current shows an amplitude of 10 Amperes, which, in turn, is compliant with simulation parameters. Its harmonic content is evaluated and shown in bottom-left section of Figure 23. The THD calculation indicates a number equal to 2.27%. Low-order harmonics, less than 3000 Hz, are the most representative featuring a highest magnitude of 0.46%. A similar trace is shown to indicate i 1 in the top-right section of Figure 23. As expected, this current feature an amplitude of nearly 10 Amperes. Its harmonic content corresponds to a value of 2.94%. Harmonics number 7th, 17th and 35th are the more representative ones, featuring magnitudes fairly close to 0.7%. Single-Phase Current THD Assessment under PD-SPWM Single-phase current i a is depicted in the top-left section of Figure 24. 
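Equation (29) can be evaluated directly from a sampled waveform. The sketch below does so with NumPy on a synthetic 50 Hz signal containing two known harmonics; all parameters are illustrative, and the 160-harmonic cap only mirrors the number of harmonics reported in the assessment above.

```python
import numpy as np

def thd(signal: np.ndarray, fs: float, f1: float, n_harmonics: int = 160) -> float:
    """THD per Eq. (29): sqrt(sum of squared harmonic amplitudes) / h_1.
    The signal must contain an integer number of fundamental cycles."""
    n = len(signal)
    amps = np.abs(np.fft.rfft(signal)) * 2.0 / n   # single-sided amplitudes
    k1 = int(round(f1 * n / fs))                   # FFT bin of the fundamental
    h = [amps[k * k1] for k in range(1, n_harmonics + 1) if k * k1 < len(amps)]
    return float(np.sqrt(np.sum(np.square(h[1:]))) / h[0])

# Synthetic check: 50 Hz fundamental plus 2.0 % fifth and 1.5 % seventh harmonics.
fs, f1 = 10_000.0, 50.0
t = np.arange(0.0, 0.2, 1.0 / fs)                  # exactly ten cycles
v = (np.cos(2 * np.pi * f1 * t)
     + 0.020 * np.cos(2 * np.pi * 5 * f1 * t)
     + 0.015 * np.cos(2 * np.pi * 7 * f1 * t))
print(f"THD = {100 * thd(v, fs, f1):.2f} %")       # expected: 2.50 %
```

The same function applied to the simulated v_a, v_1, i_a and i_1 traces would reproduce the kind of figures quoted in this section.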
Likewise, the top-right section of Figure 24 shows a single-phase current i 1 . Both traces were measured at each PCC. The harmonic distortion computation of single-phase currents indicate THD values of 3.19% and 4.52%, respectively. THD values of PD-SPWM are higher than NLC in 40.6% and 54% each. Be aware that i 1 harmonic number 5th = 50 Hz features the highest amplitude of 2.3% under PD-SPWM. In summary, all the THD values obtained out of the PSCAD/EMTDC simulations are compliant with international standards IEEE 519 [13] and IEC61000-3-2 [14]. Moreover, it seems that the NLC modulation strategy outperforms the PD-SPWM modulation technique when the THD of synthesized waveforms is considered. In future work, some THD values can be reduced to a minimum by properly implementing modulation techniques such as harmonic elimination and selective harmonic elimination. Conclusions In this manuscript, the operational principle of the direct AC-AC multilevel power converter "Hexverter" was presented. The subsystems (i) branch current controller performing in a unified two-frequency dq framework and (ii) a proposed "virtual V 2 C controller" were integrated to a power converter setup composed of (a) a modulator, (b) a voltage balancing algorithm, and (c) the Hexverter system. The results obtained suggest that the control scheme is able to regulate the Hexverter-based system under both modulation strategies. Moreover, an assessment of total harmonic distortion of AC three-phase voltages and currents was thoroughly developed. It seems that the NLC modulation strategy outperforms the PD-SPWM modulation technique. For instance, the THD of v a is 13.2% higher under PD-SPWM than under NLC. Likewise, THD of i 1 is 54% higher under PD-SPWM than under NLC. Be aware, all the THD values obtained out of PSCAD/EMTDC simulations are compliant with international standards IEEE 519 [13] and IEC61000-3-2 [14]. Validations of proposed "virtual V 2 C control" were presented. According to the results obtained, the "virtual V 2 C controller" was able to accurately determine the active power loss of the Hexverter-based system. Furthermore, by assessing the โˆ†P values of both modulation techniques and under the scenarios of Test Case I and II, the nearest-level control technique yielded superior efficiency. The experimental validation of the analysis presented herein is currently under investigation. The results will be published as they become available. Acknowledgments: The authors gratefully acknowledge the support of both Consejo Nacional de Ciencia y Tecnologรญa (CONACYT), a Mรฉxico's government entity in charge of the promotion of scientific and technological activities, and Universidad Panamericana Campus Guadalajara, in Zapopan, Jalisco, Mรฉxico. Conflicts of Interest: The authors declare no conflict of interest. Abbreviations The following abbreviations are used in this manuscript:
The Complex Regulation of HIC (Human I-mfa Domain Containing Protein) Expression

Human I-mfa domain containing protein (HIC) differentially regulates transcription from viral promoters. HIC affects the Wnt pathway, the JNK/SAPK pathway and the activity of positive transcription elongation factor-b (P-TEFb). Studies exploring HIC function in mammalian cells have used ectopically expressed HIC, because endogenous HIC protein has not been detected. HIC mRNA contains exceptionally long 5′ and 3′ untranslated regions (UTRs) compared to the average length of mRNA UTRs. Here we show that the HIC protein is subject to strict repression at multiple levels. The HIC mRNA UTRs reduce the expression of HIC or of a reporter protein: the HIC 3′-UTR decreases both HIC and reporter mRNA levels, whereas upstream open reading frames located in the 5′-UTR repress the translation of HIC or of the reporter protein. In addition, ectopically expressed HIC protein is degraded by the proteasome, with a half-life of approximately 1 h, suggesting that, upon activation, HIC expression in cells may be transient. The strict regulation of HIC expression at the levels of mRNA stability, translation efficiency and protein stability suggests that expression of the HIC protein, and its involvement in the various pathways, is required only under specific cellular conditions.

Introduction

The C-terminal region of HIC contains an 81 amino acid domain, which shares 77% identity and 81% similarity with the cysteine-rich C-terminal domain of the protein I-mfa; hence the name HIC, for Human I-mfa domain Containing protein. I-mfa (Inhibitor of MyoD Family A) inhibits the MyoD family of myogenic transcription factors [1], the Mash2 transcription factor involved in trophoblast differentiation, and TCF3 [2] and LEF1 [3], mediators of the Wnt pathway. Despite the high homology between HIC and I-mfa, they appear to have different functions. HIC was first identified as a protein that differentially regulates Tat-mediated and Tax-mediated expression of the human T-cell leukemia virus type I long terminal repeat (HTLV-I LTR) and the human immunodeficiency virus type I long terminal repeat (HIV-I LTR) [4][5][6]. HIC has also been reported to affect the Wnt pathway [3], the JNK/SAPK pathway [3] and the activity of positive transcription elongation factor-b (P-TEFb) [4,7,8]. Cigognini et al. recently studied chromosome 7 deletions in myeloid disorders [9]: 27% of acute myeloid leukemia (AML) and myelodysplastic syndrome (MDS) patients presented a chromosome 7 abnormality. The marker that showed the most frequent loss of heterozygosity is adjacent to HIC; hence, HIC has been proposed as a candidate tumor suppressor gene. Although several studies have demonstrated that HIC is involved in a number of important signalling pathways and cellular processes, the exact role of HIC and the mechanism by which it affects the different pathways are still obscure. To date, studies exploring the role of HIC have been performed on over-expressed protein; no report of endogenous HIC protein has been published. The mRNA encoding HIC contains a 590 nt 5′-untranslated region (UTR), a 741 nt coding sequence, and a 3276 nt 3′-UTR. Such UTRs are extremely long compared to the average length of UTRs of cellular mRNAs: the average length of the 5′-UTR in human mRNAs is 125-210 nt [10,11], and the average length of the 3′-UTR is 1027 nt [10]. Long UTRs are usually involved in post-transcriptional regulation of mRNAs.
Post-transcriptional regulation of gene expression provides a key mechanism by which cells can rapidly change gene expression patterns in response to a variety of extracellular signals and disparate biological processes. mRNA-binding proteins interact with unique sequences in mRNAs to coordinately regulate their localization, translation and/or degradation. A common feature of many rapidly degraded mRNAs is the presence of AU-rich elements (AREs) in their 39-UTRs. The sequence of this cis-element is variable, but frequently contains one or moreAUUUA pentameric motifs within or near a U-rich region [12]. Interactions between AREs and their specific binding proteins have diverse effects on target mRNAs. 59-UTRs like 39-UTRs, are deeply involved in posttranscriptional regulation of gene expression through specific mRNA motifs and RNA binding proteins. An increasing number of reports describe regulation of translation of specific mRNAs in response to specific stimuli. These mRNAs often contain a 59-UTR considerably longer than the average cellular 59-UTR [13,14], may contain AUG codons upstream of the initiation codon for the main open reading frame, and have complex secondary structures [15]. Here we show that the expression of the HIC protein is subject to strict repression, reducing its expression to undetectable levels. We demonstrate that the HIC mRNA UTRs reduce the expression of HIC or of a reporter gene in transfected cells. The HIC 59-UTR represses translation of HIC or of the reporter gene in a mechanism involving upstream open reading frames (uORFs), whereas the HIC 39-UTR decreases the mRNA level. Ectopically expressed HIC protein is degraded by the proteasome with a halflife of approximately 1 h, suggesting that HIC protein expression in cells is transient even under conditions that elevate its translation in cells. Expression of HIC mRNA and Protein HIC mRNA is expressed in the spleen, thymus, prostate, uterus, small intestine, peripheral blood leukocytes, but not in the testis and colon [5]. To compare expression of HIC mRNA in various human cell lines, we performed Northern blot analysis. A band of the expected size (,4600 nt) was detected in Saos-2, Karpas 299, HeLa and HF1 cells, but not in A431 or K562 cells ( Figure 1A). We could not detect endogenous HIC protein in any of these cell lines by Western blots using a rabbit polyclonal antibody that we generated against HIC (not shown and Fig. 1B, first lane). This antibody did however detect ectopic HIC over-expressed from a bicistronic vector encoding both GFP and the HIC ORF ( Figure 1B). Western blots for samples expressing ectopic HIC revealed a 32-kDa doublet band of the expected molecular weight, and a higher molecular weight doublet band. The various bands may represent covalently modified protein ( Figure 1B). HIC Untranslated Regions (UTRs) Inhibit the Expression of a Reporter Gene We sought to examine whether the HIC mRNA UTRs affect the expression of a luciferase reporter protein in cells. We prepared constructs encoding the Firefly luciferase (FFL) reporter gene fused to the 59-UTR of HIC (59-UTR-FFL), the full length 39-UTR (FFL-39-UTR), or just 237 nt of the 59 end of the 39-UTR (FFL-39-UTR-237). Saos-2 cells were transfected with each of the constructs. To control transfection efficiency all cells were cotransfected with a plasmid encoding Renilla luciferase (RL). 48 h after transfection the activities of FFL and RL were measured and FFL activity was normalized to RL activity. 
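For readers unfamiliar with dual-luciferase normalization, the arithmetic is simply FFL/RL per well, expressed relative to the FFL-only control. The raw counts below are invented, chosen only so that the resulting reductions echo the values reported in the next paragraph.

```python
# Invented raw luminescence counts; only the FFL/RL normalization and the
# percent-reduction arithmetic reflect the procedure described above.
raw = {
    "FFL":            {"ffl": 90_000, "rl": 30_000},   # reporter alone (control)
    "5'-UTR-FFL":     {"ffl": 24_000, "rl": 32_000},
    "FFL-3'-UTR":     {"ffl": 33_000, "rl": 31_000},
    "FFL-3'-UTR-237": {"ffl": 85_000, "rl": 29_000},
}

control = raw["FFL"]["ffl"] / raw["FFL"]["rl"]          # normalized control activity
for name, well in raw.items():
    rel = (well["ffl"] / well["rl"]) / control          # relative to FFL alone
    print(f"{name:16s} relative activity {rel:4.2f}  ({100 * (1 - rel):3.0f} % reduction)")
```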
Fusion of the HIC 3′-UTR downstream of FFL reduced FFL activity by 65% (Figure 2A). Fusion of the first 237 nucleotides of the 3′-UTR did not have a significant effect on FFL expression. Fusion of the HIC 5′-UTR upstream of FFL reduced FFL activity by 75%. Similar results were obtained in all other cell lines examined (Figure S1). To determine whether the decrease in FFL activity was due to reduced FFL mRNA or protein levels, we purified RNA from cells expressing the UTR-FFL constructs and analyzed it on Northern blots using probes for FFL, RL and β-actin mRNAs. FFL mRNA levels were normalized to RL, as a measure of transfection efficiency, and to β-actin, as a control for the amount of mRNA. mRNAs encoding FFL, RL and β-actin were detected in all samples (Figure 2B). Quantification of the intensities of the bands showed similar amounts of mRNA encoding FFL in cells transfected with constructs coding for the FFL gene, the FFL gene fused to the first 237 nucleotides of the HIC 3′-UTR, or the FFL gene fused to the HIC 5′-UTR. In contrast, mRNA encoding FFL was barely detected in cells transfected with the FFL gene fused to the full-length HIC mRNA 3′-UTR (Fig. 2B and C). Since all mRNAs were transcribed from the same CMV promoter, these results suggest that the low level of the 3′-UTR-containing mRNA is caused by decreased mRNA stability. Both the 3′- and 5′-UTRs of HIC reduced FFL activity in cells (Fig. 2A). The fact that the HIC 3′-UTR, but not the HIC 5′-UTR, reduced FFL mRNA levels implies that the UTRs inhibit HIC protein expression by two different mechanisms: the HIC 3′-UTR causes a decrease in the mRNA level, most likely due to decreased mRNA stability, while the 5′-UTR inhibits mRNA translation.

The 5′-UTR Represses HIC Translation in a Mechanism Involving Upstream Open Reading Frames (uORFs)

To determine whether the UTRs affect HIC expression in a manner similar to their effect on reporter FFL expression, we prepared a set of expression constructs encoding different parts of the HIC gene (Figure 3A). All constructs contained the HIC open reading frame (ORF). The HIC-FL construct contained the full-length 4607 nt HIC cDNA. The HIC-ORF construct contained only the HIC coding region. The HIC-1.7 kb construct included the coding sequence as well as parts of the 5′- and 3′-UTRs. The HIC-5′-UTR-ORF construct consisted of the 5′-UTR and the coding region, and the HIC-ORF-3′-UTR construct included the coding region and the 3′-UTR (Figure 3A). We first examined HIC translation in an in vitro translation system. Equal amounts of in vitro transcribed RNA from the various constructs illustrated in Figure 3A were translated using a rabbit reticulocyte lysate system. We next examined the effects of the UTRs on HIC protein levels in cells. HEK-293 cells were transfected with expression plasmids encoding the various HIC constructs (illustrated in Figure 3A). To increase the sensitivity of HIC protein detection, cells were metabolically labeled using L-[35S]-methionine and L-[35S]-cysteine for 11 h. HIC was immunoprecipitated from the cell extracts, subjected to SDS-PAGE and visualized by autoradiography. HIC was immunoprecipitated from cells over-expressing the HIC-ORF (Figure 3B) and, to a lesser degree, from cells transfected with the HIC-ORF-3′-UTR construct. No HIC protein was detected in cells expressing the full-length 5′-UTR (HIC-5′-UTR-ORF and HIC-FL) or in cells expressing the 3′ end of the 5′-UTR (HIC-1.7 kb). Furthermore, no endogenous HIC was immunoprecipitated.
This data is consistent with the in vitro translation experiment, confirming that the translation of HIC is repressed by the HIC mRNA 59-UTR in transfected cells. Although differing in their translation efficiency, mRNAs that included the HIC-59-UTR as well as mRNAs that did not include the 59-UTR were detected in the polysomal fraction in a polysomal profile analysis ( Figure 3C). Therefore, the HIC mRNA is exported to the cytoplasm and not sequestered in the nucleus. Since the HIC protein is not translated in the cells, ribosomes may also be attached to the non-coding parts of HIC mRNA. The sequence of the HIC mRNA 59-UTR includes three short upstream open reading frames (uORFs) of 13, 8 and 25 amino acid lengths ( Figure 4A). Short uORFs in the 59-UTR affect translation efficiency of many eukaryotic genes [16][17][18][19][20][21][22]. To determine whether the potential uORFs in HIC 59-UTR are involved in inhibition of its translation, we mutated the initiation codons (AUGs) of the uORFs, rendering them non-functional for initiation of translation in a set of constructs encoding the HIC 59-UTR fused to a FFL reporter gene. The 59UTR was cloned into an NcoI site in which the ATG start codon is included, thus all the nucleotides upstream to the ATG belong to the HIC sequence. In each construct, a single upstream AUG (uAUG) initiation codon or combinations of two or three uAUGs were mutated to AUC codons. Saos-2 cells were co-transfected with the various constructs and an RL expressing plasmid as a control for transfection efficiency. Dual luciferase activity was measured 24 h after transfection. Fusion of the HIC 59-UTR upstream to FFL decreased its activity more than 5-fold ( Figure 4B). A mutation in the initiation codon of uORF1 or uORF2 alone did not affect the activity of FFL. However, a mutation that abolished the initiation codon of uORF3 increased FFL activity 1.8-fold. Although neither uORF1 nor uORF2 elimination significantly affected the translation of the reporter gene when each was mutated separately, both mutations had an effect when mutated along with another uORF (uORF1 in combination with uORF3 or uORF2+3 and uORF2 in combination with uORF3 or uORF1+3). FFL activity was further increased (2.4-fold) by mutations in the initiation codons of both uORF2 and uORF3 suggesting an additive effect of the two mutations. Moreover, mutations in all three uORFs, increased FFL activity to a higher extant than the mutations in uORF1 and uORF3 only. Therefore, uORF2 may have a positive role in the inhibition of HIC translation. Surprisingly, mutations in the initiation codons for both uORF1 and uORF3 abolished the increase in FFL activity induced by mutation of uORF3 alone, suggesting that intact uORF1 leads to enhanced translation. Similarly, mutations in all three uORFs increased FFL activity to a lower extent than the mutations in uORF2 and uORF3 only. Therefore, uORF1 may have a positive role in the regulation of HIC translation. We conclude that translation of HIC is repressed by the 59-UTR through uORF3 and uORF2. In contrast, uORF1 may have a positive role in translation of HIC mRNA. Stability of the HIC Protein To study HIC protein stability, we over-expressed HIC in HEK-293 cells using a construct encoding the ORF of HIC (without the UTRs) fused to myc and his tags. Expression of the HIC-myc tag-his tag following transfection of plasmid DNA was relatively low ( Figure 5A) (compared to the expression using viral infection shown in figure 1). 
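A uORF analysis like the one above can be prototyped in a few lines of Python. The scanner below reports every upstream AUG that has an in-frame stop codon inside the 5′-UTR, together with its length in codons and the spacing to the main ORF; the toy sequence is invented and is not the HIC 5′-UTR.

```python
# Toy sequence, not the HIC 5'-UTR; only the scanning logic is meant to be reused.
STOPS = {"TAA", "TAG", "TGA"}

def find_uorfs(utr5: str):
    """Return (start, codons, gap_to_main_ORF) for every upstream ATG that has
    an in-frame stop codon within the 5'-UTR. The main ORF is assumed to start
    immediately after the supplied UTR sequence."""
    utr5 = utr5.upper().replace("U", "T")
    main_start = len(utr5)
    hits = []
    for i in range(len(utr5) - 2):
        if utr5[i:i + 3] != "ATG":
            continue
        for j in range(i + 3, len(utr5) - 2, 3):    # walk codon by codon
            if utr5[j:j + 3] in STOPS:
                hits.append((i, (j - i) // 3, main_start - (j + 3)))
                break
    return hits

toy_utr = ("GCCGCC" + "ATG" + "GCT" * 12 + "TAA" + "C" * 60
           + "ATG" + "GAA" * 7 + "TGA" + "C" * 30)
for start, codons, gap in find_uorfs(toy_utr):
    print(f"uORF at nt {start}: {codons} codons, stop {gap} nt upstream of the main AUG")
```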
Transfected cells were treated with the translation inhibitor cycloheximide, harvested at various time points, and the level of the myc-tagged HIC protein was determined by Western blot. The half-life of exogenous HIC in HEK-293 cells was approximately 1 h ( Figure 5A). Although the level of HIC was low when expressed from a plasmid vector, 18 h treatment with MG132, a proteasome inhibitor, significantly increased the amount of HIC detected in the cell extracts ( Figure 5B). The elevation of HIC levels in cells treated with MG132 suggests that HIC is degraded by the proteasome. Therefore, expression of the HIC protein may be transient and limited by its short half life, as well as by the regulation of its translation and RNA stability. Discussion Gene expression is a dynamic and tightly regulated process. A large number of transcription factors and other proteins regulating DNA methylation and chromatin structure are involved in transcriptional regulation. The transcribed mRNAs are regulated by multiple posttranscriptional mechanisms, including mRNA processing, editing, transport and stability [23][24][25]. Moreover, many mRNAs show lack of correlation between mRNA and protein levels [26][27][28], indicating that these genes are regulated at the level of translation. Changes in transcription rate as well as variations in mRNA decay rate, translation efficiency and protein stability play pivotal roles in regulation of protein expression. HIC mRNA is expressed in several cell lines and tissues, but to date, no expression of endogenous HIC protein has been detected. Previous studies exploring the function of HIC in mammalian cells used ectopically expressed HIC. We have shown here that the expression of HIC is subject to complex regulation. The inability to detect HIC protein in these cells may be attributed to a large degree to repression of its expression. This includes repression at the levels of mRNA stability and of translation, mediated by HIC mRNA 39-and 59-UTRs. In addition, any HIC protein that is translated is rapidly degraded by the proteosome. We have demonstrated that the HIC 59-UTR represses HIC translation, whereas the HIC 39-UTR decreases its mRNA level (Figure 2, 3). uORFs have been found to repress the translation of a number of mRNAs [16][17][18][19][20][21][22]. The presence of a few uORFs, or of an uORF which flanks or is proximal to the main ORF, greatly reduces the probability of reinitiation at the main ORF [29,30]. We found that the third uORF plays the most significant role in the inhibition of translation by the HIC mRNA 59-UTR (Figure 4), and that only when the initiation codon of this uORF was mutated, the effects of mutations in uORF1 and uORF2 could be detected. The second uORF was found to contribute to the inhibition of translation. In contrast, a mutation that abolished the first uORF decreased the effect of mutations in the following uORFs. The finding that the first uORF has an opposite effect on translation efficiency implies that it may facilitate the reinitiation of translation in the downstream ORFs. A similar observation was reported for the translation of ATF4 (activating transcription factor 4), which is regulated by two uORFs with opposing roles [21,31]. Lu et al. [21] and Vattem and Wek [31] suggested that uORF1 facilitates ribosome scanning and increases the reinitiation efficiency at a downstream ORF, uORF2, in unstressed cells or at the more distant main ORF in stressed cells. 
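The ~1 h half-life quoted above comes from a cycloheximide chase. Assuming first-order decay, the half-life follows from the slope of the log-transformed band intensities, as in this sketch; the densitometry values are invented and serve only to illustrate the fit.

```python
import numpy as np

# Invented cycloheximide-chase densitometry values (fraction of the t = 0 signal).
time_h = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
signal = np.array([1.00, 0.72, 0.49, 0.26, 0.06])

# First-order decay: ln S(t) = ln S0 - k*t, so fit a straight line to ln(signal).
slope, intercept = np.polyfit(time_h, np.log(signal), 1)
k = -slope
print(f"k ~ {k:.2f} per hour, half-life ~ {np.log(2) / k:.2f} h")
```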
The diverse impact of the different uORFs can be attributed to several factors, such as the distance between the main ORF and the uORF and the AUG context of the different ORFs. The efficiency of reinitiation improves as the spacing between two ORFs is lengthened [29,30]. The distance between the termination codon of the third HIC uORF and the initiation codon of the HIC ORF is less than 40 nt. It is therefore likely that ribosomes initiating translation at the third uORF are precluded from translating HIC. The distances between the other ORFs are longer: 214 nt between uORF1 and uORF2 and 60 nt between uORF2 and uORF3 ( Figure 4A). This is consistent with our finding that their inhibitory influence on translation of the main downstream ORF is minor. A mutation in uORF1 or uORF2 alone did not affect the activity of FFL. This could be explained by the strong inhibitory effect of uORF3. Mutations in uORF1 or uORF2 may probably affect the translation efficiency of the following uORF, but not the translation of the main ORF. Translation of the main ORF would still be limited by uORF3, which is very proximate to the main ORF. Although mutations in the initiation codons of HIC 59-UTR enhanced translation of FFL, no combination of the mutations in the UORF was able to completely restore the activity of FFL. Thus, while the uORFs certainly have translational regulatory function, other factors found in the 59-UTR of HIC may also repress HIC translation. Another uAUG followed by a stop codon at position 127 of the 59-UTR may also contribute to the inhibition of HIC translation. In mammals, the optimal consensus sequence for initiation is GCCA/GCCAUGG, with a purine at position 23 and a G at position +4 (underlined) being the most important nucleotides [32]. It is likely that the suboptimal context for the HIC translation initiation codon (CGGCCCAUGU) as well as the very long 59 UTR containing other features (e.g. a complex RNA secondary structure) also contribute to the low translation efficiency induced by the 59-UTR of HIC mRNA. Fusion of the HIC 39-UTR to a FFL reporter gene decreased FFL mRNA levels in the cells ( Figure 2B). Since both the control FFL and the FFL fused to the 39-UTR were expressed from a CMV promoter, we attribute the change in mRNA levels to altered mRNA stability. When over-expressed in cells, much more HIC protein was immunoprecipitated from cells transfected with a construct expressing HIC-ORF compared to the construct expressing HIC-ORF-39-UTR ( Figure 3C). Consistent with our results, Young et al. expressed HIC from a construct that contained HIC-ORF only or from a construct that contained the HIC-ORF and the 39-UTR (both lacking the HIC 59-UTR), and detected a much lower HIC protein level when the HIC construct included the 39-UTR [33]. The half-life of an mRNA depends on sequences within the transcript itself, usually located in the 39 UTR, and on RNA binding factors that interact with those sequences [34,35]. Adenylate-uridylate rich elements (AREs) located in the 39-UTR regulate the stability of various mRNAs encoding a wide repertoire of functionally diverse proteins [36][37][38]. AREs are found within Urich regions and frequently contain one or more AUUUA motifs [36,39]. The HIC 39-UTR contains 18 repeats of the AUUUA consensus sequence, 16 of which are dispersed and two of which overlap. 35% of the HIC 39-UTR is comprised of Uridylate nucleotides, some of which are clustered in long stretches (none of the clusters is included in the HIC-1.7 kb construct). 
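Counting the 3′-UTR features discussed here is straightforward to script. The sketch below counts (possibly overlapping) AUUUA pentamers and the overall U content of a sequence; the input is a made-up toy string, not the HIC 3′-UTR.

```python
# Toy ARE scan of a 3'-UTR; only the counting logic reflects the features above.
def are_features(utr3: str):
    utr3 = utr3.upper().replace("T", "U")
    count, i = 0, utr3.find("AUUUA")
    while i != -1:                       # count overlapping pentamers too
        count += 1
        i = utr3.find("AUUUA", i + 1)
    u_fraction = utr3.count("U") / len(utr3)
    return count, u_fraction

toy_utr3 = "CCAUUUAUUUAGGCUUUUUUAACGAUUUACCUAUUUAUGC"
n_auuua, u_frac = are_features(toy_utr3)
print(f"AUUUA motifs (overlapping counted): {n_auuua}, U content: {u_frac:.0%}")
```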
Although the presence of AUUUA motifs in an AU-rich region does not guarantee the function of this motif in mRNA stability [40], it would be interesting to determine whether the numerous AUUUA repeats confer instability to HIC mRNA. However, HIC mRNA instability does not seem to be the major regulator of HIC repression. Endogenous HIC mRNA is expressed in various cell lines and tissues, but HIC protein is not detected in any of them. Therefore, expression of HIC protein expression is probably controlled primarily at the levels of translation and protein stability. The level of ectopically expressed HIC protein was greatly increased by treatment with the proteasome inhibitor MG132, indicating that HIC is degraded by the proteasome ( Figure 5A). The half-life of HIC is approximately 1 h ( Figure 5B). Under conditions that stimulate translation, induction of endogenous HIC expression may still be transient due to the relatively short half-life of HIC. Thus, HIC expression is also regulated posttranslationally. Under normal cellular conditions HIC is hardly translated and degradation by the proteasome may only act as a ''backup'' mechanism to prevent its expression. Nevertheless, under other yet unknown conditions in which HIC is translated, the regulation of HIC degradation may be more crucial and significant to the protein level. HIC protein has been shown to interact with the cyclinT1 subunit of P-TEFb and with HIV-I Tat [4,8], and inhibit Tat and p-TEFb dependent transcription from the HIV promoter [4]. In contrast, expression of HIC mRNA has been shown to activate transcription from the HIV-I promoter in a p-TEFb dependent manner [33]. Young et. al. sought to resolve this discrepancy and found that the 39 end of HIC mRNA binds p-TEFb and activates it by displacing 7SK, an inhibitory small nuclear RNA (snRNA) [33]. Thus, HIC plays opposing roles in the regulation of P-TEFb: ectopically expressed HIC protein represses transcription from the HIV-I promoter while HIC mRNA elevates HIV-I promoter dependent translation. Activation of p-TEFb by HIC mRNA is presumably also required for general cellular transcription in the cell. We have evidence for a role for HIC in the stress response (submitted for publication), suggesting that HIC protein may be translated when the cell is subjected to certain forms of stress and/or as a protective mechanism against viral infection. Although the last 314 nt are sufficient for HIC mRNA to activate P-TEFb, an mRNA of 4607 nt is transcribed [33]. Inhibition of HIC translation during normal cell growth, by the long 59UTR may also be required in order to keep HIC mRNA available and functional as an RNA molecule. At times of stress, reduced activation of HIC mRNA may be coordinated with induction of translation of the HIC protein. We have shown here that the expression of ectopically expressed HIC protein is strongly repressed at the levels of mRNA stability, translation and protein stability. We suggest that endogenous HIC is down-regulated through similar mechanisms under normal cellular conditions. The tight suppression of HIC protein at multiple levels of gene expression may be withdrawn under certain, yet unidentified, cellular conditions. Further understanding of the complex regulation of HIC expression will contribute to the understanding of the cellular roles of HIC and the cellular pathways in which it is involved. Materials and Methods Plasmids were prepared as detailed in Table 1. 
Adenoviruses Expressing HIC

Adenoviruses expressing HIC were prepared as described in [42]. In brief, the HIC ORF was cloned into the pAdTrack-CMV vector. pAdTrack-CMV encoding HIC was digested with PmeI and inserted into pAdEasy using recombination in E. coli BJ5183.

Cell Culture

All tissue culture reagents were purchased from Biological Industries Beit-Haemek Ltd (Israel). MG132 was purchased from Calbiochem (USA). Cycloheximide was purchased from Sigma.

Cell Transfection and Infection

Cells were transfected using polyethyleneimine (PEI) as described previously [43]. Adenoviral infection was performed as described in [42].

Northern Blot Analysis

1–2 × 10^6 cells were seeded in 10-cm plates. 24–48 h after plating, cells were transfected with the indicated plasmids. RNA was extracted from the cells using Tri-Reagent (Sigma) or the RNeasy kit (Qiagen). 20 μg of RNA was resolved on a 1% agarose gel containing 8% formaldehyde and capillary-blotted onto a nylon membrane. The blot was then hybridized overnight at 42 °C with a 32P-labeled DNA probe, prepared with the Rediprime kit (GE Healthcare, USA), and exposed to film. For densitometry, subsaturation exposures were analyzed using the NIH Image 1.61 software.

Immunoblotting

Equal amounts of protein in Laemmli sample buffer [44] were resolved by SDS-PAGE and electroblotted onto nitrocellulose membranes. Antibodies: anti-myc tag (9E10) was purchased from Santa Cruz Biotechnology Inc. (USA); anti-tubulin was from Sigma. The anti-HIC serum was generated by immunization of rabbits with the peptide SGAGEALAPGPVG, comprising the first 13 amino acids of HIC.

Immunoprecipitation of Radiolabeled HIC

10^6 HEK-293 cells were seeded in 10-cm plates. The following day, cells were transfected with constructs encoding various parts of the HIC gene. 24 h after transfection, cells were labelled with 100 μCi/ml of Pro-mix (L-[35S]-methionine and L-[35S]-cysteine) (Amersham Pharmacia) for 11 h in DMEM lacking methionine and cysteine (Gibco) supplemented with 10% dialyzed FCS. Cells were washed 3 times with PBS and lysed in 0.1% SDS, 0.5% deoxycholate, 1% NP-40 in PBS supplemented with protease inhibitors (Sigma). Protein content was quantified using the micro Bradford assay [45]. HIC was immunoprecipitated from 1 mg of protein lysate using the anti-HIC epitope antibody (described above) coupled to protein G Sepharose (GE Healthcare, USA). 25 μl of protein G beads were coupled to 20 μl of anti-HIC serum by incubating the mixture for 2 h at 4 °C in PBS containing 5% low-fat milk. Excess antibody was removed by washing the beads extensively with PBS. Lysates were incubated with the coupled beads for 16 h at 4 °C. Immunocomplexes were washed 4 times with lysis buffer. Then, 2× Laemmli sample buffer was added to the beads and the samples were boiled for 10 min. Immunocomplexes were resolved by SDS-PAGE and electroblotted. Nitrocellulose membranes were exposed to autoradiography.

In Vitro Translation

RNAs were transcribed in vitro with T7 polymerase from linearized pcDNA3-HIC constructs using the Riboprobe in vitro Transcription System (Promega, USA). 0.5 μg of each purified transcript was translated using a rabbit reticulocyte lysate system (Promega, USA), according to the manufacturer's instructions, in the presence of 20 μCi of Pro-mix (L-[35S]-methionine and L-[35S]-cysteine) (Amersham Pharmacia Biotech) per reaction. Samples were resolved by SDS-PAGE, electroblotted onto nitrocellulose membranes, and the membranes were exposed to autoradiography.
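The ~1 h half-life quoted in the Discussion is the kind of number typically extracted from a cycloheximide-chase immunoblot by fitting band intensities to first-order decay. A minimal sketch of that fit is shown below; the densitometry values are made up for illustration and are not measurements from this study.

```python
# Estimate a protein half-life from cycloheximide-chase densitometry by fitting
# ln(intensity) against time (first-order decay). The intensities below are
# invented example values, not data from this study.
import math

time_h    = [0.0, 0.5, 1.0, 1.5, 2.0]
intensity = [1.00, 0.72, 0.49, 0.34, 0.25]   # band intensity relative to t = 0

x = time_h
y = [math.log(v) for v in intensity]
n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))

k = -slope                        # first-order decay constant (1/h)
half_life = math.log(2) / k
print(f"decay constant k = {k:.2f} /h, half-life = {half_life:.2f} h")
```

For these example values the fit returns a half-life of about 1 h, i.e., the order of magnitude reported for HIC.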
Reporter Assays

Cells were seeded in 12-well plates (6 × 10^4 cells/well). 48 h after seeding, cells were transfected with the indicated plasmids. A plasmid encoding Renilla luciferase (pRL-PGK) was added to each transfection mixture as a control for transfection efficiency. 24–48 h after transfection, cells were lysed with passive lysis buffer (Promega, USA). Firefly and Renilla luciferase activities were measured using a dual luciferase assay kit (Promega, USA).
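The dual-luciferase readout described above is typically reduced to a firefly/Renilla ratio and then expressed relative to the control reporter. A minimal sketch of that arithmetic, with made-up luminescence values (the construct names and numbers are illustrative, not taken from the paper):

```python
# Normalize firefly luciferase (FFL) activity to the Renilla transfection control,
# then express each construct relative to the control FFL reporter.
# All numbers below are invented for illustration.

readings = {
    # construct: (firefly_RLU, renilla_RLU) from one transfected well
    "FFL_control": (150_000, 50_000),
    "FFL_5UTR":    (6_000,   48_000),
    "FFL_3UTR":    (45_000,  52_000),
}

def normalized_activity(firefly, renilla):
    """Firefly signal corrected for transfection efficiency."""
    return firefly / renilla

control = normalized_activity(*readings["FFL_control"])
for name, (ffl, ren) in readings.items():
    rel = normalized_activity(ffl, ren) / control
    print(f"{name}: {rel:.2f} of control FFL activity")
```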
Catechol cross-linked antimicrobial peptide hydrogels prevent multidrug-resistant Acinetobacter baumannii infection in burn wounds

Hospital-acquired infections are common in burn patients and are major contributors to morbidity and mortality. Bacterial infections such as Staphylococcus aureus (S. aureus) and Acinetobacter baumannii (A. baumannii) are difficult to treat due to their biofilm formation and rapid acquisition of antibiotic resistance. This work presents a newly developed hydrogel that has the potential for treating bacterial wound infections. The hydrogel formulation is based on an antimicrobial peptide (AMP), epsilon-poly-L-lysine (EPL), and catechol, which were cross-linked via mussel-inspired chemistry between the amine and phenol groups. In vitro studies showed that EPL-catechol hydrogels possess impressive antimicrobial and antibiofilm properties toward multidrug-resistant A. baumannii (MRAB). In addition, a cytotoxicity study with the clonal mouse myoblast cell line (C2C12) revealed the good biocompatibility of this hydrogel. Furthermore, we created a second-degree burn wound on the dorsal skin surface of mice, followed by contamination with MRAB. Our results showed that the hydrogel significantly reduced the bacterial burden, by more than four orders of magnitude, in infected burn wounds. Additionally, there was no significant histological alteration with hydrogel application on mice skin. Based on these results, we concluded that the EPL-catechol hydrogel is a promising future biomaterial to fight multidrug-resistant bacterial infections.

Introduction

Burns are a common and serious healthcare problem that requires care in specialized units [1]. The nature and extent of burn injuries lead to immunosuppression in patients, resulting in increased susceptibility to various nosocomial pathogens [2]. Burn wounds behave differently from traumatic wounds. Burn wounds, especially deep partial-thickness and full-thickness burns, consist of avascular necrotic tissue (eschar) and provide a protein-rich environment for bacterial colonization and proliferation [1]. Burn wounds are easily colonized by Gram-positive and Gram-negative bacteria (within an average of 5-7 days) present in the hospital environment [2]. In the past few decades, Gram-negative bacteria have emerged as the most common and invasive organisms by virtue of their robust antimicrobial resistance and virulence factors [3]. Additionally, biofilm formation acts as an effective barrier against antimicrobial agents and host immune cells, resulting in persistent wound infections [4].

Briefly, EPL and catechol were dissolved in Tris/HCl solution (pH 8.5, 10 mM). The solution was incubated at 37 °C for 3 days for the partial oxidation of the catecholamine. Thereafter, 500 μl of the solution was added to a 24-well plate and stored at 4 °C for 2 days to form the hydrogel. A PEGDA hydrogel of the same concentration (wt%), prepared by UV cross-linking (Irgacure 2959, 0.1 wt%), was used as a control. All the hydrogels were washed with double-distilled water for 4 h to remove residues that did not take part in the reaction.

SEM of hydrogels

The morphology of the hydrogels was characterized by SEM. The hydrogels were completely immersed in deionized water overnight until they reached the maximum swelling equilibrium state. They were then lyophilized in a freeze drier (Alpha 1-2LD Plus, Christ, Germany) at −50 °C for 48 h.
The cross-sections of dried samples were sputter-coated with gold and observed under SEM (Quanta 200, Philips-FEI, U.S.A.).

Bacterial strain and culture conditions

Prior to the antimicrobial tests, the bacterial strains (MRAB, MRSA) were cultured at 37 °C in TSB at 200 rpm. Overnight cultured bacteria were transferred into fresh medium at a ratio of 1:100 and cultured for another 4 h to reach log phase (OD600 ≈ 0.5). Cells were collected by centrifugation, washed twice with sterile PBS and then resuspended in sterile PBS at the appropriate concentration of colony-forming units (CFU) of bacteria (1 × 10^8 CFU/ml). For in vivo studies, bacterial suspension (1 ml) was collected in sterile Eppendorf tubes, centrifuged at 4000×g for 4 min and washed with PBS to remove the nutrient culture medium. This process was repeated three times. Finally, the test bacteria were resuspended in PBS at a concentration of 1 × 10^8 CFU/ml.

Zone of inhibition study

The antimicrobial activity of the EPL-catechol hydrogels was evaluated by the Kirby-Bauer (KB) test. MRAB and MRSA bacterial suspensions (1 × 10^8 CFU/ml) were prepared as mentioned above. Then, 50 μl of bacterial suspension was spread onto MH agar in a Petri dish. Hydrogels (diameter 5 mm) with various concentrations of EPL and catechol were prepared and transferred to the dish. Sterile filter paper immersed in PBS was used as the control group. The plates were incubated at 37 °C, and the zone of bacterial inhibition was measured after 24 h of incubation.

Evaluation of in vitro contact antimicrobial action of EPL-catechol hydrogel

The in vitro contact antimicrobial activity of the EPL-catechol hydrogels was evaluated by the colony count method [32]. The hydrogels were prepared in 24-well plates and rinsed with excess PBS prior to the test. MRAB bacterial suspension was prepared as mentioned above. A total of 300 μl of bacterial suspension (1 × 10^8 CFU/ml) was added to a 24-well plate containing the control group (PEGDA hydrogel) and the EPL-catechol hydrogel. After a contact time of 24 h, the samples were sonicated for 5 min to detach the adhered bacteria. Then, 50-μl aliquots were collected and plated on MH agar with ten-fold serial dilutions. The MH agar plates were incubated at 37 °C for 24 h, and the number of colonies was recorded.

SEM of MRAB seeded on hydrogel

The MRAB suspension was prepared as described above. For the in vitro studies, 10-μl aliquots were seeded on the EPL-catechol hydrogel and the control gel (PEGDA hydrogel) and incubated for 2 h. The EPL-catechol hydrogel and the control group (PEGDA hydrogel) were fixed in 2.5% glutaraldehyde at 4 °C. The samples were then fixed in 1% osmium tetroxide for 2 h, washed with PBS and dehydrated with graded ethanol (30, 50, 70, 90 and 100%). The hydrogel samples with MRAB were sputter-coated with Pt and observed under SEM (Hitachi H-7650, Japan).

Antibiofilm assay

Mid-log phase MRAB was harvested and resuspended in TSB with 1% glucose at a concentration of 1 × 10^8 CFU/ml. The EPL-catechol hydrogel and tissue culture polystyrene (TCPS), used as the control group, were submerged in 2 ml of bacterial suspension for 3 days to ensure biofilm formation. The culture medium was changed every day. Specimens were washed gently with PBS to remove planktonic bacteria. The hydrogels were dried and stained with the LIVE/DEAD BacLight kit in a dark room following the manufacturer's instructions.
The biofilms formed on the glass slides and hydrogels were observed under an inverted fluorescence microscope (IX53, Olympus, Japan). Excitation wavelengths of 488 and 561 nm were used for detection in the green (FITC) and red (Rhod) channels, respectively. The images were analyzed using Zen 2009 software.

In vitro biocompatibility assay

C2C12 cells were cultured in DMEM containing 10% heat-inactivated FBS and 1% penicillin-streptomycin (100 IU/ml penicillin and 100 μg/ml streptomycin) at 37 °C in a humidified incubator containing 5% CO2. Cells were digested with 0.25% trypsin and passaged when they reached 80-90% confluence. The cells used in the present study were in the logarithmic growth phase and were between passages 3 and 6. The cytotoxicity of the EPL-catechol hydrogel was investigated using the standard alamarBlue assay [33]. Prior to cell seeding, the hydrogel samples were cut into 5-mm diameter disks and placed in 70% ethanol for sterilization. C2C12 cells were seeded on a TCPS plate at a density of 6000 cells/cm^2 for 4 h. The samples were then gently placed into the wells, allowing the hydrogels to contact the cells. C2C12 cells were incubated for 1 and 4 days, and the culture medium was changed on alternate days. After 1 and 4 days, the sample disks were removed from the wells. Then, 100 μl of alamarBlue reagent was added to each well and incubated for 4 h at 37 °C. A total of 100 μl of the medium from each well was transferred to a 96-well black plate (Costar). Fluorescence was read using 530 nm as the excitation wavelength and 600 nm as the emission wavelength in a microplate reader (SpectraMax Paradigm, Molecular Devices, U.S.A.). Cells in TCPS plate wells without test materials were used as the control group. Each assay was performed in triplicate, and the mean was calculated. The viability of C2C12 cells was examined with the LIVE/DEAD assay kit. Cells cultured for 1 and 4 days were stained with the LIVE/DEAD reagent and incubated at 37 °C for 45 min. Cell adhesion and proliferation were observed with an inverted fluorescence microscope (IX53, Olympus).

Animals

Pathogen-free BALB/c mice (6-8 weeks of age, weighing 22-28 g) were purchased from the Nanjing Biomedical Research Institute of Nanjing University (China). Mice had free access to food and water and were maintained on a 12-h dark/light cycle in a room with a controlled air temperature of 24 ± 2 °C and humidity of 55 ± 10%. Mice were allowed to acclimatize to the new environment for a week prior to the experimental studies. All animal experiments were approved and conducted under the guidance of the International Association for the Protection of Animal and Experimental Medicine and the Laboratory Animal Ethics Committee of Zhejiang University School of Medicine, Hangzhou, Zhejiang, China.

Creation of burn wounds and MRAB infection

The dorsal surface of the mice was shaved and depilated with Veet (Reckitt Benckiser) 1 day prior to the creation of burns. Mice were divided into four groups, with six mice in each group (n = 6). The following day, mice were anesthetized, and a partial-thickness burn (80 °C for 6 s) was created with a cylinder (diameter 1.5 cm) using a burn creation machine (ZH-YLS-5Q, Shanghai, China). Soon after burn creation, mice were resuscitated with 1 ml of sterile saline.
Twenty minutes later (allowing the burn wound to cool down), elastic bandage tape was applied to and stripped from a 1.5 × 1.5 cm^2 dorsal skin area eight times in one direction and eight times in the opposite direction, until the skin turned red and glistened with no apparent bleeding (Supplementary Figure S1A). This method was adapted from the protocol proposed by Tatiya-Aphiradee et al. [34]. Removing the upper epidermal layer provides a better environment for the attachment and incubation of bacteria (Supplementary Figure S1B). A 20-μl MRAB bacterial suspension in PBS was smeared onto the burn wound with an inoculating loop. The wound was then covered with EPL-catechol hydrogel, followed by application of a polyurethane film (Tegaderm, 3M). The control group was treated with PEGDA hydrogel.

Microbiological analysis

On days 1 and 2 after MRAB infection of the burn wounds, mice were anesthetized and killed. Skin samples from control and hydrogel-treated mice were excised using a 5-mm punch biopsy (Tru-Punch, Sklar, U.S.A.). Tissue samples were then homogenized in 600 μl of PBS, ten-fold serially diluted, and plated on MH agar; after incubation at 37 °C for 24 h, bacterial colonies were enumerated. The results were normalized and expressed as the log10 CFU bacterial load present in 1 g of tissue sample.

SEM of MRAB-infected mice skin

For the in vivo studies, mice burn wounds were infected with MRAB and treated with EPL-catechol or PEGDA hydrogel (control) for 2 days. Skin samples were collected and fixed in 2.5% glutaraldehyde and 4% paraformaldehyde, followed by a wash with PBS for 30 min at 4 °C. The samples were fixed in 1% osmium tetroxide for 2 h and then washed with PBS and dehydrated with graded ethanol (30, 50, 70, 90 and 100%). The samples were immersed in propylene oxide and Epon 812 for 1 h, followed by polymerization embedding. Samples were then cut into sections.

Histological analysis

Mice were divided into four groups, with six mice in each group (n = 6). Following the creation of burns on the dorsal skin surface of the mice, the hydrogels were carefully placed on the affected skin area. Mice were anesthetized and sacrificed on days 1 and 2, and skin samples were fixed in 10% neutral buffered formalin before being subjected to H&E staining. To evaluate the impact of the EPL-catechol hydrogel on mouse skin, mice were divided into four groups, with six mice in each group (n = 6), and hydrogels were implanted subcutaneously (Supplementary Figure S2A). The dorsal skin samples were collected on days 2 and 5 of implantation. All samples were processed, embedded in paraffin blocks and sectioned (4-μm thick), followed by staining with H&E. To analyze inflammatory cell infiltration, H&E-stained sections were observed under a light microscope (Nikon Eclipse 80i, Japan) and photographed at ten randomly selected locations.

Statistical analysis

Data are presented as means ± standard deviation. Statistical analysis was performed using GraphPad Prism 5.0 (GraphPad Software, U.S.A.). P-values were determined using a two-tailed Student's t test. P<0.05 was considered significant.

Characterization of EPL-catechol hydrogel

EPL and catechol were mixed at appropriate concentrations, and the reactions between them gradually took place. After 3 days, the transparent mixture turned brownish, and the EPL-catechol hydrogels formed without precipitation (Scheme 1 and Supplementary Figure S3A-D). The hydrogels were freeze-dried, and their microstructures were observed under SEM (Figure 1).
As shown in Figure 1A-C, highly interconnected porous structures of the EPL-catechol hydrogels were observed, and the pore size decreased with increasing EPL concentration. The EPL-catechol hydrogels also exhibited high water content, in the range of 87-94%. As shown in Figure 1D, the water content of the EPL-catechol hydrogels also increased as the concentration of EPL increased. There was no significant difference in water content among the three types of EPL-catechol hydrogels (Gel 1, Gel 2 and Gel 3).

In vitro antimicrobial activity of EPL-catechol hydrogel

The antimicrobial activity of the EPL-catechol hydrogels was determined by the KB test (zone of inhibition) and a surface contact assay. All three types of hydrogels showed promising antimicrobial activity against both MRAB and MRSA. The zones of inhibition of the various EPL-catechol hydrogels against MRAB and MRSA are shown in Figure 2A,B. Generally, the inhibitory zone for MRAB is slightly larger than that for MRSA. The halos increase in size as the concentration of EPL in the hydrogels increases. Gel 3 has the most potent antimicrobial activity and was therefore used in further experiments. The antimicrobial activity of the hydrogel was further evaluated by adding an MRAB suspension onto the surface of the EPL-catechol hydrogel (Gel 3) at a concentration of 1 × 10^8 CFU/cm^2. PEGDA hydrogel was used as a control. Interestingly, following 24 h of incubation, the EPL-catechol hydrogel showed almost 100% killing of MRAB compared with the control hydrogel (Figure 3). A large number of MRAB colonies were observed after contact with the control PEGDA hydrogel (Figure 3A). However, no viable MRAB colony was detected after contact with the EPL-catechol hydrogel (Figure 3B). To further verify this phenomenon, the hydrogel samples carrying bacteria were observed under SEM. MRAB was seeded on the surface of the hydrogels for 2 h and then prepared for observation by SEM. Marked structural alterations were observed in the cell morphology of MRAB in contact with the EPL-catechol hydrogel compared with the control PEGDA hydrogel. The bacterial cells of the control group (Figure 3C) exhibited rounded, rod-like and smooth appearances, whereas the bacteria in contact with the EPL-catechol hydrogel showed wrinkled, disrupted and withered surfaces (Figure 3D). The antibiofilm activity of the EPL-catechol hydrogel was also studied using Gel 3. A LIVE/DEAD bacterial cell viability assay was carried out to visualize MRAB biofilm formation on the surface of the EPL-catechol hydrogel and the control TCPS. The bacterial viability assay was carried out to confirm the bactericidal effect of our EPL-catechol hydrogel. Live cells were stained fluorescent green, while dead cells with damaged membranes were stained red [35]. In our studies, MRAB that appeared red were dead, while a green appearance indicated live bacteria. The strong green fluorescence signals shown in Figure 4A,B indicate that a biofilm containing many bacteria had grown on the TCPS (control) after 3 days of incubation, and that most of the MRAB were alive. In contrast, only a few sporadic live/dead bacterial cells were observed on top of the EPL-catechol hydrogel (Figure 4C,D). A significant reduction in MRAB survivability was obtained on the EPL-catechol hydrogel compared with the TCPS (control). These results clearly indicate that bacterial cells adhere to TCPS, proliferate well during incubation and form a biofilm, whereas the EPL-catechol hydrogel successfully inhibited biofilm formation.
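The 87-94% water contents reported at the start of this section come from the usual gravimetric comparison of swollen and dried gels; the exact measurement protocol is not reproduced in this excerpt, so the sketch below simply shows the standard formula with invented masses rather than the authors' data.

```python
# Gravimetric water content of a swollen hydrogel.
# The formula is the standard (swollen - dry) / swollen ratio; the masses below
# are invented example values, not measurements from this study.

def water_content(mass_swollen_g, mass_dry_g):
    """Water content (%) = (swollen mass - dry mass) / swollen mass * 100."""
    return (mass_swollen_g - mass_dry_g) / mass_swollen_g * 100.0

print(f"{water_content(1.00, 0.08):.1f}% water")  # ~92% for these example masses
```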
In vitro biocompatibility of EPL-catechol hydrogel

The cytotoxicity of the EPL-catechol hydrogel (Gel 3) was assessed with the C2C12 cell line using the alamarBlue assay. The hydrogel was placed directly on the cells and cultured for 1 and 4 days. As shown in Figure 5, EPL-catechol led to slightly reduced cell viability on days 1 and 4 compared with the control TCPS group. However, there was no statistically significant difference between the control group and the EPL-catechol hydrogel. The signal from the cells in contact with the EPL-catechol hydrogel increased with culture time, indicating cell growth over the culture period. The cells proliferated well in the presence of the hydrogel, indicating the good biocompatibility of the EPL-catechol hydrogel.

In vivo antimicrobial activity of EPL-catechol hydrogel

The in vivo antimicrobial activity of the EPL-catechol hydrogel against MRAB was determined by measuring the MRAB burden in the burn wound (Figure 6). For the burn wound (Figure 6A) inoculated with MRAB and then covered with EPL-catechol hydrogel (Figure 6B), the number of MRAB in the burn wound reached 1.2 × 10^9 CFU/g after 24 h in the control group (PEGDA hydrogel), whereas the MRAB count under the EPL-catechol hydrogel dressing was 2.1 × 10^4 CFU/g, a log reduction of 4.76 relative to the control group. After 48 h, although the number of MRAB colonies in the control group decreased to 1.1 × 10^7 CFU/g, the number of colonies in burn wounds treated with EPL-catechol hydrogel was further reduced to an average of 22 CFU/g; the log reduction between them is 5.70. The application of the EPL-catechol hydrogel significantly reduced the bacterial burden on days 1 and 2 of treatment compared with the control group (Figure 6C, P<0.001). Upon observing the wound appearance, the control group showed yellow slough due to severe infection (Figure 6D), while the EPL-catechol hydrogel-treated burn wound had an improved appearance (Figure 6F). SEM observations were carried out to examine bacterial growth in the burn wounds. Fewer bacteria were observed on the burn wound treated with EPL-catechol hydrogel (Figure 6G) compared with the control group (PEGDA hydrogel), where MRAB proliferated in high numbers, leading to biofilm formation (Figure 6E). The EPL-catechol hydrogel prevented biofilm formation.

In vivo biocompatibility of EPL-catechol hydrogel

The in vivo tissue toxicity was evaluated by topically applying the EPL-catechol hydrogel (Gel 3) for 1 and 2 days and by subcutaneous implantation for 2 and 5 days (Figure 7). We assessed histological alterations via H&E staining of the surrounding tissue sections. Notably, the EPL-catechol hydrogel showed no inflammatory cell infiltration on days 1 and 2 of topical application compared with the untreated control group (Figure 7A). Likewise, the subcutaneous EPL-catechol hydrogel implants showed no signs of inflammation and displayed the normal architecture of the epidermis, dermis and subcutaneous tissues, similar to the untreated tissue samples, on days 2 and 5 of implantation (Figure 7B and Supplementary Figure S2B,C). These results are consistent with the promising biocompatibility of the EPL-catechol hydrogel.

Discussion

Burn patients are at high risk of infection as a result of the nature and extent of burn injuries and prolonged hospital stays. Burn injury compromises skin integrity and immunity, affecting crucial functions such as homeostasis and protection against infection [1].
Hydrogel wound dressings have been shown to contribute to wound debridement through the rehydration of nonviable tissue [36], which is necessary for wound healing. However, current hydrogel wound dressings cannot effectively control microbial growth. In this work, we presented a catechol cross-linked EPL hydrogel with potential antimicrobial action against multidrug-resistant microbes such as MRSA and MRAB. The EPL-catechol hydrogels were prepared by in situ cross-linking of the amine groups of EPL molecules with catechol (Scheme 1) [37,38]. During the reaction, the generation of o-quinone by oxidative conversion of the catechol moiety changes the color to brown [39]. As shown in Scheme 1, the EPL-catechol hydrogel turns brown, suggesting that the reaction has occurred [39]. SEM revealed the homogeneous microporous structure of the hydrogel. The hydrogel structure became more porous as the concentration of EPL increased (Figure 1). Increasing the EPL concentration of the hydrogel also increased the preserved water content to above 90%, which could provide a moist environment for faster wound healing. The adjustable EPL content plays an important role in the porous structure and water content of the hydrogels, which can meet different clinical demands. EPL is valued as a safe, thermostable and, most importantly, strong antimicrobial agent. It has been widely used for food preservation and has been approved by the U.S., Korean and Japanese authorities [40]. EPL has broad-spectrum antimicrobial action against both Gram-negative and Gram-positive bacteria [17]. In our in vitro studies, the EPL-catechol hydrogel showed potent antimicrobial effects against clinical MRAB and MRSA bacterial strains. An increase in bacterial inhibition was noticed with increasing EPL concentration in the various hydrogels (Figure 2). SEM studies showed that MRAB contacting the EPL-catechol hydrogel surface exhibited cell wall rupture and surface roughening, indicating disruption of the bacterial cell membrane (Figure 3). Furthermore, MRAB cells were alive and growing well on TCPS (control) but dead and inhibited on the EPL-catechol hydrogel. This mechanism is consistent with the finding that EPL kills bacteria by adsorbing onto the cell surface and perturbing the outer cell membrane, leading to abnormal distribution of the cytoplasm and ultimately damage to the microbe [41]. EPL has a positively charged, hydrophobic -(CH2)4- segment in each repeating unit of its backbone, which interacts with the pathogen's anionic and hydrophobic membrane surface [17]. Thus, the SEM and LIVE/DEAD assays collectively suggest that the EPL-catechol hydrogel is effective against MRAB by damaging the bacterial cell membrane (Figure 4). These results are consistent with previous reports showing that EPL distributes a positive charge and causes cell membrane fracture [29,42]. It is crucial for functionalized hydrogels to possess biocompatibility, which is important for wound healing [43]. Our results showed that the EPL-catechol hydrogel is nontoxic to C2C12 cells and possesses excellent cytocompatibility (Figure 5). Recently, bioluminescent models have been widely used to analyze the antimicrobial activity of drugs and to evaluate bacterial resistance to antimicrobial agents [44,45]. Bioluminescent S. aureus models have been studied to assess pathogenesis in deeper host tissues and for drug efficacy evaluation [46].
Bioluminescent models of bacterial infection and wound sepsis are an interesting research approach for accurately representing the different stages of infection and biofilm formation. Ogunniyi et al. [47] successfully reported bioluminescent S. aureus infection and treatment in partial-thickness and second-degree burns. The use of bioluminescent A. baumannii and S. aureus may provide reliable future models to assess the preclinical efficacy of the EPL-catechol hydrogel in treating burn wound infections. However, the resistance of bioluminescent A. baumannii toward various antimicrobial drugs is yet to be evaluated. In our study we used a clinical strain of A. baumannii which has shown resistance to multiple drugs (Supplementary Table S1). Burn wound infection caused by MRAB is a global healthcare issue due to its inherent antibiotic resistance. MRAB has become resistant to carbapenems and to last-resort antibiotics such as colistin and tigecycline [48]. To demonstrate the clinical potential of our designed EPL-catechol hydrogel, the MRAB-infected burn wound was treated with the hydrogel for 2 days and examined by SEM. We observed fewer bacterial cells in the EPL-catechol hydrogel-treated burn wound compared with the control group, where a large number of bacteria adhered to the wound bed (Figure 6). These results clearly demonstrate the in vivo antimicrobial action of the EPL-catechol hydrogel. It is important to prepare a hydrogel that is nontoxic toward mammalian cells. Subcutaneous implantation of the hydrogel showed no signs of inflammation and displayed the normal architecture of the epidermis, dermis and subcutaneous tissues, similar to the untreated control group (Figure 7). Thus, the EPL-catechol hydrogel is a promising wound dressing material.
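As a closing numerical note on the in vivo results above, the reported log reductions (1.2 × 10^9 versus 2.1 × 10^4 CFU/g on day 1, and 1.1 × 10^7 versus ~22 CFU/g on day 2) can be reproduced with a short calculation, and the same sketch shows the standard arithmetic for converting colony counts from ten-fold serial dilutions into CFU per gram of tissue. The dilution factors, plate counts and tissue mass in part 1 are invented for illustration; the CFU/g values in part 2 are those quoted in the text.

```python
import math

# 1) Serial-dilution arithmetic: colonies counted on a plate -> CFU per gram of tissue.
#    Invented example: biopsy homogenized in 600 ul (0.6 ml) PBS, 50 ul plated from
#    the 10^-3 dilution, 85 colonies counted, tissue mass 0.02 g.
def cfu_per_gram(colonies, dilution_factor, plated_volume_ml, homogenate_volume_ml, tissue_mass_g):
    cfu_per_ml = colonies * dilution_factor / plated_volume_ml
    return cfu_per_ml * homogenate_volume_ml / tissue_mass_g

print(f"{cfu_per_gram(85, 1e3, 0.05, 0.6, 0.02):.2e} CFU/g")

# 2) Log reduction between control and treated wounds, using the CFU/g values
#    reported in the text.
def log_reduction(control_cfu, treated_cfu):
    return math.log10(control_cfu / treated_cfu)

print(f"day 1: {log_reduction(1.2e9, 2.1e4):.2f} log reduction")   # ~4.76
print(f"day 2: {log_reduction(1.1e7, 22):.2f} log reduction")      # ~5.70
```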
On the Equivalence of Two Competing Affirmative Actions in School Choice

This note analyzes the outcome equivalence conditions of two popular affirmative action policies, majority quota and minority reserve, under the student optimal stable mechanism. These two affirmative actions generate an identical matching outcome if the market is either effectively competitive or contains a sufficiently large number of schools.

Introduction

Affirmative action policies are meant to mitigate ethnic and socio-economic disparities in schools by providing minority students preferential treatment in the admission process. In practice, one popular policy design is the quota-based affirmative action (majority quota, henceforth), which sets a maximum number of seats, less than a school's capacity, for majority students and leaves the difference to minority students. Based on [6]'s deferred acceptance algorithm, [3] shows that the student optimal stable mechanism (SOSM) with majority quotas is stable and strategy-proof for students. [7] propose the reserve-based policy (minority reserve, henceforth), which gives minority students preferential treatment up to the reserves. Their results indicate that, without losing stability and strategy-proofness, the minority reserve policy unilaterally outperforms its majority quota counterpart under the SOSM in terms of students' welfare.

Given the prevalence of the majority quota policy in practice, introducing the minority reserve policy into the admission process could evoke substantial political, administrative and cognitive costs for local communities, which may offset or even surpass its theoretical efficiency edge. It is thus unnecessary to abandon the majority quota policy if we are able to identify the causes of the Pareto dominance relation between the majority quota and its minority reserve counterpart. In this note, we first introduce the effective competition condition to ensure that, for each school with nonzero reserved seats for minorities, the number of minority students who list it as their first choice is no less than the number of its reservations, i.e., competition for each reserved seat is fierce among minority students. Our Proposition 1 shows that the majority quota and the minority reserve generate the same outcome under the SOSM if the matching market is effectively competitive; in other words, the efficiency edge of the minority reserve policy over its majority quota counterpart essentially comes from the possible misallocation of reserved slots to less desirable schools with insufficient competition among minorities. As our effective competition condition is not exclusive (there may be other subsets of finite markets which are not effectively competitive but still generate identical matching outcomes under the two affirmative actions; see Example 2 in Appendix A.1), we further investigate the asymptotic outcome equivalence of these two affirmative actions in a sequence of random markets of different sizes. Proposition 2 implies that the probability that the two affirmative actions generate the same matching outcome under the SOSM converges to one when the market contains sufficiently many schools with relatively few reserved seats. In other words, there is no need to distinguish these two affirmative action policies if the social planner can assure a sufficient supply of popular schools in the matching markets.
Although our large market setting relies on a number of regularity conditions, this framework has been adopted in several recent analyses of the asymptotic properties of matching mechanisms in different contexts. Among others, [9] show that truthful reporting is an approximate Bayes-Nash equilibrium in two-sided one-to-one matching markets. [12] extend this approximate strategy-proofness concept to many-to-one matching markets. [13] and [4] prove the existence of asymptotically stable matching mechanisms in the National Resident Matching Program with both single and married doctors; in particular, [4] improve the growth rate of couples at n^(1-ε) from ε ∈ (1/2, 1) in [13] to ε ∈ (0, 1) by considering a particular sequence of proposals by couples, while preserving the linear growth of hospitals and single doctors. [8] show that all stable mechanisms asymptotically respect improvements of school quality (i.e., a school matches with a set of more desirable students if it becomes more preferred by students). [14] shows that the minority reserve policy is very unlikely to hurt any minority students in stable mechanisms. By alternatively examining the vanishing, in probability, of market disruptions caused by a fraction of agents, [5] suggest that the inefficiency of the deferred acceptance algorithm [6] and the instability of the top trading cycles mechanism [16] remain significant even when the market grows large.

School Choice

Let S and C be two finite and disjoint sets of students and schools, |S| ≥ 2. There are two types of students, majority and minority, and S is partitioned into two subsets of students based on their types. Denote by S_M the set of majority students and by S_m the set of minority students, with S = S_M ∪ S_m and S_M ∩ S_m = ∅. Each student s ∈ S has a strict preference order P_s over the set of schools and being unmatched (denoted by s). All students prefer to be matched with some school rather than remain unmatched: c P_s s for all s ∈ S. Each school c ∈ C has a total capacity of q_c seats, q_c ≥ 1, and a strict priority order ≻_c over the set of students which is complete, transitive, and antisymmetric. Student s is unacceptable to a school c if e ≻_c s, where e represents an empty seat in school c. A school choice market is a tuple G = (S, C, P, ≻, q), where P = (P_i)_{i∈S}, ≻ = (≻_c)_{c∈C} and q = (q_c)_{c∈C}. Denote P_{-i} = (P_j)_{j∈S\i} and ≻_{-c} = (≻_{c'})_{c'∈C\c}. For a given G, assume that all components except P are commonly known. Since the sets of schools and students are fixed, we simplify the market as G = (P, ≻, q).

A matching μ is a mapping from S ∪ C to the subsets of S ∪ C such that, for all s ∈ S and c ∈ C: (i) μ(s) ∈ C ∪ {s}; (ii) μ(s) = c if and only if s ∈ μ(c); (iii) μ(c) ⊆ S and |μ(c)| ≤ q_c; and (iv) |μ(c) ∩ S_M| ≤ q^M_c. That is, a matching specifies the school to which each student is assigned, or that she is matched with herself, and the set of students assigned to each school. A mechanism f is a function that produces a matching f(G) for each market G. [3] composes the student optimal stable mechanism with majority quotas algorithm (SOSM-Q henceforth), in which each school c cannot admit more majority students than its type-specific majority quota q^M_c ≤ q_c, with q^M = (q^M_c)_{c∈C}.
A matching μ is blocked by a pair of student s and school c with majority quota if c P_s μ(s) and either |μ(c)| < q_c and s is acceptable to c, or: (i) s ∈ S_m and c strictly prefers s to some s' ∈ μ(c); (ii) s ∈ S_M, |μ(c) ∩ S_M| < q^M_c, and c strictly prefers s to some s' ∈ μ(c); (iii) s ∈ S_M, |μ(c) ∩ S_M| = q^M_c, and c strictly prefers s to some s' ∈ μ(c) ∩ S_M.

Affirmative Action Policies

A matching μ is stable with majority quotas if μ(s) P_s s for all s ∈ S and it has no blocking pair. The SOSM-Q algorithm runs as follows:

Step 1: Each student s applies to her first-choice school (call it school c). School c rejects s if either its q_c seats are filled by students who have higher priorities than s at c, or s ∈ S_M and its q^M_c seats for majority students are filled by majority students having higher priorities than s at c. Each school c tentatively accepts the remaining applicants until its capacity is filled, while maintaining |μ(c) ∩ S_M| ≤ q^M_c, or the applicants are exhausted.

Step k: Each student s who was rejected in Step (k-1) applies to her next highest choice (call it school c, if any). Each school c considers these students together with the applicants tentatively accepted in the previous steps. School c rejects s if either its q_c seats are filled by students who have higher priorities than s at c, or s ∈ S_M and its q^M_c seats for majority students are filled by majority students having higher priorities than s at c. Each school c tentatively accepts the remaining applicants until its capacity is filled, while maintaining |μ(c) ∩ S_M| ≤ q^M_c, or the applicants are exhausted.

The algorithm terminates either when every student is matched to a school or when every unmatched student has been rejected by every school acceptable to her; it always terminates in a finite number of steps. Denote the new mechanism by f^Q, and its outcome in market G^q by f^Q(G^q), where G^q = (P, ≻, (q, q^M)). [7] further propose the more flexible student optimal stable mechanism with minority reserves algorithm (SOSM-R henceforth), in which each school c gives priority to minority applicants up to its minority reserve r^m_c ≤ q_c, r^m = (r^m_c)_{c∈C}, and is allowed to accept majority students up to its capacity if there are not enough minority applicants to fill the reserves. A matching μ is blocked by a pair of student s and school c with minority reserve if s strictly prefers c to μ(s) and either |μ(c)| < q_c and s is acceptable to c, or: (i) s ∈ S_m and c strictly prefers s to some s' ∈ μ(c); (ii) s ∈ S_M, |μ(c) ∩ S_m| > r^m_c, and c strictly prefers s to some s' ∈ μ(c); (iii) s ∈ S_M, |μ(c) ∩ S_m| ≤ r^m_c, and c strictly prefers s to some s' ∈ μ(c) ∩ S_M. A matching μ is stable with minority reserves if μ(s) P_s s for all s ∈ S and it has no blocking pair. The SOSM-R algorithm runs as follows:

Step 1: Each student s applies to her first-choice school. Each school c first tentatively accepts up to r^m_c minorities with the highest priorities if there are enough minority applicants; it then tentatively accepts the remaining applicants with the highest priorities until its capacity is filled or the applicants are exhausted. The rest (if any) are rejected.

Step k: Each student s who was rejected in Step (k-1) applies to her next highest choice (if any). Each school c considers these students together with the applicants tentatively accepted in the previous steps. School c first tentatively accepts up to r^m_c minorities with the highest priorities if there are enough minority applicants; it then tentatively accepts the remaining applicants with the highest priorities until its capacity is filled or the applicants are exhausted. The rest (if any) are rejected.
The algorithm terminates either when every student is matched to a school or when every unmatched student has been rejected by every school acceptable to her; it always terminates in a finite number of steps. Denote the new mechanism by f^R, and its outcome in market G^r by f^R(G^r), where G^r = (P, ≻, (q, r^m)). Denote by Γ = (P, ≻, (q^M, r^m)) the market when we compare the effects of a majority quota policy in market G^q and its corresponding minority reserve policy in market G^r, where r^m_c + q^M_c = q_c for all c ∈ C.

Outcome Equivalence in Finite Markets

The efficiency loss of the majority quota policy, as indicated by [7], essentially comes from a rejection chain initiated by a school with excessive majority applicants over its quota and an insufficient number of minority applicants. We first introduce the following condition to guarantee sufficient competition among minority students for each reserved seat. In words, we say an affirmative action is effectively implemented in a school choice market if each school with nonzero reservations has at least as many minority applicants as its reservations in the first step of the matching process.

Proposition 1. Consider majority quotas q^M and minority reserves r^m such that r^m + q^M = q. If the market is effectively competitive, then the SOSM-Q and the SOSM-R produce the same matching outcome, f^Q(G^q) = f^R(G^r).

Proposition 1 indicates that if the reserved seats (i.e., the seats with r^m_c > 0 under the minority reserve policy, or equivalently with q_c - q^M_c > 0 under its majority quota counterpart, for c ∈ C) are properly allocated among schools that are highly desired by the minorities, the two affirmative actions will produce an identical matching outcome, as the same set of students is tentatively accepted in each step of the SOSM with either majority quotas or minority reserves. In other words, the efficiency edge of the minority reserve policy over its majority quota counterpart essentially comes from reserved seats with insufficient competition among minorities.

Remark 1. Note that altering from one affirmative action policy to the other is not a mere change of the matching rule (e.g., replacing the existing Boston school choice mechanism with the SOSM in Boston city); it also imposes different priority orders across majority and minority students in schools with excessive majority applicants and an insufficient number of minority applicants (see Expressions (1) and (2) in Appendix A.2). Although our effective competition condition is not exclusive (there are other subsets of markets which are not effectively competitive but still generate identical matching outcomes under these two affirmative actions; see Example 2 in Appendix A.1), it guarantees an identical alteration of the priorities of each school with nonzero reserved seats. Since a large portion of performance comparison criteria is only applicable to an identical underlying market under different matching mechanisms, the effective competition condition essentially characterizes a subset of markets in which we are able to compare the performance of these two affirmative action policies.

Asymptotic Equivalence in Large Markets

Motivated by the success of applying asymptotic analysis techniques to restore some negative results in finite matching markets [9,12,13,4,8,5], we further explore the asymptotic outcome equivalence of these two affirmative actions in a sequence of random markets of different sizes. Define a random market as a tuple Γ̃ = ((S_M, S_m), C, ≻, (q^M, r^m), k, (A, B)), where k is the length of students' preference lists and A and B are probability distributions over the schools from which majority and minority students, respectively, draw their preferences. We assume that A for majorities differs from B for minorities, to reflect their distinct tastes for schools.
Each random market induces a market through randomly generated preference orders of each student s, according to the following procedure introduced by [9]:

Step 1: Select a school independently from the distribution A (resp. B). List this school as the top-ranked school of a majority student s ∈ S_M (resp. minority student s ∈ S_m).

Step l ≤ k: Select a school independently from A (resp. B) that has not been drawn in steps 1 through l-1. List this school as the l-th most preferred school of a majority student s ∈ S_M (resp. minority student s ∈ S_m).

Each majority (resp. minority) student finds these k schools acceptable, and only lists these k schools in her preference order. A sequence of random markets is denoted by (Γ̃^1, Γ̃^2, ...), where Γ̃^n = ((S^{M,n}, S^{m,n}), C^n, ≻^n, (q^{M,n}, r^{m,n}), k^n, (A^n, B^n)) is a random market of size n, with |C^n| = n the number of schools and |r^{m,n}| the number of seats reserved for minorities. We introduce the following regularity conditions to guarantee the convergence of the sequence of random markets.

Definition 2. Consider majority quotas q^M and minority reserves r^m such that r^m + q^M = q. A sequence of random markets (Γ̃^1, Γ̃^2, ...) is regular if there exist a ∈ [0, 1/2), λ, κ, θ > 0, r ≥ 1, and positive integers k̄ and q̄ such that, for all n:
1. k^n ≤ k̄;
2. q_c ≤ q̄ for all c ∈ C^n;
3. the number of students grows at most proportionally with the number of schools, and the total capacity of the schools is sufficient to accommodate all students;
4. |r^{m,n}| ≤ θ n^a;
5. α_c ≤ r α_{c'} for all c, c' ∈ C^n, where α_c denotes the probability that school c is drawn from A^n (resp. B^n);
6. α_c = 0 for all c ∈ C^n with q^M_c = 0.

Condition (1) assumes that the length of students' preference lists is bounded from above across schools and markets. This is motivated by the fact that the reported preference orders observed in many practical school choice programs are quite short; for example, about three quarters of students ranked fewer than 12 schools among over 500 school programs in New York City [1], whereas fewer than 10% of students rank more than 5 schools at the elementary school level, out of around 30 different schools in their own walk zones, in Boston [2]. Condition (2) requires that the number of seats in any school is also bounded across schools and markets; that is, even if some schools tend to enroll more students than others, the difference in their capacities is limited. Condition (3) requires that the number of students does not grow much faster than the number of schools; in addition, there is an excess supply of school capacity to accommodate all students, which is consistent with most public school choice programs in practice. Note that Condition (3) does not distinguish the growth rates of majority and minority students, because minority students are generically treated as the intended beneficiary groups of affirmative action policies rather than being defined by race or any other single socio-economic status; thus, the number of minority students is not necessarily smaller than that of majorities. Condition (4) requires that the number of seats reserved for minority students grows at the slower rate of O(n^a), where a ∈ [0, 1/2). This regularity condition guarantees that any market disruption caused by schools with either a majority quota or its minority reserve counterpart is likely to be absorbed by other schools without affirmative actions when the market contains sufficiently many schools. We examine alternative regularity conditions with a slower growth of the number of minority students in the Online Appendix, see: http://yunliueconomics.weebly.com/uploads/3/2/2/1/32213417/equal_aa_app.pdf. Condition (5) is termed moderate similarity in [8]; it is also called sufficient thickness in [12,13] and uniformly bounded preferences in [4].
It requires that the popularity of different schools, as measured by the probability of being selected by students from A for majorities and B for minorities, does not vary too much; in other words, the popularity ratio of the most favorable school to the least favorable school is bounded. Condition (6) requires that a majority student will not select a school that can only accept minority students after imposing the quota q^M_c = 0, as the two affirmative actions trivially induce disparate matching outcomes in arbitrarily large markets whenever a majority student applies to a school with zero majority quota. We employ the probability distributions A and B for majority and minority students to capture their distinct preferences over schools, and in particular the exclusion of schools with zero majority quota from majority students' lists, as required here.

Definition 3. For any random market Γ̃, let η_c(Γ̃; f, f') be the probability that f(Γ̃) and f'(Γ̃) differ at school c. We say two mechanisms are outcome equivalent in large markets if, for any regular sequence of random markets (Γ̃^1, Γ̃^2, ...), max_{c∈C^n} η_c(Γ̃^n; f, f') → 0 as n → ∞; that is, for any ε > 0, there exists an integer m such that for any random market Γ̃^n in the sequence with n > m and any c ∈ C^n, we have max_{c∈C^n} η_c(Γ̃^n; f, f') < ε.

We are now ready to present our main argument on the asymptotic outcome equivalence of the majority quota policy and its minority reserve counterpart under the SOSM for a regular sequence of random markets.

Proposition 2. The SOSM-Q and its corresponding SOSM-R are outcome equivalent in large markets.

Proof. See Appendix A.3.

As the two affirmative actions result in different matching outcomes only when some schools with nonzero reserved seats (i.e., r^{m,n}_c under the minority reserve policy and q^n_c - q^{M,n}_c under its corresponding majority quota policy, for some c ∈ C^n) have excessive majority applicants and an insufficient number of minority applicants (Proposition 1), Proposition 2 implies that, when the number of reserved seats does not grow too fast along the sequence of random markets, it is very unlikely for any two particular students to apply to the same school with nonzero reserved seats under either the SOSM-Q or the SOSM-R when the market contains sufficiently many schools.

Remark 2. One key regularity condition in Definition 2 is that we assume the growth rate of the reserved seats is lower than that of n. To see why our asymptotic outcome equivalence result would not hold if the number of reserved seats |r^{m,n}| grew at rate n^a, a ∈ [1/2, 1], as n approaches infinity, let us first consider a random market Γ̃^n without any reserved seats, which clearly generates an identical stable matching μ^n under either the SOSM-Q or the SOSM-R, as no school's priorities are affected by the affirmative actions. We then add reserved seats one at a time into Γ̃^n to measure the effect on μ^n when each added seat is either reserved for minorities (i.e., minority reserve) or treated as an admission cap for majority students (i.e., majority quota) at some school c.
The first reserved seat r_1 ∈ r^{m,n} will initiate a rejection chain under the SOSM-Q but not under the SOSM-R if the seat is allocated to a school c that has already accepted more majority students than its majority quota, |μ^n(c) ∩ S^{M,n}| > q^{M,n}_c, as c's least favorable majority mate (denoted by s_1) will not be rejected under the SOSM-R, given its flexible admission cap for majorities; however, c is forced to reject s_1 under the rigid majority quota of the SOSM-Q, which makes the rejected majority student s_1 apply to her next favorite school. The rejection chain can cause several students (either majorities or minorities) who were temporarily assigned to some schools to continue applying. Since a school c' ∈ C\c will not reject any student if it has a vacant position, the rejection chain terminates when a student rejected from her previously matched school is accepted by a school with a vacancy. As Condition (3) ensures an excess supply of school capacity, the probability that c' has a vacancy is 1 - λ (for simplicity we assume that λ ∈ (0, 1), q_c = 1 for all c ∈ C^n, and that A and B are both uniform distributions). Following a procedure similar to [4], we can show that, with probability 1 - 1/n, the number of schools involved in the rejection chain initiated by a single reserved seat is bounded above by λ log n/(1 - λ). When the second reserved seat r_2 is added to the market, it can likewise evict matched students from at most λ log n/(1 - λ) schools, among which the probability that it affects the schools holding the two reserved seats r_1 and r_2 is bounded accordingly (a back-of-envelope version of this bound is sketched below, after Remark 3). Since we have at most R̄ = θn^a reserved seats, the probability that any school is involved in a rejection chain with R̄ reserved seats is bounded by a term that converges to zero as n goes to infinity whenever a < 1/2. Clearly, the outcomes of the SOSM-Q and its corresponding SOSM-R will not be asymptotically equivalent in large markets for any a ≥ 1/2, as the probability that a school with reserved seats is involved in a rejection chain initiated by the rejection of a majority student under the SOSM-Q but not under the SOSM-R will not converge to zero, as illustrated above. In other words, our argument is closer to the direct rejection approach used in [13], which prohibits any rejection of couples from their currently matched hospitals with high probability, since a different application order, as in [4], will not change the set of students a school is matched with under either the SOSM-Q or the SOSM-R, given the deferred acceptance nature of the SOSM based on Gale and Shapley's original algorithm.

Remark 3. Similar to the arguments in [12] (see their Footnote 32) and [8], we can relax the bound on the length of preference orders (Condition (1) of Definition 2) to k^n = o(log n), i.e., the number of schools that are acceptable to each student grows without bound but at a sufficiently slow rate. We preserve Condition (1) because: (i) the main mechanics of our large market model and results are robust to changes in the preference length as long as the slower growth of reserved seats (Condition (4)) and the moderate similarity (Condition (5)) are satisfied; (ii) assuming k^n ≤ k̄ also complies with the observation that most reported preference orders in the real world are quite short, since ranking many schools to form a lengthy preference order is (physically and mentally) costly for most students.
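The explicit probability bound invoked in Remark 2 is not reproduced in this excerpt. A back-of-envelope reconstruction, assuming (as in the remark) that each of the R̄ = θn^a reserved seats starts at most one rejection chain touching no more than λ log n/(1 − λ) of the n schools, and applying a union bound over the reserved-seat schools, would read roughly as follows; the expression is illustrative and not the authors' exact formula.

```latex
\[
\Pr\bigl[\text{some school holding a reserved seat is hit by a rejection chain}\bigr]
\;\le\;
\bar{R}\cdot\frac{\bar{R}\,\lambda\log n}{(1-\lambda)\,n}
\;=\;
\frac{\theta^{2}\lambda}{1-\lambda}\cdot\frac{\log n}{n^{\,1-2a}} .
\]
```

This quantity vanishes as n → ∞ exactly when a < 1/2, matching the threshold in Condition (4) and the failure of the argument claimed for a ≥ 1/2.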
Remark 4. We can also consider other alternative assumptions for our large market model, for example: (i) students can be assigned to groups (geographic regions) with heterogeneous group-specific preference distributions, i.e., A_s ≠ A_{s'} and B_{s̃} ≠ B_{s̃'} for some s, s' ∈ S_M and s̃, s̃' ∈ S_m; (ii) not all but only a sufficiently large subset of schools satisfies the moderate similarity of Condition (5); (iii) even though each student independently draws her preferences from either A or B, students' preferences are allowed to retain certain correlations, in the sense that they are correlated through a random state variable σ but are still conditionally independent given σ (e.g., σ can represent changes in teaching quality across schools). Provided that our Condition (4) is satisfied, these alternative assumptions do not invalidate our asymptotic outcome equivalence result (see the discussions in [12,13,8] under similar large matching market settings, especially Section 2.4 of [8]'s online appendix).

Concluding Remarks

This note studies the outcome equivalence conditions of the majority quota policy and the minority reserve policy under the student optimal stable mechanism. Our results imply that the same set of students is matched under these two affirmative actions if the social planner can either properly assign each reserved slot to schools that are highly competitive among minority students, or ensure a sufficient supply of desirable schools for both minority and majority students. Given the transparent priority orders and the historical preference data available in many practical school choice problems, a more prudent allocation of the reserved seats is clearly much more cost-effective for accommodating affirmative action policies than introducing alternative matching mechanisms to local communities. Therefore, apart from redesigning new matching mechanisms, future research can also work on identifying the corresponding (asymptotic) equivalence conditions of different affirmative actions under other conventional matching mechanisms.

A.1 Examples

Example 1. (The SOSM-Q and the SOSM-R produce different matching outcomes in an ineffectively competitive market.) Consider the following market Γ = (P, ≻, (q^M, r^m)) with two schools C = {c_1, c_2} and four students S = {s_1, s_2, s_3, s_4}, where S_M = {s_1, s_3} and S_m = {s_2, s_4}, and q_{c_1} = q_{c_2} = 2. Schools and students have the following priority and preference orders:

Suppose that Γ has the following majority quota and its corresponding minority reserve: (q^M_{c_1}, q^M_{c_2}) = (1, 0), or correspondingly, (r^m_{c_1}, r^m_{c_2}) = (1, 2) (i.e., no majority student is acceptable at c_2). Γ is ineffectively competitive, as no minority student applies to c_2 in the first step of the two mechanisms (i.e., no minority lists c_2 as her first choice). The SOSM-Q and the SOSM-R produce different matching outcomes as:

Suppose instead that Γ has the following majority quota and its corresponding minority reserve: (q^M_{c_1}, q^M_{c_2}) = (1, 1), or correspondingly, (r^m_{c_1}, r^m_{c_2}) = (2, 1). The SOSM-Q and the SOSM-R produce the same matching outcome:

A.2 Proof of Proposition 1

Consider a market Γ with either the majority quota q^M or its corresponding minority reserve r^m, such that r^m + q^M = q. Given the majority quota policy q^M, we can split each school c with capacity q_c and majority quota q^M_c into two corresponding sub-schools, the original sub-school (c_o) and the quota sub-school (c_q), c = (c_o, c_q).
c_o has a capacity of q^M_c and maintains the original priority order ≻, whereas c_q has a capacity of q_c − q^M_c and a new priority order ≻^q_c; that is, c_q keeps the same pointwise priority order as school c over all minority students, but prefers leaving an empty seat (e) to accepting any majority student. Correspondingly, when Γ has the minority reserve r^m = q − q^M, we can split each school c with capacity q_c and a minority reserve r^m_c into two corresponding sub-schools, the unaffected sub-school (c_u) and the reserve sub-school (c_r), c = (c_u, c_r). c_u has a capacity of q_c − r^m_c and maintains the original priority order ≻. c_r has a capacity of r^m_c and a new priority order ≻^r_c; that is, c_r keeps the same pointwise priority order as school c for all majority students and all minority students, but prefers all minorities to any majorities. When the market Γ is effectively competitive, all the reserved seats (i.e., |r^m| under the minority reserve policy and |q − q^M| under its corresponding majority quota policy) are filled by minority students at Step 1 of the SOSM-Q and the SOSM-R, which ensures that no majority student can replace any minority student who has tentatively filled one of these reserved seats in later steps. Thus, we have ≻^q_c = ≻^r_c for each c ∈ C, as illustrated above. Since different affirmative action policies do not alter students' preference orders by assumption, each school receives the same set of applicants at Step 1 of the SOSM-Q and the SOSM-R. Together with ≻^o_c ≡ ≻^u_c and ≻^q_c = ≻^r_c for all c ∈ C, we know that each school also accepts the same set of students at Step 1 of the SOSM-Q and the SOSM-R. Denote by µ^Q_k(c) (resp. µ^R_k(c)) the set of students accepted by school c at Step k of the SOSM-Q (resp. SOSM-R), k ≥ 1. Assume that each school receives and tentatively accepts an identical set of students up to Step k of the SOSM-Q and the SOSM-R, µ^R_k(c) = µ^Q_k(c). We will argue by contradiction. Case (i): at Step k + 1, let s be a student who was rejected by school c under the SOSM-Q but was tentatively accepted by c under the SOSM-R, i.e., s ∈ µ^R_{k+1}(c) \ µ^Q_{k+1}(c). As µ^R_k(c) = µ^Q_k(c), while c receives the same set of applicants at the beginning of Step k + 1 of the SOSM-Q and the SOSM-R, we have |µ^R_k(c)| = |µ^Q_k(c)| = q_c (i.e., c has no empty seat left at the beginning of Step k + 1 of either the SOSM-Q or the SOSM-R); otherwise, s would not be rejected from c under the SOSM-Q, given that µ^Q_k(c_q) = µ^R_k(c_r) ⊆ S^m. Therefore, there is another student

A.3 Proof of Proposition 2

We first incorporate a stochastic variant of the SOSM introduced by [15] with affirmative actions, in the sense that all majority and minority students are ordered in some predetermined (for instance, random) manner. Following the approach used in [9,12,13,8,5], we use the principle of deferred decisions and the technique of amnesia, which were originally proposed by [10,11], to simplify the stochastic process of Algorithm 1. By the principle of deferred decisions, we assume that students do not know their preferences in advance, and whenever a student has an opportunity to submit an application according to the predetermined order, she applies to her most preferred school among those that have not yet rejected her.
Since a school's acceptances could depend on the set of students that have applied to it, we also apply the technique of amnesia to remove this dependence on past applications, in the sense that each student makes her applications randomly to a school in C, as she cannot remember any of the schools she has previously applied to. Assuming students are amnesiac does not affect the final outcome of the algorithm, since if a student applies to a school that has already rejected her, she will simply be rejected again. To ease the superscript notation, we relabel the set of majority students S^M and the set of minority students S^m by S(M) and S(m), respectively. Let A_s and B_s be the respective sets of schools that a majority (resp. minority) student has already drawn from A_n and B_n. When |A_s| = k (resp. |B_s| = k) is reached, A_s (resp. B_s) is the set of schools that is acceptable to the majority student s ∈ S(M) (resp. minority student s ∈ S(m)).

ii. If s ∈ S(m), then let s be the l(m)-th minority student, and increment l(m) by one.
(b) If not, then terminate the algorithm.
3. (a) If s ∈ S(M):
i. If |A_s| ≥ k, she has already applied to every acceptable school; go back to Step (2).
ii. If not, select c randomly from the distribution A_n until c ∉ A_s. Add c to A_s, if c has not rejected s yet.
(b) If s ∈ S(m):
i. If |B_s| ≥ k, she has already applied to every acceptable school; go back to Step (2).
ii. If not, select c randomly from the distribution B_n until c ∉ B_s. Add c to B_s, if c has not rejected s yet.
4. Acceptance and/or rejection:
(a) If c has neither a majority quota nor a minority reserve:
i. If c has an empty seat, then s is tentatively accepted. Go back to Step (2).
ii. If c has no empty seat and prefers each of its current mates to s, then c rejects s. Go back to Step (3).
iii. If c has no empty seat but prefers s to one of its current mates, then c rejects the matched student with the lowest priority. Let this rejected student be s and go back to Step (3).
(b) If c has either a majority quota or a minority reserve but has not received any application yet, then s is tentatively accepted, as no majority student will apply to a school with q^M_c = 0 (Condition (6)).
(c) If c has a majority quota, and has tentatively accepted some students:
ii. If c has no empty seat and prefers each of its current mates to s, then c rejects s. Go back to Step (3).
iii. If c has no empty seat but prefers s to one of its current mates, then c rejects the matched student with the lowest priority while not admitting more majority students than its majority quota q^M_c. Let this rejected student be s and go back to Step (3).
(d) If c has a minority reserve, and has tentatively accepted some students:
i. If c has an empty seat, then s is tentatively accepted. Go back to Step (2).
ii. If c has no empty seat and prefers each of its current mates to s, then c rejects s. Go back to Step (3).
iii. If c has no empty seat but prefers s to one of its current mates, then c rejects the matched student with the lowest priority. Let this rejected majority student be s and go back to Step (3).

The stochastic SOSM with affirmative actions terminates at Step (2b). By the principle of deferred decisions, the probability that Algorithm 1 arrives at any given step is identical regardless of whether the random preferences are drawn all at once in the beginning or drawn iteratively during the execution of the algorithm.
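To make the comparison between the two policies concrete, here is a minimal Python sketch. It is our own illustration, not the paper's Algorithm 1: it runs a deterministic student-proposing deferred acceptance procedure with two standard choice rules from the affirmative-action literature, one enforcing a majority quota (SOSM-Q) and one implementing a minority reserve (SOSM-R). The quotas (q^M_{c_1}, q^M_{c_2}) = (1, 0), capacities, and the majority/minority split follow the first case of Example 1, but the preference and priority orders are our own assumptions (the paper's preference table is not reproduced in the text), as are all function names.

```python
# Minimal, illustrative sketch (not the paper's Algorithm 1): deterministic
# student-proposing deferred acceptance combined with two standard choice
# rules, one for a majority quota and one for a minority reserve.
# The toy preferences/priorities and all names below are our own assumptions.

def deferred_acceptance(prefs, choose):
    """prefs: student -> ordered list of acceptable schools.
    choose: (school, applicant_pool) -> set of tentatively held students."""
    next_rank = {s: 0 for s in prefs}   # index of the next school s will try
    held = {}                           # school -> set of tentatively held students
    free = set(prefs)                   # students currently not held anywhere
    while True:
        proposers = [s for s in free if next_rank[s] < len(prefs[s])]
        if not proposers:               # every free student has exhausted her list
            return {c: sorted(ss) for c, ss in held.items()}
        for s in proposers:
            c = prefs[s][next_rank[s]]
            next_rank[s] += 1
            pool = held.get(c, set()) | {s}
            kept = choose(c, pool)
            held[c] = kept
            free -= kept                # newly held students leave the free pool
            free |= pool - kept         # rejected students re-enter the free pool

def choose_quota(c, pool, priority, cap, maj_quota, is_majority):
    """Admit applicants by priority, never exceeding the majority quota q^M_c."""
    kept, n_major = set(), 0
    for s in sorted(pool, key=priority[c].index):
        if len(kept) == cap[c]:
            break
        if is_majority[s] and n_major >= maj_quota[c]:
            continue                    # quota binds: skip further majority students
        kept.add(s)
        n_major += is_majority[s]
    return kept

def choose_reserve(c, pool, priority, cap, reserve, is_majority):
    """Fill up to r^m_c seats with the best minority applicants, then the rest by priority."""
    ranked = sorted(pool, key=priority[c].index)
    minorities = [s for s in ranked if not is_majority[s]]
    kept = set(minorities[:reserve[c]])
    for s in ranked:                    # remaining seats go to anyone, by priority
        if len(kept) == cap[c]:
            break
        kept.add(s)
    return kept

# Toy market: capacities and quotas as in the first case of Example 1,
# preferences/priorities invented so that no minority student ranks c2 first.
is_majority = {"s1": True, "s2": False, "s3": True, "s4": False}
prefs = {s: ["c1", "c2"] for s in is_majority}
priority = {"c1": ["s1", "s2", "s3", "s4"], "c2": ["s1", "s2", "s3", "s4"]}
cap = {"c1": 2, "c2": 2}
maj_quota = {"c1": 1, "c2": 0}                      # q^M
reserve = {c: cap[c] - maj_quota[c] for c in cap}   # r^m = q - q^M

sosm_q = deferred_acceptance(prefs, lambda c, p: choose_quota(c, p, priority, cap, maj_quota, is_majority))
sosm_r = deferred_acceptance(prefs, lambda c, p: choose_reserve(c, p, priority, cap, reserve, is_majority))
print("SOSM-Q:", sosm_q)   # e.g. {'c1': ['s1', 's2'], 'c2': ['s4']} -- s3 stays unmatched
print("SOSM-R:", sosm_r)   # e.g. {'c1': ['s1', 's2'], 'c2': ['s3', 's4']}
```

In this toy market the quota version leaves s_3 unmatched, because c_2 accepts no majority students, whereas the reserve version lets s_3 take the reserved seat at c_2 that no minority student claims. This mirrors the kind of divergence that Example 1 illustrates for an ineffectively competitive market, under the stated assumptions about the toy preferences.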
Hematopoietic plasticity mapped in Drosophila and other insects

Hemocytes, similar to vertebrate blood cells, play important roles in insect development and immunity, but it is not well understood how they perform their tasks. New technology, in particular single-cell transcriptomic analysis in combination with Drosophila genetics, may now change this picture. This review aims to make sense of recently published data, focusing on Drosophila melanogaster and comparing to data from other drosophilids, the malaria mosquito, Anopheles gambiae, and the silkworm, Bombyx mori. Basically, the new data support the presence of a few major classes of hemocytes: (1) a highly heterogeneous and plastic class of professional phagocytes with many functions, called plasmatocytes in Drosophila and granular cells in other insects; (2) a conserved class of cells that control melanin deposition around parasites and wounds, called crystal cells in D. melanogaster and oenocytoids in other insects; (3) a new class of cells, the primocytes, so far only identified in D. melanogaster, which are related to cells of the so-called posterior signaling center of the larval hematopoietic organ, a center that controls the hematopoiesis of other hemocytes; and (4) different kinds of specialized cells, like the lamellocytes in D. melanogaster, for the encapsulation of parasites. These specialized cells undergo rapid evolution, and the homology relationships between such cells in different insects are uncertain. Lists of genes expressed in the different hemocyte classes now provide a solid ground for further investigation of function.

Introduction

Like most other animals, the fruit fly Drosophila melanogaster has blood cells patrolling all parts of the organism (Rizki, 1978; Rizki, 1984; Meister and Lagueux, 2003). These cells, the hemocytes, attack pathogens, participate in blood clotting and wound healing, and mediate the remodeling of tissues during development. Our understanding of how hemocytes carry out these tasks is surprisingly limited, at least on the molecular level. This may seem surprising, considering the popularity of this model organism. However, important steps forward have been taken during the past two decades. Antibodies and genetic markers have been developed to follow the hemocytes in vivo and manipulate their activities, and the hematopoietic events that control the production of these cells have now been characterized in considerable detail, as summarized in excellent reviews (Evans et al., 2003; Martinez-Agosto et al., 2007; Honti et al., 2014; Ramond et al., 2015; Gold and Brückner, 2015; Letourneau et al., 2016; Csordás et al., 2021). Our present knowledge about the role of hemocytes in immunity has been reviewed by Carton et al., 2008; Williams, 2007; Theopold and Dushay, 2007; Wang et al., 2014; and Yang et al., 2020, and their involvement in embryology and wound healing by Fauvarque and Williams, 2011; Evans and Wood, 2014; Theopold et al., 2014; and Ratheesh et al., 2015. For a recent comprehensive review of the entire field, with an emphasis on hematopoiesis, see Banerjee et al., 2019. In spite of this progress, many central questions remain to be answered, but the development of single-cell sequencing technology may change the picture.

The proportion of constitutively produced lamellocytes was considerably increased in a population that had been raised under intense parasitoid wasp parasitism, and these larvae were also more resistant to parasitism.
Standard lab stocks, which have been bred for decades without such selective pressure, may have lost the constitutive lamellocyte phenotype. Embryonic and adult hemocytes Drosophila hemocytes have been most thoroughly investigated in third-instar larvae, from which large numbers of hemocytes are conveniently available. Embryonic hemocytes, which are not freely circulating, were mainly studied in the context of embryonic development and wound healing (Ratheesh et al., 2015;Vlisidou and Wood, 2015;Wood and Martin, 2017). Both plasmatocytes and crystal cells have been identified in the embryo, but no lamellocytes (Fossett et al., 2003). Comparing the transcriptomes of embryonic and larval hemocytes, Cattenoz et al., 2020 found that extracellular matrix components were more highly expressed in the former and phagocytosis receptors in the latter. Adults have relatively few hemocytes, and the number is decreasing with age. It has been debated if they are mitotically active. (Ghosh et al., 2015) identified active hematopoiesis in hemocyte clusters in the dorsal abdomen of the fly, but that finding was later refuted by Sanchez Bosch et al., 2019. Recently, Boulet et al., 2021 investigated the hemocyte populations in adult flies using a set of transgenic marker constructs. They reported that mitosis in adult hemocytes is rare and restricted to a separate population of progenitor cells, as discussed in the section about primocytes. Like in the embryo, adult hemocytes are mainly sessile, and their overall transcriptome is more similar to that of embryonic than larval hemocytes. A majority of the adult hemocytes are plasmatocytes and only a small number are crystal cells (Kurucz et al., 2007b). Terminology Rizki's original terminology, which we have adhered to here, is well established in the Drosophila literature. However, we should warn the readers that the plasmatocytes in Drosophila should not be confused with the hemocytes called plasmatocytes in other insect orders (Table 1). Instead, Drosophila plasmatocytes most likely correspond to the cells called granular cells or granulocytes. In Lepidoptera, granular cells are functional phagocytes, whereas lepidopteran plasmatocytes are main capsule-forming hemocytes, much like Drosophila lamellocytes (Strand, 2008). Furthermore, crystal cells and lamellocytes are uniquely found only in the closest relatives of D. melanogaster. As discussed in detail below, the crystal cells are homologous to the cells called oenocytoids in other insects. Like crystal cells, oenocytoids are carriers of the phenoloxidase cascade components, but the crystal cell morphology is unique to a few Drosophila species. This is all rather confusing, and there is certainly room for a revision of Drosophila blood cell terminology. The motile form of plasmatocytes seen in Drosophila embryos has often been called 'macrophages,' and sometimes the same term has been extended to include all plasmatocytes, or even all hemocytes in general. Lanot et al., 2001 andLagueux, 2003 used the term to describe the activated plasmatocytes that are observed at the onset of metamorphosis and in the pupa. In this review, we have avoided the macrophage terminology entirely as it could be misunderstood to imply homology (rather that analogy) between Drosophila plasmatocytes and vertebrate macrophages. For similar reasons, we here use the term granular cell rather than granulocyte. 
The specialization of vertebrate blood cells into myeloid and lymphoid lineages probably happened after the split between protostomes and deuterostomes (such as insects and vertebrates, respectively), and the further specialization of vertebrate myeloid cells into macrophages and other subclasses must be an even later event. Nevertheless, specialized phagocytes must have existed throughout metazoan evolution, and plasmatocytes are therefore good models to understand mammalian phagocytes, such as neutrophils, monocytes, dendritic cells, and macrophages. Drosophila hematopoiesis During development, hemocytes are produced in two waves (Holz et al., 2003). The first wave is initiated in the embryo, from cells originating in the head mesoderm. These cells give rise to embryonic plasmatocytes and crystal cells, which are then directly carried over to the larvae where they act as founders of the larval circulating and sessile hemocytes. Hemocytes of this first wave also contribute to the pupal and adult hemocyte populations. The second wave originates from the thoracic mesoderm, which develops into a hematopoietic organ situated next to the anterior end of the dorsal vessel in the larva. This hematopoietic organ has been given the unfortunate name 'lymph gland,' although its function is more akin to that of the mammalian bone marrow than to lymph glands. Hemocytes are released from the lymph gland at the end of the larval stage, and these hemocytes contribute to the pupal and adult hemocyte populations. In response to parasitoid wasp infection, the lymph gland can also release hemocytes precociously. Cells from both hematopoietic waves contribute to all three classes of hemocytes, plasmatocytes, crystal cells, and, when required, lamellocytes. The lymph gland The genetic control of hematopoiesis has been studied in great detail in the lymph gland. The gland is made up of paired lobes, arranged on each side of the dorsal vessel. The anterior, or primary, lobes are largest and the ones that differentiate first. They are followed by more posterior pairs, the secondary, tertiary, and sometimes quaternary lobes, plus sometimes a variable number of smaller aggregates of hematopoietic cells along the dorsal vessel. The ordered structure of the primary lobes, where cells at different stages of differentiation are organized in different layers, has made them a favorite object of study. Undifferentiated progenitor cells are aggregated in a medially located medulla, usually called the medullary zone, which is directly attached to the dorsal vessel. More laterally, differentiating cells form a cortex, the cortical zone. Cells in transition are found in an intermediate zone, positioned between the medullary and cortical zones. Finally, a small group of cells at the posterior tip of the primary lobe, in direct contact with the medullary zone, form an interesting and rather mysterious structure, the posterior signaling center (PSC). This center was proposed to act as a niche that controls hematopoietic events (Lebestky et al., 2003), an idea that was further supported by the finding that the signaling molecule Hedgehog, secreted from the center, suppresses hemocyte differentiation. In this way, the PSC was suggested to control the balance between undifferentiated precursor cells and differentiating hemocytes (Mandal et al., 2007;Krzemieล„ et al., 2007). 
However, this interpretation was later challenged by the finding that the proportion of progenitors was unaffected when the PSC was ablated by induced apoptosis (Benmimoun et al., 2015b;Benmimoun et al., 2015a). More complex models have therefore been proposed based on the observation that the medullary zone cells are phenotypically and functionally heterogeneous (Oyallon et al., 2016;Baldeosingh et al., 2018;Banerjee et al., 2019). According to these models, stem cell maintenance is controlled PSC-independently in one subpopulation of medullary zone cells, called core progenitors, PSC-independent progenitors, or preprogenitors. These core progenitors may be precursors of the remaining cells in the medullary zone, the further differentiation of which is controlled by Hedgehog from the PSC. A new twist to this conundrum comes from the recent discovery that the core progenitor population is instead controlled by signals from the dorsal vessel. The dorsal vessel secretes a fibroblast growth factor (FGF) homolog, Breathless, which promotes stem cell maintenance (Destalminil-Letourneau et al., 2021). The PSC itself is also controlled by signals from the dorsal vessel, via a secreted glycoprotein encoded by the slit gene. This would all make the dorsal vessel more akin to the vascular hematopoietic niche in vertebrates, while the role of the PSC is more complex. The posterior signaling center is also required for the induction of lamellocyte formation, independently of its role in stem cell maintenance. Lamellocytes fail to differentiate in knot mutant lymph glands, which lack a posterior signaling center (Crozatier et al., 2004), or when PSC cells are ablated by induced apoptosis (Benmimoun et al., 2015b) (knot [Bridges and Brehme, 1944] is often referred to by the junior synonym collier). Interestingly, these manipulations abolish lamellocyte formation altogether, even among the circulating descendants of the first hematopoietic wave. This indicates that the posterior signaling center either acts remotely, via diffusible signals, or that knot-dependent PSC-like cells may exist elsewhere, in direct contact with the peripheral hemocytes. As discussed below, in the section about primocytes, the possible existence of such a class of cells is now supported by recent single-cell sequencing data (Cattenoz et al., 2020;Tattikota et al., 2020;Fu et al., 2020). Unlike the primary lymph gland lobes, the posterior lobes lack a clearly stratified structure, and their hematopoiesis is less well studied. They are not in direct contact with a signaling center, remain undifferentiated at the onset of metamorphosis, and do not initiate differentiation when the animal is infected (Rodrigues et al., 2021). Peripheral hematopoiesis Compared to the orderly events that go on in the lymph gland, hematopoiesis has been more difficult to study in the circulating and sessile larval hemocytes. It is clear, however, that the fully differentiated plasmatocytes, which derive from the first wave of embryonic hematopoiesis, are actively dividing throughout larval development (Makhijani et al., 2011). Mitotic plasmatocytes have been observed both among the freely circulating hemocytes and in the population of sessile hemocytes (Rizki, 1957;Mรกrkus et al., 2009;Kurucz et al., 2007a;Makhijani et al., 2011), and the mitotic activity is highest in connection with each larval molt (Rizki, 1957). 
Thus, the larval plasmatocytes are propagated by self-renewal of differentiated cells, without contribution from undifferentiated hematopoietic cells in the lymph gland or elsewhere (Makhijani et al., 2011), at least in healthy larvae. Only about 50% of the larval hemocytes circulate freely in the hemolymph. The remaining cells are attached to the basal membrane under the skin and on other tissues. Circulating and sessile hemocytes are in constant exchange, and the sessile hemocytes can rapidly be mobilized when the animal is disturbed. The attachment of sessile hemocytes depends on the interaction between the membrane protein Eater on the hemocytes and the specialized collagen Multiplexin in the extracellular matrix (Bretscher et al., 2015;Csordรกs et al., 2020). Under the epidermis, the attachment sites are arranged segmentally in a manner that is regulated by activin-ฮฒ, secreted by sensory nerves of the peripheral nervous system (Makhijani et al., 2017). Unlike plasmatocytes, mature crystal cells have never been observed to divide (Rizki, 1957;Leitรฃo and Sucena, 2015). Instead, they are generated by transdifferentiation of fully differentiated plasmatocytes in the sessile compartment (Leitรฃo and Sucena, 2015). This process requires Notch expressed in the transdifferentiating cell, and the Notch ligand Serrate in its plasmatocyte neighbors. The exact role of the sessile compartment in this context is still uncertain. Crystal cells are formed even in eater mutant animals, which have no sessile compartment (Bretscher et al., 2015). Finally, the origin of lamellocytes is not yet entirely settled. Rizki, 1957 proposed that lamellocytes originate from plasmatocytes in the circulating compartment via podocytes as an intermediate stage. Later, Lanot et al., 2001 showed that lamellocytes are formed inside the lymph glands, and they proposed that this is the major, if not the only, source of lamellocytes. This conclusion was in turn questioned by Mรกrkus et al., 2009, who showed that the lymph gland was not required for lamellocyte production. Using a ligation technique, they separated hemocytes in the posterior end from the lymph glands in the anterior part. Wasp infection in the posterior end of the animal triggered lamellocyte formation and encapsulation of the parasite in that part, but not in the anterior half. Furthermore, fluorescently marked sessile cells from the posterior end of a larva gave rise to lamellocytes when they were transplanted into an unmarked host. The present consensus is that both the pre-existing larval hemocyte population and the lymph glands contribute to produce lamellocytes. This conclusion was confirmed by a lineage-tracing approach, showing that lamellocytes in a wasp-infected larva have a mixed origin, including cells from both developmental waves of hematopoiesis (Honti et al., 2010). Notably, the lymph gland-derived lamellocytes were relatively few in this experiment (8% of all lamellocytes), and they were not released into circulation until 2-3 days after infection. Further lineagetracing experiments have shown that lamellocytes can be generated directly by transdifferentiation of differentiated plasmatocytes (Stofanko et al., 2010;Avet-Rochex et al., 2010). Recently, a more detailed analysis of the circulating hemocyte population after wasp infection added some complication to this picture (Anderl et al., 2016). At 8-10 hr after infection, a new population of hemocytes, dubbed lamelloblasts, was first observed. 
They were morphologically similar to plasmatocytes, but they were distinguished by a 10-fold lower expression of the plasmatocyte marker, eaterGFP. By 14 hr, the lamelloblasts had increased in number, to become even more abundant than the plasmatocytes. Later, the lamelloblast population was gradually replaced by cells that expressed increasing levels of a lamellocyte marker, msnCherry, and decreasing levels of eaterGFP. These prelamellocytes were finally replaced by fully differentiated msnCherry+, eaterGFP-lamellocytes. Because few intermediates were seen between the plasmatocytes and the lamelloblasts, it was speculated that the lamelloblasts originate from the sessile population. Simultaneously with the changes in the lamellocyte lineage, the plasmatocyte population also changed in appearance, the plasmatocytes became larger and more granular. Later they also began to accumulate cytoplasmic msnCherry-positive foci, suggesting that they had phagocytized lamellocyte fragments. There was evidence of intense mitotic activity among the lamelloblasts and prelamellocytes, but the mature lamellocytes have never been observed to divide (Rizki, 1957). The majority of lamellocytes generated in this way (type I) show no trace of plasmatocyte markers. However, under some circumstances lamellocytes can develop directly from differentiated plasmatocytes, for instance, those attached on the parasite egg (Anderl et al., 2016) or when activated in vitro (Stofanko et al., 2010), resulting in 'double-positive' lamellocytes, expressing both plasmatocyte and lamellocyte markers (type II). Hematopoiesis in other insects Historically, a rich literature has described hemocytes from various insect orders (e.g., Cuรฉnot, 1891;Paillot, 1933;Jones, 1962;Lackie, 1988), but for most of them we have little information about their hematopoiesis. Best studied are some mosquitoes and lepidopterans (Strand, 2008). Compared to the organized structure of the hematopoietic organs in Drosophila and Lepidoptera, no structured hematopoietic tissue has yet been described in mosquitoes. Three main hemocyte types -granulocytes, oenocytoids, and prohemocytes -were distinguished from one another by a combination of morphological and functional markers in two compartments, the circulation and the sessile tissue (Strand, 2008). The hemocytes of the adult females received more attention than those of the larvae as they are vectors of pathogens. However, adult males, pupae, and larvae contain the same hemocyte types as adult females. The sessile hemocytes, in the form of aggregates, however, show different characteristic spatial distribution in different developmental stages (Castillo et al., 2006;Hillyer and Christensen, 2002;League and Hillyer, 2016;League et al., 2017). These aggregates are reminiscent of niches for hemocyte development in lepidopterans and Drosophila, suggesting the existence of a dedicated hematopoietic tissue. However, the development of specific markers corresponding to functionally different subsets will be required to characterize the possible functional heterogeneity in these aggregates and help to reveal lineage relationships in specific sites of hematopoiesis. Studies on lepidopterans mainly focus on the immune systems of Manduca sexta and Bombyx mori, whose larvae contain four, functionally different hemocyte classes: capsule-forming plasmatocytes, phagocytic granular cells, oenocytoids, providing enzymes for the melanization cascade, and spherule cells, with a so far unknown function (Strand, 2008). 
Similar to the situation in Drosophila, the differentiated hemocytes derive both from the embryonic head mesoderm and from specialized hematopoietic organs that are associated with the wing discs in the larva (Nardi, 2004). The lobes of the hematopoietic organ contain prohemocytes and plasmatocytes, whereas the other cell types may derive directly from hemocytes in the circulation (Lavine and Strand, 2002;Strand, 2008). In the hematopoietic organ of B. mori, compact and loose regions as well as free cells were observed. The compact islets consist of proliferating prohemocytes and plasmatocytes, whereas differentiated hemocytes are found in the loose regions in late larvae (Grigorian and Hartenstein, 2013). This observation suggests that there is an anatomical and functional subdivision of the organ. In vivo and in vitro analysis of B. mori hemocytes confirms that the hematopoietic organ may serve as a niche for hemocyte development in lepidopterans (Nakahara et al., 2010). A comprehensive analysis of the fine structure of the M. sexta lymph gland (von Bredow et al., 2021) with a combination of monoclonal antibodies and the lectin peanut agglutinin (PNA) revealed zones with different binding characteristics, showing that the organ is subdivided into anatomical areas with prohemocytes, and differentiating and mature hemocytes, reflecting/suggesting a gradual development of hemocyte subsets within the organ. Ablation experiments revealed that the hematopoietic organ serves as a source of plasmatocytes and putative prohemocytes. However, unlike in Drosophila, this occurs throughout the larval stages, not only at the onset of metamorphosis. The lobes of the hematopoietic organs are compartmentalized, but a focus that directs hemocyte development, like the posterior signaling center does in Drosophila, has not been observed yet. The relative role of hematopoietic organs and versus hemocytes of embryonic origin as precursors of differentiated hemocytes, and the possible role of transdifferentiation, is still under active investigation (Nardi, 2004;Grigorian and Hartenstein, 2013;Nakahara et al., 2010;von Bredow et al., 2021). Single-cell RNA sequencing defines hemocyte heterogeneity Recently, several groups have used single-cell RNA sequencing technology to study the hemocyte diversity in Drosophila larvae and the relationship between the different hemocyte classes. Four published studies deal with the peripheral hemocytes, that is, the sessile and circulating hemocytes of the first hematopoietic wave (Cattenoz et al., 2020;Tattikota et al., 2020;Fu et al., 2020;Leitรฃo et al., 2020), and two focus on the lymph glands Girard et al., 2021). The Fly Cell Atlas, which appeared recently (Li et al., 2022), presents additional data on all cells in the adult fly, including hemocytes. These studies have confirmed the existence of unique hemocyte 'clusters,' corresponding to the classically defined crystal cell and lamellocyte classes, and in some cases also to their precursors. Not surprisingly, however, the plasmatocytes turned out to be heterogenous, and they were split by the different authors into several different clusters. Each cluster was defined by a unique pattern of gene expression, but these patterns were not entirely congruent between the different studies. The six studies on larval hemocytes identified between 2 and 13 plasmatocyte clusters, and in addition between 3 and 6 prohemocyte clusters in the lymph glands. 
A consensus view of the situation was recently published (Cattenoz et al., 2021), drawing general conclusions from three of the studies (Cattenoz et al., 2020; Tattikota et al., 2020; Fu et al., 2020). To bring further clarity to this issue, we have now compiled lists of the genes that are specifically expressed in each cluster and compared these lists between the different studies (Figure 1-source data 1). To reduce noise, we set a threshold of at least 1.4-fold enrichment (2-fold for the data from Girard et al., 2021, where the relative enrichment values were generally higher). As Fu et al., 2020 did not provide comprehensive lists of specifically expressed genes, we have only listed genes mentioned in the text and figures. The most characteristic lamellocyte and crystal cell markers are listed in Figure 1, and primocyte and plasmatocyte markers in Figure 2.

Lamellocytes: Highly active immune effector cells

In total, transcripts of 136 genes were significantly enriched in lamellocyte clusters in at least two of the studies, and 32 were enriched in at least four studies (Figure 1, Figure 1-source data 1). These genes include the well-established lamellocyte markers atilla, PPO3, ItgaPS4, cheerio, rhea (talin), and mys (Kurucz et al., 2007b; Dudzic et al., 2015; Irving et al., 2005). The widely used lamellocyte marker misshapen (msn) (Tokusumi et al., 2009a) turned up in only one of the studies. Disappointingly, these lamellocyte markers were not detected exclusively in the lamellocyte clusters. Only five genes were consistently enriched by 10-fold or more, while other marker genes were typically enriched 4-fold or less. This could be due to incomplete separation between the lamellocyte clusters and other hemocytes, to admixture with lamellocyte precursors, or to uptake of lamellocyte fragments by other cells, as discussed below. The highly expressed genes atilla and CG15347 both encode GPI-anchored membrane proteins, related to a group of Ly6-like proteins that mediate septate junction formation (Nilton et al., 2010). They may play a role in strengthening the capsules formed by these effector cells around parasites, while the Prophenoloxidase 3 (PPO3) gene encodes a phenoloxidase that is involved in the melanization of these capsules (Nam et al., 2008; Dudzic et al., 2015).

Figure 1. Lamellocyte and crystal cell marker genes. Genes with enhanced expression in lamellocyte- and crystal cell-related clusters, as reported by Cattenoz et al., 2020 (Cat), Tattikota et al., 2020 (Tat), Fu et al., 2020 (Fu), Leitão et al., 2020 (Lei), Cho et al., 2020 (Cho), and Girard et al., 2021 (Gir). For each gene, the cluster where it is most strongly expressed is indicated in red for lamellocyte and blue for crystal cell clusters; the names of previously established markers for these classes are printed in the same colors. Lamellocytes: genes enriched in lamellocyte clusters in at least four out of five studies, with enrichment > 3-fold (geometric mean). Crystal cells: genes enriched in crystal cell clusters in at least three out of six studies, with enrichment > 4-fold (geometric mean). The figure summarizes the most consistently and strongly enhanced genes for each of these cell classes and the average (geometric mean) fold enhancement ('FC'). Relative expression ('FC') in bulk plasmatocytes compared to whole larvae, as reported by Ramond et al., 2020, is shown in a separate column (Ram). As we lack full data from Fu, we have only listed examples mentioned in the text and figures of that study. For a full list of all enhanced genes, see Figure 1-source data 1 (Genes with enhanced expression in specific cell classes).
Strikingly, the list of lamellocyte-specific genes includes many genes involved in cytoskeletal activity, cell adhesion, cell motility, and even muscle activity (Figure 1-source data 2), suggesting a physically very active role for these cells. Two tubulin genes, αTub85E and βTub60D, are among the most highly enriched transcripts in the lamellocyte clusters, and two others, αTub84B and βTub97EF, are also on the list. Furthermore, two cytoplasmic actin genes, Act42A and Act5C, and no less than 21 genes involved in actin filament-based processes are more or less enriched in at least two of the five studies (Figure 1-source data 2). Regarding genes involved in cell adhesion, two α-integrins, ItgaPS4 and mew, and two β-integrins, mys and Itgbn, are highly enriched, as are several components of the intracellular machinery that mediates integrin interaction with the cytoskeleton and integrin-mediated cell adhesion: rhea, plx, parvin, stck, Pax, ics, and Ilk. In line with a physically active role for the lamellocytes, the most highly enriched genes include the Trehalase (Treh) and Trehalose transporter 1-1 (Tret1-1) genes (Figure 1), which mediate the uptake and utilization of trehalose from the hemolymph as an energy source for the cells. CG1208 encodes another potential sugar transporter that may also be involved in this traffic. As shown by Bajgar et al., 2015, the uptake of sugars into hemocytes is dramatically increased in wasp-infected larvae. Other tissues, muscles in particular, respond by mobilizing glycogen stores in order to supply trehalose to the hemolymph (Yang and Hultmark, 2017). The up to 30-fold increased expression of sugar transporters in lamellocytes, compared to other hemocytes (Figure 1), suggests that the lamellocytes are major consumers of these sugars. Cattenoz et al., 2021 also noted that target genes of Tor and foxo, which regulate nutrient metabolism, were particularly enriched in the lamellocyte clusters. This is most likely connected to the extreme metabolic needs of these cells. The geared-up metabolism of the lamellocytes may also be associated with a switch towards aerobic glycolysis, mediated by extracellular adenosine released from immune cells (Bajgar et al., 2015), although that metabolic switch has been best studied in phagocytically activated plasmatocytes (Bajgar and Dolezal, 2018; Krejčová et al., 2019; Bajgar et al., 2021). However, in agreement with a specific role of extracellular adenosine in lamellocyte hematopoiesis, the adenosine deaminase-related growth factor A (Adgf-A) and adenosine receptor (AdoR) transcripts are enriched in the lamellocytes (Figure 1-source data 1). AdoR encodes a G protein-coupled receptor that functions via cAMP and PKA activation. Accordingly, target genes of the cyclic-AMP response element binding protein B (CrebB) are also enriched in the lamellocytes (Cattenoz et al., 2021). Adgf-A encodes a deaminase that regulates the level of extracellular adenosine. Finally, target genes of the JNK pathway are also enriched in the lamellocyte clusters (Cattenoz et al., 2021). This supports the idea that JNK signaling may be directly involved in lamellocyte differentiation (Zettervall et al., 2004; Tokusumi et al., 2009b).
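The cross-study consolidation described earlier in this section (per-study enrichment thresholds, counting how many studies support each gene, geometric-mean fold change) can be sketched in a few lines of Python. This is only an illustration of the procedure as it is described in the text: the study labels, gene names, and fold-change values in the example are placeholders, not the published enrichment tables.

```python
# Minimal sketch of the cross-study marker consolidation described above:
# keep genes passing a per-study fold-change threshold, count supporting
# studies, and report the geometric mean fold change.
# The example values below are placeholders, not the published data.
from math import prod

def consensus_markers(enrichment, thresholds, min_studies):
    """enrichment: study -> {gene: fold change in the cluster of interest}.
    thresholds: study -> minimal fold change counted as 'enriched'.
    Returns {gene: (n_supporting_studies, geometric_mean_fold_change)}."""
    support = {}
    for study, table in enrichment.items():
        cutoff = thresholds[study]
        for gene, fc in table.items():
            if fc >= cutoff:
                support.setdefault(gene, []).append(fc)
    return {g: (len(fcs), prod(fcs) ** (1 / len(fcs)))
            for g, fcs in support.items() if len(fcs) >= min_studies}

# Placeholder example: three studies, fold changes in a lamellocyte cluster.
enrichment = {
    "Cat": {"atilla": 12.0, "PPO3": 30.0, "mys": 2.1},
    "Tat": {"atilla": 8.0, "PPO3": 25.0, "Treh": 15.0},
    "Gir": {"atilla": 20.0, "PPO3": 40.0, "mys": 1.5},
}
thresholds = {"Cat": 1.4, "Tat": 1.4, "Gir": 2.0}   # stricter cutoff for Gir
print(consensus_markers(enrichment, thresholds, min_studies=2))
# -> atilla and PPO3 pass (3 studies each); mys is dropped (above cutoff in only one study)
```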
Figure 2. Primocyte and plasmatocyte marker genes. For each gene, the cluster where it is most strongly expressed is indicated in purple for primocyte-like clusters and green for plasmatocytes; the names of previously established markers for these classes are printed in the same colors. Primocytes: genes enriched in posterior signaling center-like clusters in at least three out of five studies, with enrichment > 4-fold (geometric mean).

Crystal cells

The crystal cell clusters also form a well-defined class, in this case with a high degree of overlap between all six studies (Figure 3). 137 genes were preferentially expressed in crystal cell clusters in at least two of the studies, and 35 genes in at least five of them (Figure 1, Figure 1-source data 1). However, the level of enrichment varied enormously between the studies, with particularly high values reported by Girard et al., 2021. The putative precursors in the CC1 clusters (Cho et al., 2020) overlap less with the major crystal cell clusters. As expected, well-established crystal cell markers like PPO1, PPO2 (Binggeli et al., 2014), and lozenge (lz) (Ferjoux et al., 2007) were represented among the preferentially expressed genes, although high enrichment of lozenge (53-fold) was reported in one study only. In the other studies, lozenge was enriched threefold at best, if at all (Figure 1, Figure 1-source data 1). By contrast, the phenoloxidase genes PPO1 and PPO2 were very highly enriched (up to 836-fold) in all six studies. PPO3 has also been used as a crystal cell marker, but only in the embryo (Waltzer et al., 2003; Bataillé et al., 2005; Ferjoux et al., 2007); in the studies discussed here, that gene was exclusively lamellocyte-specific. More rarely used crystal cell markers like peb, klu (Terriente-Felix et al., 2013) and Jafrac1 (Waltzer et al., 2003) were also overrepresented in the crystal cell clusters, as were 19 of the 31 embryonic crystal cell markers listed by Ferjoux et al., 2007. Crystal cells are best known for their role in the melanization of wounds and encapsulated parasites, as reflected in the list of genes that are preferentially expressed in these cells (Figure 1). PPO1 and PPO2 encode phenoloxidases, key enzymes in the melanization reaction. They are copper enzymes that catalyze the oxidation of tyrosine and other phenols, leading to their polymerization into melanin (Carton et al., 2008). PPO1 is produced in an inactive form, prophenoloxidase, which is possibly secreted directly into the hemolymph. By contrast, PPO2 is kept sequestered in the crystals and released only when the activated crystal cells rupture and the crystals dissolve (Binggeli et al., 2014; Schmid et al., 2019). The proenzymes are proteolytically processed in the hemolymph into active phenoloxidase forms, in a process that involves several serine proteases (Nam et al., 2012; Dudzic et al., 2019). Together, PPO1 and PPO2 mediate the melanization of wound sites and of infecting bacteria (Binggeli et al., 2014; Dudzic et al., 2019). PPO2 from crystal cells also acts together with PPO3 in lamellocytes to melanize encapsulated parasites (Dudzic et al., 2015). Related to the production of these copper enzymes, the transcripts for two copper-transporting and -concentrating proteins, encoded by the Ctr1A and Atox1 genes, are overrepresented in the crystal cells.
Four metallothionein genes are also specifically transcribed in the crystal cells: MtnA, MtnB, MtnD, and MtnE (Figure 1, Figure 1-source data 1). Their role may be to deal with copper toxicity. The capacity of crystal cells to burst and release their contents via a pyroptosis-like mechanism (Dziedziech and Theopold, 2021), in response to infection and other challenges, may also be reflected in the single-cell transcriptome data. In all six studies, Ninjurin B (NijB) transcripts were enriched in the crystal cell clusters (Figure 1-source data 1). NijB encodes a homolog of the human Ninjurin-1, which is described as a cell adhesion protein. The corresponding mouse protein, NINJ1, was recently found to mediate plasma membrane rupture in macrophages in response to bacterial infection, and thereby the release of the pro-inflammatory cytokine IL-1β and other danger signals (Kayagaki et al., 2021). NijB may play a similar role in the infection-induced rupture of crystal cells. The membrane receptor Notch plays a central role in crystal cell fate determination and maintenance, together with the Runt domain transcription factor Lozenge, which acts downstream of and together with Notch. Consequently, Notch (N) and lozenge (lz) transcripts were picked up in most of the studies as preferentially expressed in the crystal cells, as were several Notch and Lozenge transcriptional targets: the early-onset Notch targets E(spl)m3-HLH and E(spl)mβ-HLH (Couturier et al., 2019), which encode basic helix-loop-helix transcription factors, and the Notch/Lozenge target genes pebbled (peb = hindsight), klumpfuss (klu), and CG32369 (Terriente-Felix et al., 2013). Transcripts of the numb gene, which encodes a membrane-associated inhibitor of signaling from Notch (Wu and Li, 2015), were also found to be enriched, but only in the lymph gland studies.

Figure 3 (legend, in part). The clusters prolif and X represent mitotic cells. Some plasmatocyte clusters were named after characteristic genes or gene groups that are enriched in the clusters (ImpL2, AMP, Rel, vir1, Inos, robo2, Pcd, Lsp, Ppn, CAH7, GST). Two additional suggested classes, thanacytes (TH) and adipohemocytes (Adipo), were not reproducibly observed and are not further discussed here. Finally, no genes were preferentially expressed above our cutoff in clusters PL-1 (Cattenoz et al., 2020), PM4, and PM11. The shape and size of circulating larval primocytes are unknown; the illustration is instead based on published images of primocyte-like cells in adults (Boulet et al., 2021) and primocytes in the posterior signaling center (Krzemień et al., 2007; Mandal et al., 2007).

The interpretation of the crystal cell transcriptome data is complicated by the fact that the recovery of crystal cells was in some cases very low. While normally about 5-10% of the hemocyte population are crystal cells in uninfected third-instar larvae, only 0.6% (Cattenoz et al., 2020) or 0.35% (Fu et al., 2020) of all counted hemocytes were assigned to the crystal cell clusters. Although other studies found higher numbers, many cells may have been lost due to the sensitive nature of crystal cells, which tend to burst after bleeding.

Primocytes: A new class of hemocytes related to the cells of the posterior signaling center

Unexpectedly, one additional well-defined hemocyte class stood out in these comparisons, besides lamellocytes and crystal cells. Clusters belonging to this class have a pattern of gene expression that is indistinguishable from that of cells of the posterior signaling center of the lymph glands. This cluster was called PL-ImpL2 by Cattenoz et al., 2020, who also noted the similarity to the posterior signaling center (Cattenoz et al., 2021), PM11 by Tattikota et al., 2020, and primocytes by Fu et al., 2020 (Figure 3).
We will here use the term primocytes as an inclusive term for these clusters together with the PSC clusters described from the lymph glands (Girard et al., 2021; Figure 2, Figure 1-source data 1). Notably, the primocyte class went undetected in the Leitão et al., 2020 study, perhaps because primocytes are relatively rare, only about 0.3% of all peripheral hemocytes (Cattenoz et al., 2020; Fu et al., 2020). Alternatively, they may have been lost in the He, srp selection step employed by Leitão et al., if these markers are not expressed by the primocytes. Of the genes that were preferentially expressed in primocyte clusters, 69 were identified in at least two studies, and 12 in at least four of the six studies (Figure 2, Figure 1-source data 1). Among these genes, one gene, CG15550, stands out as highly enriched in all primocyte clusters. It encodes a small predicted transmembrane protein of unknown function. CG15550 has highly conserved homologs among Drosophila species in the melanogaster and obscura groups of the Sophophora subgenus, but is strikingly absent in other organisms. Also highly enriched are well-established markers for the posterior signaling center, such as Antennapedia and knot (collier). Another gene that was identified as enriched in the primocyte clusters is ImpL2, which encodes a secreted insulin antagonist (Honegger et al., 2008) that can cause wasting by redirecting nutrients to proliferating tissues (Kwon et al., 2015). Bajgar et al., 2021 have shown that ImpL2 is secreted from certain circulating hemocytes, most likely primocytes, and thereby induces adipose tissue to release lipoproteins and carbohydrates that can be utilized by the activated immune system. The discovery of circulating primocytes may resolve the old question of how manipulations that affect the posterior signaling center can control the generation of lamellocytes not only in the lymph gland but also in the peripheral population of hemocytes. The genetic ablation of primocytes in the posterior signaling center (Crozatier et al., 2004; Benmimoun et al., 2015b) is likely to ablate primocytes elsewhere as well. An important function of the primocytes may be to directly trigger lamellocyte formation in the peripheral compartment as well as in the lymph gland, either by direct contact with lamellocyte precursors or via diffusible signals. The expression of Antennapedia (Antp) in the posterior signaling center has been taken as evidence that it originates from the mesodermal T3 segment in embryonic development, unlike the primary lymph gland lobes, which arise from segments T1-T2, and the larval hemocytes, which arise from head mesoderm (Mandal et al., 2007). Antp is also ubiquitously expressed in the circulating primocytes, suggesting that they too may have a different origin from the other larval hemocytes. It should be investigated whether cells are released from the posterior signaling centers, or perhaps from other T3-derived cells, long before the rupture of the lymph gland lobes. It is also worth noting that primocyte-like (i.e., PSC-like) cells have been detected in at least one posterior lymph gland lobe, the tertiary lobe. Like primocytes, they express knot (collier), but instead of Antp they express a more posterior homeotic gene, Ubx (Rodrigues et al., 2021; Kanwal et al., 2021). The exact role of these cells also remains to be investigated.
The circulating primocytes were generally interpreted as a subclass of plasmatocytes (Cattenoz et al., 2020; Tattikota et al., 2020; Fu et al., 2020; Leitão et al., 2020). However, since they probably share a common origin with the posterior signaling center, not with the other hemocyte classes, and since a majority of the primocyte-specific markers are highly depleted or absent in bulk plasmatocytes (Ramond et al., 2020; Figure 2), we will here treat them as a separate class of hemocytes. In their recent study of adult hemocytes, Boulet et al., 2021 identified a small cell population that expressed the domeMeso-GAL4 driver, a marker for hemocyte progenitors in the medullary zone of the larval lymph gland (Banerjee et al., 2019), and they concluded that these cells were prohemocytes. However, lineage tracing suggested that they derived from the posterior signaling center (or from a similar primocyte source). Furthermore, these cells expressed primocyte markers such as the Antp gene and the col-GAL4 driver (with the knot [col] promoter). Thus, this cell population may correspond to bona fide primocytes. These putative primocytes had a fusiform shape, with long filopodial extensions, much like the filopodia that extend from the posterior signaling center into the primary lymph gland lobes (Krzemień et al., 2007; Mandal et al., 2007). This gives them an appearance that is reminiscent of the nematocytes that have been described from other drosophilid species, discussed below. In conclusion, primocytes constitute a distinct hemocyte lineage with a different origin than other hemocytes. The functional role of the circulating primocytes remains speculative, but it is possible that they interact with and control the peripheral plasmatocytes, just as the primocytes of the posterior signaling center control hemocytes in the primary lobe of the lymph gland. That interaction would be facilitated if the circulating larval primocytes are shaped like the putative adult primocytes, with long extensions. However, this interpretation is not in line with the description of the adult primocyte-like cells as a set of prohemocytes with the capacity to divide and to differentiate into plasmatocytes (Boulet et al., 2021).

Plasmatocytes: Multitasking and very plastic cells

The data analyses in the single-cell transcriptomic studies discussed here were primarily designed to identify different plasmatocyte subgroups, not to find common markers for plasmatocytes in general. As a proxy for such pan-plasmatocyte markers, we combined three subclusters that express several classical plasmatocyte markers: (1) the PLASM1 cluster of Leitão et al., 2020, (2) the PM cluster of Cho et al., 2020, and (3) the PL2 cluster of Girard et al., 2021. After weeding out genes that are more strongly expressed in lamellocytes, crystal cells, or primocytes in any of the other transcriptome studies, we could assemble a list of 125 putative plasmatocyte-specific marker genes, 46 of which were expressed in at least two of the three clusters (Figure 2, Figure 1-source data 1).
This tentative list includes well-known plasmatocyte marker genes such as Hemolectin, Col4a1, Peroxidasin, viking, NimC1, eater, and Sr-CI (Figure 2;Goto et al., 2003;Fessler et al., 1994;Paladi and Tepass, 2004;Irving et al., 2005;Kurucz et al., 2007a;Kroeger et al., 2012). However, we can neither be sure if these markers are exclusively expressed in plasmatocytes only, nor if they are ubiquitously expressed in every plasmatocyte. To resolve these questions, raw data will have to be reanalyzed under conditions such that all plasmatocytes fall into one cluster. It is interesting to compare this list with the bulk transcriptomic analysis of total (Hemolectinpositive) plasmatocytes recently published by Ramond et al., 2020, as shown in the last column in Figure 2, although it should be kept in mind that the single-cell data shows the expression in one cluster compared to all other clusters, while the bulk data shows the expression in all (Hemolectinpositive) plasmatocytes compared to the total expression in the entire larva. Nevertheless, there is good correlation between the single-cell and the bulk plasmatocyte data sets, except that the relative enhancement is generally much higher in the bulk data, presumably because plasmatocytes are also present in the reference clusters of the single-cell data. Most primocyte markers, like Antp and knot (collier), are strongly depleted or undetected in the plasmatocyte data of Ramond et al., 2020, giving further support to the conclusion that primocytes are unrelated to the plasmatocyte class. Lamellocyte and crystal cell markers also tend to be underrepresented in the bulk plasmatocyte cell data, but there is some overlap, perhaps because the plasmatocyte sample includes precursors of lamellocytes and crystal cells. Strikingly, a large number of plasmatocyte-specific genes in the list encode basement membrane components, or are involved in extracellular matrix formation or in cell-matrix or cell-cell adhesion ( Figure 2, Figure 1-source data 2). We conclude that plasmatocytes must be constantly active in shaping and reshaping the extracellular matrix . The list also includes several known or suspected phagocytosis receptors and microbial pattern recognition molecules (Figure 2), such as NimC1, NimB4, eater, Sr-CI, and PGRP-SA (but not PGRP-LC), as well as the lectins Hemolectin and lectin-24Db. This is in line with a role of plasmatocytes in recognizing and phagocytizing microorganisms. The subclustering analysis of the single-cell transcriptomic data documented much plasmatocyte heterogeneity (Cattenoz et al., 2020;Tattikota et al., 2020;Fu et al., 2020;Leitรฃo et al., 2020;Cho et al., 2020;Girard et al., 2021), but we find only limited congruence between the different studies ( Figure 3). Based on the available data, it is therefore still not possible to identify any welldefined plasmatocyte subclasses. Thus, it may be more practical to treat the plasmatocytes as a single class, albeit a very plastic one, that turns on different transcriptional programs depending on the needs of the moment (Mase et al., 2021). The entire complement of plasmatocytes will then represent a continuum of cells that to a variable extent have activated one or more of these programs. Subclusters with an activated antimicrobial program were identified in all but one of the published studies (Figure 2-figure supplement 1, Figure 1-source data 1). 
These clusters include the ones called PL-AMP by Cattenoz et al., 2020, PM7 by Tattikota et al., 2020, AMP by Leitรฃo et al., 2020, PH4 and PH6 by Cho et al., 2020, and MZ by Girard et al., 2021 The overlap between the studies was modest. Only 21 genes, almost all of them known targets of the Imd and/or Toll signaling pathways, were shared by two or more of the studies. Only three genes, encoding different cecropins, were identified in more than three of the studies. Besides these targets of the antimicrobial response, there was very little overlap between the different studies. Similarly, the PM5 cluster of Tattikota et al., 2020 and the GST cluster of Cho et al., 2020 define a program for oxidative stress. These clusters share only 15 genes, but both clusters include genes involved in the response to oxidative stress, such as different glutathione S transferases (GSTs) ( Figure 1-source data 1). The PL-Pcd cluster of Cattenoz et al., 2020 and the Thanacyte cluster of Fu et al., 2020 have a significant overlap, and together they define cells involved in a program for protein export. These cells specifically express genes involved in the protein export pathways as well as genes encoding exported proteins, notably three thioester-containing proteins (TEPs) (Figure 1-source data 1). Two of the studies have identified cell clusters that express a mitotic program. The PL-prolif cluster of Cattenoz et al., 2020 and the X cluster of Girard et al., 2021 share a large number of genes involved in the cell cycle (Figure 1-source data 1). There is also a small but significant overlap with the PM2 and PM9 clusters of Tattikota et al., 2020. The clusters called PL-Lsp (Cattenoz et al., 2020) or Lsp + PM (Fu et al., 2020) constitute a special case. Besides a normal complement of plasmatocyte-specific genes, they are highly enriched for several genes that are otherwise only expressed in the fat body. The fact that they express plasmatocyte markers excludes the possibility that they represent a contamination by fat body cells. It is possible that these cells function as nutrient reservoirs, as suggested by Cattenoz et al., 2020. Alternatively, they may simply be plasmatocytes that have engulfed fat body fragments, in preparation for metamorphosis, as discussed below. Cattenoz et al., 2021 have made a careful and more detailed comparison between two of the studies (Cattenoz et al., 2020;Tattikota et al., 2020), and also taken the results of Fu et al., 2020 into account. Besides lamellocytes, crystal cells, and PSC-like cells (primocytes), they proposed five subgroups of plasmatocytes: proliferative, antimicrobial, phagocytic, secretory, and unspecified plasmatocytes. That classification scheme is similar to the programs we describe above, although the subgroups defined by Cattenoz et al., 2021 tend to include additional clusters. The difference seems to be due to the lower cutoff values set by Cattenoz et al., 2021. For instance, the 'proliferative' subgroup includes not only the PL-prolif cluster but also the PL-Inos cluster of Cattenoz et al., 2020. However, the level of enrichment of mitosis-specific genes is very low in the latter cluster. The most highly enriched mitotic gene, string, is 9.7-fold enriched in PL-prolif, but also 1.9-fold in PL-Inos and, surprisingly, 1.3-fold in CC. In the data from Tattikota et al., 2020, it is 1.5-fold enriched in PM9, 1.4-fold in PM2, and 1.2-fold in PM1. We conclude that low levels of mitotic activity may go on in many clusters. 
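The limited congruence discussed in this and the preceding paragraphs is straightforward to quantify once each study's cluster markers are written as gene sets: for every pair of clusters from different studies, count the shared enriched genes and, if desired, compute a Jaccard index. The sketch below only illustrates that bookkeeping; the cluster names are taken from the text, but the gene sets assigned to them are placeholders, not the published lists.

```python
# Sketch of the cross-study overlap bookkeeping discussed above: for each
# pair of clusters from different studies, count shared enriched genes and
# compute a Jaccard index. The gene sets below are placeholders.
from itertools import combinations

def cluster_overlaps(markers):
    """markers: (study, cluster) -> set of enriched genes.
    Returns [((study, cluster), (study, cluster), n_shared, jaccard), ...]."""
    rows = []
    for (a, genes_a), (b, genes_b) in combinations(markers.items(), 2):
        if a[0] == b[0]:
            continue                      # only compare clusters from different studies
        shared = genes_a & genes_b
        jaccard = len(shared) / len(genes_a | genes_b)
        rows.append((a, b, len(shared), round(jaccard, 2)))
    return sorted(rows, key=lambda row: -row[2])

markers = {  # placeholder gene sets, for illustration only
    ("Cat", "PL-AMP"): {"CecA1", "CecA2", "Dro", "AttA"},
    ("Tat", "PM7"):    {"CecA1", "CecA2", "CecC", "Mtk"},
    ("Cho", "GST"):    {"GstD3", "GstE6", "CecA1"},
    ("Tat", "PM5"):    {"GstD3", "GstE6", "GstE7"},
}
for a, b, n_shared, jac in cluster_overlaps(markers):
    print(a, b, "shared:", n_shared, "Jaccard:", jac)
```

In this placeholder example, the two antimicrobial-program clusters overlap mainly through a few cecropin genes, and the two oxidative-stress clusters through glutathione S-transferases, which is the qualitative pattern the text describes; the real shared-gene counts are those reported above, not these toy numbers.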
In general, plasmatocytes seem to be engaged in many different activities, sometimes simultaneously and to a variable degree. That makes it difficult to classify them into well-defined and reproducible subgroups. Earlier literature has documented many aspects of plasmatocyte plasticity, and the different roles of plasmatocytes are very well described in a recent review (Mase et al., 2021). At the onset of metamorphosis, plasmatocytes become very active. They become adhesive and motile, take on a podocyte morphology, and begin to phagocytize large quantities of histolyzing larval tissue, in particular muscle and fat (Lanot et al., 2001; Meister and Lagueux, 2003; Sampson and Williams, 2012; Ghosh et al., 2020). Major changes have also been observed in the plasmatocytes of wasp-infected larvae. Besides lamellocytes and their precursors, which turn up in the hemolymph of the infected larva, a population of cells of the plasmatocyte lineage begins to increase in size and granularity about 10 hr after infection, and such activated plasmatocytes become abundant after 30 hr (Anderl et al., 2016). The activated plasmatocytes were observed to express increased levels of the eaterGFP plasmatocyte marker, and they also accumulate inclusions that express the msnCherry lamellocyte marker, most likely remnants of phagocytized lamellocyte fragments. The changed plasmatocyte activity before metamorphosis and after infection is unfortunately not reflected in the single-cell sequencing studies discussed here, which calls for more extensive time series of infected and uninfected animals. Beyond the limits of this plasticity, which seems to be largely reversible, plasmatocytes also have a capacity to transdifferentiate irreversibly to become crystal cells and lamellocytes (Leitão and Sucena, 2015; Honti et al., 2010; Avet-Rochex et al., 2010; Stofanko et al., 2010), in the latter case via intermediate stages such as lamelloblasts and prelamellocytes (Anderl et al., 2016). Such intermediate stages are exemplified by the CC1 and LM1 clusters of Tattikota et al., 2020, for crystal cells and lamellocytes, respectively, and the LM-2 prelamellocytes of Cattenoz et al., 2020. It should be noted that these crystal cell and lamellocyte precursor clusters share no genetic markers with the similarly named CC1 and LM1 clusters of Cho et al., 2020, which originate from prohemocytes, not plasmatocytes. In conclusion, the plasmatocyte subclusters show disappointingly little overlap between the different studies. The described clusters are either unique or share a limited number of enriched genes between just a few of the studies. The only exceptions are clusters involved in an antimicrobial program. Such cells were noted in most of the studies, but the overlap includes only a very narrowly defined class of genes. The general picture is that the plasmatocytes constitute a cell class that serves many tasks, each task requiring the activation of a few specialized genes, without requiring a complete re-differentiation of the cells. Lineage-tracing assays will establish whether the observed specific features are lineage related (identity) or depend on the environment (state), and whether specified plasmatocytes arise from the nonspecified ones or from differentiated plasmatocytes that change potential.
Prohemocytes: Only in the lymph gland
It has been argued that the mitotically active hemocytes in circulation represent a prohemocyte population (Cattenoz et al., 2021), but since they express plasmatocyte markers our tentative interpretation is that mitosis occurs as a transient stage in the life of a plasmatocyte. It is uncertain if prohemocytes, that is, self-renewing and truly undifferentiated hemocytes, ever occur in circulation in Drosophila, but in the lymph gland they occupy the medullary zone, and it cannot yet be ruled out that a population of prohemocytes is hiding among the sessile hemocytes in the larva. However, it has been demonstrated that crystal cells are generated by transdifferentiation of fully differentiated sessile plasmatocytes under the skin of the larva (Leitão and Sucena, 2015), and lamellocytes are also generated from the plasmatocyte lineage in the larva (Honti et al., 2010; Avet-Rochex et al., 2010; Stofanko et al., 2010). Anderl et al., 2016 could directly confirm how plasmatocytes that were attached to the egg of a parasitoid wasp transdifferentiate into lamellocytes type II. On the other hand, Anderl et al. also observed that a large population of undifferentiated and self-proliferating hemocytes, the lamelloblasts, appeared in the Drosophila larva soon after wasp infection, and these cells later seemed to differentiate into lamellocytes via an intermediate prelamellocyte stage. It is possible that the LM1 clusters of Tattikota et al. and Cho et al., which share few if any markers with the differentiated lamellocyte clusters (Figure 3), correspond to lamelloblasts. Thus, true prohemocyte clusters were only identified in the lymph gland studies: the prohemocyte clusters PH1-PH6 of Cho et al., 2020 and the medullary zone cluster MZ of Girard et al., 2021. Surprisingly, these clusters share few markers with each other (or with other hemocyte clusters), except that MZ and PH4 both have enhanced expression of the CecA1, CecA2, and CecC antimicrobial peptide genes. The functional importance of that observation is unclear at the moment. For an update on the interesting field of lymph gland hematopoiesis, interested readers are referred to recent reviews (Banerjee et al., 2019; Csordás et al., 2021; Morin-Poulard et al., 2021).
Relationship to blood cells in other species
Armed with the new markers for specific cell classes in D. melanogaster, we can begin to look for homologous cell types in other species. Beginning with the crystal cells, where the relationships are more clear, we will here discuss the results from three single-cell transcriptomic studies of hemocytes from the malaria mosquito, Anopheles gambiae, and one from the silkworm, B. mori (Severo et al., 2018; Raddi et al., 2020; Kwon et al., 2021b; Feng et al., 2021). We will also discuss transcriptomic and genomic information available for drosophilid flies other than D. melanogaster.
Crystal cells/oenocytoids
Crystal cells are generally considered equivalent to the cells called oenocytoids in other insects (Lavine and Strand, 2002; Ribeiro and Brehélin, 2006; Hillyer, 2016; Eleftherianos et al., 2021), and there is plenty of evidence supporting that view. Crystal cells and oenocytoids have similar cytology, and neither cell type is known to undergo mitosis. Like crystal cells, oenocytoids are the main or sole source of phenoloxidases (Iwama and Ashida, 1986; Ashida et al., 1988), which are required for melanin deposition around parasites, at wound sites, and in pigmented cuticle.
Lepidopteran oenocytoids can release their phenoloxidases in a lytic reaction, in which the cells burst and release their entire contents (Strand and Noda, 1991; Ribeiro and Brehélin, 2006). This response is triggered by prostaglandin (Shrestha and Kim, 2008; Shrestha et al., 2011; Park and Kim, 2012), and a similar prostaglandin-dependent response has also been reported from mosquitoes (Kwon et al., 2021a). Likewise, Drosophila crystal cells are triggered to burst at wound sites and in response to parasitization (Rizki, 1957; Rizki and Rizki, 1959; Rizki, 1978; Bidla et al., 2007; Schmid et al., 2019). One difference is that phenoloxidase is stored in regular polyhedral ('pseudocrystalline') inclusion bodies in the crystal cells of D. melanogaster, while phenoloxidases are found in the cytoplasm or in various non-crystalline inclusions in lepidopteran oenocytoids (Iwama and Ashida, 1986; Ashida et al., 1988). Mosquito oenocytoids lack cytoplasmic inclusions, and they stain homogenously for phenoloxidase in the cytoplasm (Hillyer et al., 2003). However, most drosophilids have their phenoloxidases stored in amorphous granules, as summarized in Figure 4 (Rizki and Rizki, 1980; Rizki, 1984). Crystal cells with well-ordered crystalline inclusions have only been observed in the closest relatives of D. melanogaster. Even within the melanogaster species subgroup, D. yakuba and D. teissieri have less regular inclusion bodies. Thus, there are good reasons to conclude that crystal cells are indeed oenocytoids. The term 'crystal cell' should either be dropped altogether (Ribeiro and Brehélin, 2006) or at least be restricted to the few species that have oenocytoids with crystalline inclusions.
Figure 4. Occurrence of specialized effector cells (lamellocytes, nematocytes, multinucleated giant hemocytes, pseudopodocytes, and crystal cells) in parasitized drosophilid larvae, and correlation with presence or absence of PPO3 and ItgaPS4 genes. Consensus phylogenetic tree from Russo et al., 2013, Thomas and Hahn, 2017, Miller et al., 2018, Kim et al., 2021, and Finet et al., 2021. Basic topology from Finet et al., 2021, time calibration from Russo et al., 2013, and taxonomy from Kim et al., 2021. Presence or absence of lamellocytes (Eslin and Doury, 2006; Eslin et al., 2009; …).
Mosquitoes
We can therefore expect that the relatedness between oenocytoids and crystal cells should also be reflected by the genes they express. Three studies on hemocytes from the malaria mosquito (A. gambiae) have been published, but the data do not give an entirely coherent picture (Severo et al., 2018; Raddi et al., 2020; Kwon et al., 2021b). In Figure 5, we have summarized mosquito clusters that express orthologs of D. melanogaster crystal cell-specific markers. First, in a small study specifically focusing on oenocytoids, Severo et al., 2018 purified a population of hemocytes that express an oenocytoid-specific fluorescent marker, driven by the prophenoloxidase 6 (PPO6) gene promoter. Unexpectedly, single-cell RNA sequencing of that population identified two different kinds of cells, expressing either high or low levels of the PPO6 marker, respectively. The PPO6 gene encodes one of the nine different phenoloxidase genes of Anopheles, PPO1-PPO9. Anopheles PPO1 corresponds to the PPO2 gene in Drosophila, while Anopheles PPO2-9 are all related to Drosophila PPO1 (Figure 6). Five of them, PPO2, 4, 5, 6, and 9, were more than 1000-fold enriched in the PPO6 high population compared to PPO6 low (Figure 5; Severo et al., 2018). The remaining four genes, PPO1, 3, 7, and 8, were only expressed in a few scattered cells, but these cells also belonged to the PPO6 high cluster. The high expression of phenoloxidase genes confirms the expectation that the PPO6 high cells are oenocytoids or a subpopulation of oenocytoids. By contrast, the expression pattern of the other cluster, the PPO6 low cells, corresponded to what might be expected in granular cells, with an enrichment of orthologs of Drosophila plasmatocyte markers as discussed below, although no morphological differences were found between PPO6 high and PPO6 low cells (Severo et al., 2018). The homology between the Anopheles PPO6 high cells and Drosophila crystal cells is further supported by the fact that orthologs of several other crystal cell marker genes are enriched in the PPO6 high and none in the PPO6 low cluster (Figure 5). Besides the phenoloxidase genes, homologs of CG9119, meep, Fkbp59, and CG17109 were all more than 1000-fold enriched in the PPO6 high population (Severo et al., 2018). Homologs of CG17065, Ctr1A, CG10467, and pathetic were also enriched, but to a lesser extent. However, homologs of the classical crystal cell markers, lozenge, Notch, or pebbled, were not detected, perhaps due to low expression levels of these genes, and because very few cells were analyzed in this study. It should be noted that the recovery of oenocytoids was very low in this study. It may be that the lytic program of these cells was activated during the handling of the samples. In that case, the surviving cells may not be entirely representative of oenocytoids in general. It was suggested that the detection of the PPO6 marker in PPO6 low cells was due to uptake of RNA-laden microvesicles, shed by the PPO6 high cells (Severo et al., 2018). Alternatively, phagocytic PPO6 low granular cells may have taken up fragments of disrupted oenocytoids. In a larger study, Raddi et al., 2020 identified a likely oenocytoid cluster with enhanced transcription of the five phenoloxidase genes PPO2, 4, 5, 6, and 9. Furthermore, this cluster, called HC1, expressed two additional homologs of the Drosophila crystal cell markers (Figure 5), giving further support for the homology between Drosophila crystal cells and Anopheles oenocytoids. However, the enrichment of these or other transcripts was in general much lower than in the data from Severo et al., 2018. Again, none of the crystal cell markers lozenge, Notch, or pebbled were detected. Unlike the other single-cell transcriptomic studies, Kwon et al., 2021b did not find statistically significant enrichment of the phenoloxidase genes in any specific hemocyte cluster. The primary markers for oenocytoids, PPO2, 4, 5, 6, and 9, were expressed at moderate levels, and relatively evenly distributed between seven different hemocyte clusters (Kwon et al., 2021b). One possible explanation is that the oenocytoids were completely lysed in this experiment, and that the remnants were taken up by other hemocytes. Interestingly, however, PPO1, 3, 7, and 8 transcripts were primarily found in two clusters, cluster 7 and 8, albeit at low levels. Incidentally, the latter four genes are exactly the ones that have been linked to prostaglandin-dependent induction in oenocytoids of Plasmodium-infected mosquitoes (Kwon et al., 2021a).
The same clusters were also reported to express the crystal cell markers peb, DnaJ-1, Mlf, klu, and lozenge, although not to levels that reached statistical significance. The authors conclude that clusters 7 and 8 correspond to the oenocytoid class, but that assignment may have to be revised, as cells in cluster 8 also express primocyte markers (see below).
Figure 5. Orthologs of Drosophila lamellocyte and crystal cell markers expressed in mosquito and silkworm hemocyte clusters. Data from single-cell RNAseq studies by Severo et al., 2018 (Sev), Raddi et al., 2020 (Rad), Kwon et al., 2021a (Kwo), and Feng et al., 2021 (Feng). Drosophila markers for which no orthologs could be identified were excluded from the analysis. Clusters where the genes are significantly enriched are indicated, with highest enrichment first. Non-hemocyte clusters are omitted.
Figure 6. Phylogenetic relationships between insect phenoloxidases. Maximum parsimony tree of protein sequences found by blastp search of all annotated sequences from the family Drosophilidae and from Anopheles gambiae and Bombyx mori, in the refseq_protein database. Additional selected protein sequences were modeled from genomic sequences retrieved in a tblastn search of the refseq_genomes and wgs databases. Bootstrap values are percent support after 1000 replicates, using the PPO1-like proteins as outgroup.
Silkworms
The silkworm has three different phenoloxidase genes (Figure 6). By yet another unfortunate twist of nomenclature, the gene related to Drosophila PPO1 is called PO2 (or PPO2), and two genes related to Drosophila PPO2 are called PO1 and PO1-like (or PPO1 and PPO1-like). Only PO1 and PO2 were annotated in the database used by Feng et al., and both of them were found to be highly expressed in all four oenocytoid clusters (Figure 5). Several genes involved in general metabolism and one copper ion transporter were also upregulated, presumably to meet the needs of the copper enzyme phenoloxidase. Importantly, lozenge transcripts were significantly enriched in all four oenocytoid clusters and Notch in two of them. As lozenge and Notch are characteristic markers for the crystal cell fate in D. melanogaster and directly involved in their hematopoiesis, this is strong evidence that crystal cells are indeed oenocytoids.
Lamellocytes
Drosophilids other than D. melanogaster
Of particular interest are the lamellocytes, a cell type that is found only among the drosophilid flies. According to most authors, typical lamellocytes do not occur outside the genus Drosophila, or even outside the melanogaster and suzukii subgroups (Eslin and Doury, 2006; Eslin et al., 2009; Havard et al., 2009; Salazar-Jaramillo et al., 2014; Wan et al., 2019; Cinege et al., 2020; Figure 4). In apparent contradiction to that view, a master's thesis by Kacsoh, 2012 documents a type of large lamellocyte-like cells in wasp-infected larvae of several other, more distantly related drosophilid flies (see asterisks in Figure 4), and similar cells have also been reported from Zaprionus indianus and Drosophila willistoni (Kacsoh et al., 2014; Salazar-Jaramillo et al., 2014). They were described as large cells that flatten out on a dissection slide (Kacsoh, 2012), though not as large or flat as lamellocytes (Salazar-Jaramillo et al., 2014).
It is possible that these lamellocyte-like cells correspond to the 'activated plasmatocytes' or to the 'lamellocytes type II' that have been observed in wasp-infected animals (Anderl et al., 2016; Cinege et al., 2021). Regardless of the status of such lamellocyte-like cells, two of the more prominent lamellocyte-specific marker genes, PPO3 and ItgaPS4, are uniquely present only in the genomes of the 'oriental' subgroups of the melanogaster species group (Figure 4, Figure 6, Figure 7). These are also the only species where typical lamellocytes have been found. PPO3 and ItgaPS4 both originate from gene duplication events in the ancestors of the 'oriental' species groups. It has been proposed that PPO3 originates from a duplication of an ancestral PPO2-like gene (Salazar-Jaramillo et al., 2014; Dudzic et al., 2015) (node 1 in Figure 6). A more detailed phylogenetic analysis supports this idea and suggests that the duplication happened before the split between the melanogaster and obscura species groups (Figure 6). The exact branching order is uncertain, and a likely scenario is that the duplication actually happened even later, in the immediate ancestors of the 'oriental' subgroups (node 2 in Figure 6), about 20 million years ago (Figure 4). The resulting tree (Figure 6-figure supplement 1) would fit with the present distribution of the PPO3 gene. Similar to the PPO3 gene, the lamellocyte marker ItgaPS4 originates from a series of gene duplications at about the time when lamellocytes first appeared on the scene. The ItgaPS4 gene encodes an integrin alpha subunit that is expressed on the surface of lamellocytes. It is closely related to ItgaPS5, which is also highly enriched in hemocytes, though primarily in plasmatocytes (Leitão and Sucena, 2015). A phylogenetic analysis (Figure 7) suggests that ItgaPS4 and ItgaPS5 originate from the duplication of a common ItgaPS4/5 precursor around 20 million years ago (Figure 4). Interestingly, the ancestral ItgaPS4/5 gene in turn comes from a prior duplication of an ItgaPS3-like gene. Note that the PPO3 homolog is pseudogenized in D. sechellia, and there is no trace of a PPO3 homolog in D. ficusphila. Consequently, although D. sechellia has lamellocytes, it is unable to encapsulate the eggs of parasitoid wasps (Kacsoh, 2012; Salazar-Jaramillo et al., 2014). D. ficusphila can encapsulate and kill parasites, but the capsules are not melanized (Kacsoh, 2012). While lamellocytes are in many ways unique, most drosophilids, like insects in general, have other specialized hemocyte types that participate in the encapsulation of parasites (Figure 8). It is an open question how these effector hemocytes are related to each other. Nematocytes, a characteristic class of very thin filamentous hemocytes, were first found by Rizki, 1953 in larvae of D. willistoni, and similar cells have later been found in D. hydei, Z. indianus, and many other drosophilids (Srdic and Gloor, 1893; Kacsoh et al., 2014; Bozler et al., 2017). Possibly related to the nematocytes are the multinucleated giant hemocytes (MGHs), first discovered in species of the ananassae subgroup (Márkus et al., 2015). MGHs have also been identified in Z. indianus, D. falleni, and D. phalerata, and perhaps D. grimshawi (Cinege et al., 2020; Bozler et al., 2017). The multinucleated giant hemocytes form huge and highly motile networks of fused hemocytes that ensnare parasite eggs.
Together with activated plasmatocytes, they form a capsule that envelops the parasite (Cinege et al., 2021). The wide distribution of nematocytes and/or MGHs among both distant and close relatives of D. melanogaster (Figure 4) suggests that they must have been present already in the common ancestor of all drosophilids. Yet another type of effector cell, the pseudopodocyte, has been described from species in the obscura subgroup (Havard et al., 2012). Pseudopodocytes are large plasmatocyte-like cells equipped with numerous long pseudopods, and they participate in the encapsulation of parasites. As the PPO3 and ItgaPS4 genes are markers for typical lamellocytes only, they are not informative about the possible homology between lamellocytes and other effector cells that can be found in species that lack lamellocytes.
[Figure 7 legend, fragment: the ItgaPS4 and ItgaPS5 sequences have only an A-form leader; a few partial or chimaeric forms were excluded from the analysis. Bootstrap values are percent support after 1000 replicates, using the Scaptodrosophila lebanonensis protein as outgroup.]
[Figure 8 legend, fragment: the lamellocyte drawing is after Rizki, 1957; the D. hydei nematocyte from Kacsoh et al., 2014; the Zaprionus indianus multinucleated giant hemocyte from Cinege et al., 2020; and the D. affinis pseudopodocyte from Havard et al., 2012. The primocyte illustration is based on published images of primocyte-like cells in adults (Boulet et al., 2021) and primocytes in the posterior signaling center (Krzemień et al., 2007; Mandal et al., 2007). The morphology of circulating larval primocytes is unknown.]
We have no single-cell transcriptomic data yet of hemocytes from drosophilids other than D. melanogaster, but recently the bulk transcriptome of D. ananassae multinuclear giant hemocytes was compared to that of other hemocytes in infected and uninfected larvae (Cinege et al., 2021). Strikingly, transcripts of one potential lamellocyte marker, the atilla ortholog, were found to be 300-fold enriched in the giant cells of wasp-infected larvae, compared to the circulating activated plasmatocytes in these larvae. However, this difference is mainly due to a very low expression of this gene in the latter cells. The atilla gene is otherwise also highly expressed in the naïve hemocytes of uninfected larvae, perhaps due to the presence of giant cell precursors. On the other hand, some lamellocyte-specific gene homologs, like the integrin Itgbn, are induced by infection in multinuclear giant hemocytes, and yet others, like Trehalase, are induced both there and in activated plasmatocytes. A substantial number of homologs of lamellocyte-specific genes are even downregulated after infection in one or both hemocyte classes. In conclusion, it is still difficult to judge if the limited overlap between gene expression in lamellocytes and multinuclear giant hemocytes is evidence of true homology or if it merely reflects an active role of these hemocytes.
Mosquitoes
Similarly, none of the mosquito hemocyte clusters described in the recent single-cell transcriptomic analyses (Severo et al., 2018; Raddi et al., 2020; Kwon et al., 2021b) show obvious homologies to the lamellocytes of D. melanogaster. Transcripts of atilla and CG15347 homologs are, for instance, not enriched in any hemocyte cluster (Raddi et al., 2020; Figure 5). However, we only have information from adult mosquitoes about hemocyte clustering, while lamellocytes and hemocytes with similar functions in other insects are typically found only in larvae.
Thus, it is too early to speculate about possible lamellocyte homologs in mosquitoes. Besides, parasitoid wasps are certainly less of a problem for aquatic larvae.
Silkworms
In lepidopterans, such as B. mori, hemocytes called plasmatocytes (not to be confused with Drosophila plasmatocytes) play a similar role as the lamellocytes in Drosophila (Strand, 2008), and it is possible that these cell classes have a common origin in the ancestor of these insects. In line with this idea, a number of lamellocyte markers are in fact shared between Drosophila lamellocytes and the Bombyx plasmatocyte clusters 14 and 15 (Feng et al., 2021), for instance, the homologs of α-Tubulin at 85E and β-Tubulin at 60D (Figure 5). However, the overlap is rather modest, and it could be due to convergent evolution. The Bombyx plasmatocyte cluster 14 also expresses mitosis markers, suggesting that unlike Drosophila lamellocytes these cells are mitotically active. Furthermore, a few crystal cell markers are also expressed in Bombyx plasmatocyte clusters 14 and 15 (Figure 5). Until we know the signaling pathways involved in their hematopoiesis, it will be difficult to judge if these cell types are related. One additional hemocyte class has been described in lepidopterans, the spherule cells, which lack dipteran counterparts. Spherule cells constitute one cluster in the study of Feng et al., 2021, cluster 19. While many crystal cell orthologs are found in the silkworm oenocytoid clusters, a few are to some extent expressed in the spherule cell cluster 19 (Figure 5). These more promiscuous markers also tend to be expressed in the lepidopteran plasmatocyte clusters 14 and 15, making them less useful as indicators for a relationship to Drosophila lamellocytes or crystal cells.
Primocytes
Few of the primocyte markers are enriched in any particular hemocyte cluster in mosquitoes or silkworms, and there are no convincing candidates for a primocyte class in these species. Intriguingly, as indicated in Figure 9, homologs of the key markers of primocytes, knot and Antennapedia, are enriched in the Anopheles hemocyte cluster 8 of Kwon et al., 2021b, but there is no indication in the other studies that these genes are expressed in any of the hemocyte classes. Cluster 8 is otherwise the candidate of Kwon et al. for being oenocytoids.
Plasmatocytes/granular cells
Drosophila plasmatocytes have often been compared to hemocytes called granular cells in other insects since they are the major phagocytes in the respective insect groups. As a test for possible homology of these cell classes, we investigated if orthologs of our tentative Drosophila plasmatocyte markers were significantly enriched in particular hemocyte clusters of Anopheles and Bombyx. As shown in Figure 9, such orthologs were generally enriched in one or more of the five Bombyx hemocyte clusters 7, 0, 4, 17, and 10, all of which were classified as granular cells in Bombyx (Feng et al., 2021). The results from two of the Anopheles studies also support the same conclusion. Many Drosophila plasmatocyte markers are enriched in the granular cell-like PPO6 low cluster of Severo et al., 2018, and in the main granular cell clusters HC2, HC3, and HC4 of Raddi et al., 2020. The results of Kwon et al., 2021b are less clear.
Regarding the many suggested subclusters of Drosophila plasmatocytes (Cattenoz et al., 2020; Tattikota et al., 2020; Fu et al., 2020; Leitão et al., 2020; Cho et al., 2020; Girard et al., 2021), most of them lack equivalents in mosquitoes or silkworms, as might be expected since they were not reproducibly found even in Drosophila. However, the orthologs of several markers for mitotic cells in Drosophila (Figure 1-source data 1) could be identified in the Bombyx granular cell cluster 4 (Figure 9-figure supplement 1), which probably includes the mitotically active fraction of granular cells in that species (Feng et al., 2021). Furthermore, as noted by Raddi et al., 2020, a minor cluster of granular cells in Anopheles, cluster HC6, overexpressed antimicrobial peptides, much like some minor plasmatocyte clusters in Drosophila. By contrast, most or all granular cell subclusters (7, 0, 4, 17, and 10) in Bombyx express antimicrobial peptides, but only the members of the cecropin B class and one of the gloverins. Apparently, these peptides are constitutively expressed in silkworm granular cells, not acutely induced like in Drosophila and Anopheles (Figure 9-figure supplement 1).
Figure 9. Orthologs of Drosophila primocyte and plasmatocyte markers expressed in mosquito and silkworm hemocyte clusters. Details as in Figure 5. Clusters judged by the respective authors to be (lepidopteran) plasmatocytes are labeled red, oenocytoids blue, granular cells green, spherulocytes gold, and prohemocytes grey.
Experimental problems
Some experimental difficulties will have to be dealt with in future studies. One is the fragility of the crystal cells, which is a likely cause of the low yields of these cells in some of the studies. Crystal cells are not easy to collect, and the problem may have been exacerbated by the violent pretreatment of the larvae, intended to force the release of sessile cells. The same is true for the oenocytoids in other insects, especially for mosquitoes, which have to be perfused in order to get a reasonable yield. Other artifacts may be caused by the habit of plasmatocytes and granular cells to phagocytose fragments of other cells. Such fragments are generated when crystal cells/oenocytoids release their contents. Cell fragments may also be generated in the turnover of superfluous lamellocytes and in the autolytic disruption of larval tissue in preparation for metamorphosis. When such fragments are internalized or attached to plasmatocytes, they will contaminate the transcriptional profile of these cells. This could explain the unexpected presence of markers for crystal cells, oenocytoids, lamellocytes, or fat body cells in the plasmatocyte or granular cell transcriptomes. Further experiments will be required to resolve these issues. The lack of reproducibility in the subclustering of Drosophila plasmatocytes may be due to experimental details that were not common to all laboratories. The pooling of data from parasitized and unparasitized animals may have introduced further variability. The outcome of the subclustering may also be dependent on different parameters chosen for the clustering algorithms. The yield is a problem of its own. Most of the Drosophila studies were done with 15,000-20,000 hemocytes or more (Figure 1-figure supplement 1), which seems sufficient.
Fu et al. assayed a smaller number, 3424, but they tested fewer conditions. Similarly, Feng et al. analyzed over 20,000 cells from the silkworm. The mosquito studies have struggled with smaller numbers. Raddi et al. assayed over 5000 cells, but Kwon et al. and Severo et al. had to make do with 262 and 26 cells, respectively. This means that stochastic errors become serious, and that rare hemocyte classes will be missed. These results must therefore be regarded as tentative. Throughout, we were surprised by the relatively modest levels of enrichment ('FC values') reported for many purportedly cell-type-specific transcripts. Part of the explanation may be that the borders between clusters become blurred when cells gradually activate different programs or initiate transdifferentiation, or when too many subclusters are recognized. Standard bioinformatic algorithms also tend to underestimate differences in gene expression. In order to avoid zero denominators, a constant value (typically 1) is usually added to all standardized read counts (RPKM). This gives conservative and more reliable estimates of statistical significance, but the FC values will systematically be underestimated, and the problem will become larger when the total number of reads is small (a brief numerical illustration of this effect is given at the end of this review).
Conclusions and outlook
Our analysis of the recently published single-cell transcriptomic studies shows that Drosophila plasmatocyte heterogeneity is not due to the presence of distinct and reproducibly occurring cell classes. Rather, plasmatocytes are flexible: they have a capacity to engage in different tasks, such as production and reshaping of extracellular matrix, phagocytosis of cell debris and microbes, encapsulation of parasites, etc. (Mase et al., 2021), and to adjust their activity accordingly. The resulting heterogeneity is gradual, transient, and probably reversible, and it does not result in the formation of separate well-defined classes. A similar functional plasticity is also seen in vertebrate myeloid cells (Galli et al., 2011). In a broad sense, this capacity may be inherited from the phagocytes of early metazoans, but the more specific adaptations of these plastic cells have probably evolved independently, considering the over 600 million years of separate evolution of insects and mammals (Cunningham et al., 2017). On the other hand, insect plasmatocytes/oenocytoids and vertebrate myeloid cells have several basal functions in common. They can phagocytize microbes and apoptotic cells, and they can detect and react to the presence of specific microbe-associated molecular patterns. These functions are probably very old, going back to the very first animals, and even to bacteria-eating protists (Gilmore and Wolenski, 2012; Franzenburg et al., 2012; Wenger et al., 2014; Menzel and Bigger, 2015; Emery et al., 2021). In Hydra, a jellyfish relative, these functions are carried out by phagocytic epithelial cells in the gut, while in anthozoans like Swiftia, Pocillopora, and Nematostella it is done by specialized motile immunocytes (Metchnikoff, 1892; Bosch et al., 2009; Franzenburg et al., 2012; Menzel and Bigger, 2015; Snyder et al., 2021). In order to meet special needs of immunity and wound healing, plasmatocytes can terminally transdifferentiate to become crystal cells (oenocytoids) or lamellocytes. Oenocytoids were probably present already in the first insects, and a subclass of phenoloxidase-expressing mobile cells has even been described from the coral Swiftia exserta (Menzel and Bigger, 2015).
By contrast, lamellocytes are products of recent and very rapid evolution. The arms race with parasites like the parasitoid wasps has brought forward a plethora of different types of highly specialized effector cells among the drosophilid flies, and typical lamellocytes can only be found in a subset of species in the genus Drosophila. One novel and distinct class of hemocytes did come out of the transcriptomic studies, the primocytes. They populate the posterior signaling centers of the lymph glands, but they also appear to circulate freely in the hemolymph. It is possible that circulating or sessile primocytes play a similar role for the activation of peripheral hemocytes as the posterior signaling centers do for the cells in the lymph glands. The expression of Antennapedia suggests that primocytes may have an origin separate from that of other hemocytes. A comparison of Drosophila crystal cell transcriptomes with oenocytoid data from Anopheles and Bombyx gives strong support for the long suspected homology of these cell types. Similarly, Drosophila plasmatocytes are most likely homologous to the granular cells of other insects. Unlike these well-conserved hemocyte classes, the designated effector cells of the immune defense seem to undergo very rapid evolution, generating formidable entities such as lamellocytes, multinucleated giant cells, and lepidopteran plasmatocytes. The transcriptomic studies published so far provide a rich source of data, and further analysis can probably yield even more information. For instance, how are the changes in morphology and activity reflected in the transcriptomes of the 'activated plasmatocytes' in infected larvae? And, is it possible already from existing data to generate better catalogs of plasmatocyte and granular cell transcriptional markers?
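As a numerical footnote to the fold-change caveat raised under 'Experimental problems' above, the short Python sketch below uses hypothetical normalized counts and a pseudocount of 1 to show how the reported fold change is compressed relative to the true ratio, and how the compression grows as counts shrink. The values are illustrative only and are not taken from any of the cited studies.

```python
# Illustration of how a pseudocount compresses fold-change (FC) estimates.
# The counts below are hypothetical normalized expression values.
pairs = [
    (200.0, 20.0),   # high expression: 10-fold difference
    (20.0, 2.0),     # moderate expression: 10-fold difference
    (2.0, 0.2),      # low expression: 10-fold difference
]

pseudocount = 1.0
for cluster, rest in pairs:
    true_fc = cluster / rest
    reported_fc = (cluster + pseudocount) / (rest + pseudocount)
    print(f"counts {cluster:>6} vs {rest:>4}: "
          f"true FC = {true_fc:.1f}, pseudocounted FC = {reported_fc:.1f}")

# The pseudocounted FC falls from about 9.6 to about 2.5 as the counts shrink,
# even though the underlying ratio is 10 in every case.
```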
Annual reproductive cycle and fecundity of Scorpaena notata (Teleostei: Scorpaenidae)*
INTRODUCTION
The family Scorpaenidae is of particular interest from the reproductive point of view, since it is made up of species with a wide variety of reproductive strategies, ranging from the most basic oviparity to matrotrophic viviparity. Wourms and Lombardi (1992) consider that, specifically within the genus Scorpaena, there is a shift from a primitive to a specialized mode of oviparity. Fertilization is still external and development is ovuliparous, but eggs are embedded in a gelatinous matrix. Scorpaena notata Rafinesque 1810 is a common species in rocky coastal habitats, and is found at depths of up to 700 meters. It is the object of semi-industrial and small-scale fishing (Whitehead et al., 1986; Fischer et al., 1987). It appears in the Mediterranean Sea and adjacent areas of the Atlantic, Madeira, the Azores and the Cabo Verde Islands. The southern limit of distribution seems to be Senegal (Eschmeyer, 1969). Until recently, most aspects related to its reproduction were unknown, but a recent study into its ovarian structure and process of oogenesis (Muñoz et al., 2002a) revealed that it is an ovuliparous species that shows several features that can be considered intermediate between the most basic oviparity and the first transitional stages towards viviparity. Muñoz et al. (2002b) have already examined the histology of this species and the ultrastructure of the testes and the spermatogenic phases. These authors showed that the gonadal structure is intermediate between the restricted and unrestricted spermatogonial types of testes defined by Grier (1981, 1993) and that the spermatogenesis is semicystic, which has been described in very few species of fish. In this study, we aim to provide an in-depth analysis of the annual cycle of S. notata by studying the seasonal histological changes of the gonads and of various indices related to its reproduction. We also estimate and explain the fecundity of this species in relation to its specialized reproductive strategy.
MATERIALS AND METHODS
For the description of the different stages of maturity of the gonads, the ovaries and the testes were embedded in Histosec 56-58 °C (Merck) or in hydroxyethyl methacrylate, and were sectioned at between 4 and 10 µm, depending on the sex and the stage of maturity. Transverse and longitudinal sections were obtained for both sexes. The following stains were used for the samples kept in Histosec: haematoxylin-eosin for general histology; Mallory as a trichrome; PAS reaction (periodic acid-Schiff) for the demonstration of neutral mucopolysaccharides; and Alcian blue for acid mucopolysaccharides. The samples kept in methacrylate were stained with methylene blue-basic fuchsin, toluidine blue, and also PAS. The stages of development in the oocytes were determined by following the criteria established by Wallace and Selman (1981) and West (1990). The ovaries were classified according to the most developed type of oocyte (West, 1990). The development stage of the testis was determined following the criteria laid down by Grier (1981). In order to study the indices related to reproduction, we used 507 individuals of Scorpaena notata which were caught during a year-long period at various ports along the Costa Brava (northwest Mediterranean). The fish were fixed immediately after capture in 10% formaldehyde and preserved in 4% formaldehyde. The following parameters were analysed:
- Sexual dimorphism. The total standard lengths and weights of the males and females were compared using an analysis of variance (ANOVA).
- Sex ratio (SR = number of males / number of females) and the monthly variations of this index. We determined if the result was significantly different from 1 by means of the χ² test.
All the indices were calculated as a function of the eviscerated weight, in order to avoid possible variations arising from differences in the digestive tract contents or energy reserves of the specimens. We calculated the indices, separating the fish according to sex and month of capture. Later, we observed any significant differences in the monthly variations by means of an analysis of variance (ANOVA). All the statistical analyses in this section were carried out in line with the criteria set out by Sokal and Rohlf (1995) with the programme SPSS 12.0S for Windows. Fecundity was estimated in 45 females using the gravimetric method (Burd and Howlett, 1974; Hunter et al., 1985). The oocytes were separated by introducing samples of completely mature ovaries into Gilson's solution, as modified by Simpson (1951). The eggs were then filtered and, once they had been sorted into different diameters, they were counted. We repeated the process twice for each ovary. The individual or absolute fecundity refers to the number of eggs produced per female per year (Wootton, 1979), and can be defined as the number of mature oocytes present in the ovary immediately before spawning (Bagenal, 1973). In species that use multiple spawning, it is the number of oocytes destined for spawning, i.e. the ones that will mature during the current reproductive cycle, which are usually taken into account (Aboussouan and Lahaye, 1979). Therefore only oocytes with a diameter greater than the oocytes at the cortical alveoli stage were taken into account, since only these are considered to have been released in this reproductive cycle. This absolute fecundity tends to increase according to the size and age of the fish. Therefore, in order to facilitate the comparison we also calculated the relative fecundity, i.e.
the number of eggs per unit eviscerated weight (Bagenal, 1978). In order to study the relationship between the fecundity and the size or the total weight of the individual, we used linear regression analysis on the logarithms, log Y = log a + b·log X. This is calculated using the least squares method and corresponds to a power function of the type Y = a·X^b (a brief numerical sketch of this type of fit is given at the end of this article). The significance levels are the same as we described above. Finally, we also determined the frequency distribution of the egg diameters.
RESULTS
Testes
The lobular structure of the testes can be seen clearly during November and December, since they are in the spermatogonial proliferation period. According to Grier (1981), they are characterized by the fact that the lobular lumens only contain a few spermatogonial cysts here and there on the periphery, which are always enclosed in the Sertoli cells which cover the inside of the seminiferous lobule. During the months between January and May, the testes are in the early recrudescence period: the lobular lumens are full of spermatocytes, and usually free of cysts (Fig. 1A). In May, the testes enter the mid-recrudescence period, and now contain germinal cells in all stages of development: spermatogonia, spermatocytes and spermatids. Already little groups of free spermatozoa can be seen in the lobular lumen (Fig. 1B). From June onwards, the lobules still show all the cited stages, but especially spermatids in various stages of development as well as spermatozoa: the testes are in the late recrudescence period (Fig. 1C). During the functional maturity period, which occurs from July to September, the lobules and all the ducts are full of sperm. A great quantity of PAS-negative substance is detected within the lobular lumens (Fig. 1D). Finally, in September, there are no spermatozoa in any ducts or other regions of the testes because they are now in the post-spawning period. Spermatogonia become more and more abundant.
Ovaries
From November to March, oogonia and oocytes at various stages of development can be observed. This is the period of previtellogenesis (Fig. 1E). The vitellogenic period begins approximately in June, when the largest oocytes exhibit yolk granules that grow progressively (Fig. 1F). During the period of maturation, between July and October, many mature oocytes full of yolk granules, as well as oocytes with a migrated germinal vesicle and hydrated oocytes, are detected (Fig. 1G). The ovary also contains postovulatory follicles, so it can therefore be assumed that spawning takes place within this period. During the periods of vitellogenesis and spawning, the internal epithelium of the ovarian wall has cytoplasmic projections and the lumen of the ovary contains PAS-positive ovarian fluid, which is particularly abundant and viscous during spawning (Fig. 1H). Table 1 shows the averages obtained for the standard lengths and total weights of the males and females of Scorpaena notata. There are no significant differences between the values obtained for the two sexes.
Reproductive indices
The annual and monthly sex ratio values are shown in Table 2: 60.7% of the 507 individuals are male and the remaining 39.3% female, so the sex ratio is 1.5, which differs in a highly significant way from 1 (χ² = 24.434, d.f. = 1, p = 0.000). Figure 2 shows the annual development of the various indices we analysed in relation to the phase the gonad is in.
The gonadosomatic index (GSI) shows highly significant differences for both sexes (ANOVA, p=0.000), and the maximum values appear between June and October in males, and between July and September in females. For the hepatosomatic index (HSI), which also shows highly significant monthly changes (ANOVA, p=0.000), the highest values appear from January to May in males, and from January to July in females. The condition factor of the individuals we studied (K) showed highly significant differences among the males but no significant differences among the females (ANOVA, p=0.000 and p=0.256, respectively). The profile of the condition factor is not shown in Figure 2 because in both sexes the annual development of the index is not very marked, with only slight decreases during late summer. Mesenteric fat was not detected in any of the specimens analysed.
Fecundity
The results obtained for absolute and relative fecundities are presented in Table 3. The distribution of the eggs in terms of frequencies of diameters is relatively open and often marked by two peaks. The relationship of the absolute and relative fecundity with the size and total weight of the specimens was significant in all cases.
DISCUSSION
During November, both the ovaries and the testes of Scorpaena notata are in the initial phases of development. In January, the hepatosomatic index (HSI) of the females begins to increase continuously and constantly, reaching much higher values than in the case of the males, but this is a common feature in many species of fish. The males enter a phase of mid-recrudescence in May, which leads to a progressive increase in testicular activity as well as a significant decrease in their HSI. In the case of the females, the growth in GSI is more sudden and occurs later, beginning in July, which also brings with it a sharp fall in HSI in the following month. The transitory increase in the relative weight of the liver just before the increase in weight of the gonads suggests mobilization and processing of fatty acids and carbohydrates (Bruslé and González, 1996). The spawning period of the scorpionfish is clearly delimited between July and October, a period which coincides with, although it is slightly longer than, the period given for the Gulf of Lion (Duclerc and Aldebert, 1968) and the Algiers region (Siblot-Bouteflika, 1976). On the other hand, Fischer et al. (1987) stated that the reproduction of this species in the Mediterranean probably occurs in May, a hypothesis which we consider to be erroneous. The condition of the individuals does not seem to be affected much by reproductive activity. The explanation probably lies in the major importance of the liver as an organ for storage, since no mesenteric fat reserves were detected, nor were there seasonal changes in alimentary behaviour (Harmelin-Vivien et al., 1989) that could alleviate such a drain on resources. In the same vein, Shchepkin (1971) found that the liver of S. porcus had a high fat content compared to the muscles, and pointed out the major role of the liver as an organ for storing reserves to be used during reproduction. This information and the results about the evolution of the HSI obtained here suggest that the liver of the scorpionfish is the main provider of energy during the maturation and spawning periods. The population we studied of Scorpaena notata is clearly dominated by males.
The proportion of males is also higher in the Algiers region (Siblot-Bouteflika, 1976) and in Marseille (Kaim-Malka and Jacob, 1985), although the difference is not as marked as it is in this study. In contrast, Bradai and Bouain (1991) studied the reproduction of two species of the same genus and found that, while S. scrofa had similar numbers of males and females, the population of S. porcus was clearly dominated by females, especially among larger-sized individuals. These authors felt that the results indicated a faster growth rate in the S. porcus females, a characteristic also attributed to S. guttata (Love et al., 1987). In the case of S. notata, the large inequality in the sex ratio cannot be connected with sexual differences in growth rates, since male dominance appears above all in medium-sized individuals (Muñoz et al., 1996). Furthermore, the morphometric analysis carried out showed no significant variations in size between males and females. However, the sex ratio obtained from a monthly population analysis also rules out the segregation of the sexes during spawning, a behaviour which in some species leads to sex ratios that differ from unity (deMartini and Fountain, 1981; Alheit et al., 1984; Barbieri et al., 1992; among others). One possible explanation would be the existence of different distribution patterns between males and females, but there is insufficient data to confirm this idea. When Scorpaena notata spawns, the number of eggs per female ranges from approximately 6000 to 33000. The analysis of the fecundity of another species of the same genus, S. porcus, carried out by Bradai and Bouain (1991), gave similar figures for total egg number and distribution of frequencies of diameters. The distribution of oocytes in various stages of development indicates that spawning is multiple, in such a way that the release of the more mature group is followed by the development and spawning of the following group. If we compare similar-sized individuals of S. notata captured at the beginning and end of the reproductive period (examples with standard length = 140 mm from July and examples with standard length = 141 mm from September), we can see that the total egg number per female decreases as the spawning period goes on, a characteristic feature of a species with a determined fecundity (Greer Walker et al., 1994). At the same time, the decrease in relative fecundity would also make the worsening condition of the individuals clear, as a result of the drain on resources brought about by reproduction. These tendencies remain even if the individuals from September are larger (examples with standard length = 149 or 152 mm), despite the fact that fecundity increases significantly as the standard length and the weight of the fish increase. It should be noted that the fecundity of S. notata is relatively low compared with other species from the same order, Scorpaeniformes, whether they are typically oviparous species, such as Trigla lyra (Muñoz et al., 2002c), with a maximum number of eggs which our own studies found to be around 108000 per female (Muñoz, 2001), or whether they are species of a much more viviparous nature, such as the zygoparous species Helicolenus dactylopterus, with a maximum egg count of 87000 per female (Muñoz and Casadevall, 2002). This low fecundity may be related to the specialized reproductive behaviour of the species studied in this paper. S.
notata releases the spawn within a gelatinous mass secreted by the internal epithelium of the ovarian wall, which may have various functions: it enables the spawn to float, and provides mechanical protection and defence against predators (Muñoz et al., 2002a). However, if we combine the data we obtained with the data from our previous work on testicular structure and spermatogenesis of the same species (Muñoz et al., 2002b), another function of this gelatinous mass, which is perhaps the most important, becomes apparent: it keeps the spawn together. The abundant, viscous seminal fluid probably keeps the sperm together when they are released. If they are released onto the grouped mass of eggs within the gelatinous matrix, fertilization is assured, reducing the need for the female to produce numerous eggs, which would explain the low fecundity of the species. Observations on mating scorpionfish during certain times of the year seem to bear out this hypothesis. It has been observed that the maximum egg diameter of Scorpaena notata is about 500 µm. This figure was obtained both by histological measurement and after fixation in Gilson's solution. However, it must be pointed out that the eggs of the same species that were found in the sea, floating within the gelatinous matrix, measured between 760 and 880 µm (Spartà, 1956; Kimura et al., 1989). This difference in size is probably partly due to the decrease in volume of the oocytes that occurs when they are fixed in formalin (between 0 and 10% according to Fleming and Ng, 1987; Hislop and Bell, 1987; Lowerre-Barbieri and Barbieri, 1993), as well as to the significant increase in volume in the eggs of some species when they come into contact with the marine environment.
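As flagged in the Materials and Methods, the fecundity-size relationship was fitted as Y = a·X^b by least squares on log-transformed values. The Python sketch below illustrates that procedure on entirely hypothetical length-fecundity pairs; the numbers are placeholders and are not the measurements analysed in this study.

```python
import numpy as np

# Hypothetical (standard length in mm, absolute fecundity in eggs) pairs,
# loosely in the range reported for S. notata; NOT the actual data set.
length_mm = np.array([110, 120, 130, 140, 150, 160], dtype=float)
fecundity = np.array([7000, 9500, 13000, 17000, 23000, 30000], dtype=float)

# Fit log Y = log a + b * log X by ordinary least squares.
b, log_a = np.polyfit(np.log10(length_mm), np.log10(fecundity), deg=1)
a = 10 ** log_a

print(f"fitted model: Y = {a:.3g} * X^{b:.2f}")

# Predicted fecundity for a 145 mm fish under the fitted power law.
print(f"predicted fecundity at 145 mm: {a * 145 ** b:.0f} eggs")
```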
On the Optimal Selection and Integration of Batteries in DC Grids through a Mixed-Integer Quadratic Convex Formulation
Abstract: This paper deals with the problem of the optimal selection and location of batteries in DC distribution grids by proposing a new mixed-integer convex model. The exact mixed-integer nonlinear model is transformed into a mixed-integer quadratic convex (MIQC) model by approximating the product among voltages in the power balance equations as a hyperplane. The most important characteristic of our proposal is that the MIQC formulation ensures that the global optimum is reached via branch & bound methods and quadratic programming, since each combination of the binary variables generates a node with a convex optimization subproblem. The objective function is formulated as the minimization of the energy losses for a daily operation scenario considering high renewable energy penetration. Numerical simulations show the effectiveness of the proposed MIQC model in reaching the global optimum of the optimization model when compared with the exact optimization model in a 21-node test feeder. All the validations are carried out in the GAMS optimization software.
Introduction
Electrical distribution networks have experienced important paradigm shifts associated with the large-scale insertion of renewable generation based on photovoltaic and wind sources in conjunction with energy storage technologies, mainly focused on chemical storage [1-4]. All of these devices are interfaced with the distribution network through power electronic converters with alternating current (AC) and direct current (DC) conversion stages, depending on the device interconnected and the technology with which the distribution grid operates, i.e., an AC or DC network [5,6]. From the beginnings of electrical networks to the present day, the predominant technology for building electrical networks at the transmission level has been AC, because most of the demands connected to the networks were simple, i.e., rotating electrical machines (induction motors), temperature conditioning, and illumination, among others; however, the nature of the loads has drastically changed with the appearance of computers, electric vehicles, household appliances, and small dispersed generation and storage systems, most of them operated with DC technologies [7-10]. Recent studies have demonstrated the advantages of operating medium-voltage distribution networks with DC technologies, owing to important reductions in the energy losses associated with the distribution activity, since reactive power is compensated directly at the point of connection of the load [11,12]. Recognizing the growing relevance of distribution networks operated with DC technologies, this research explores the impact of the integration of battery energy storage systems (BESSs) in these grids, considering the high penetration of renewable generation under an economic dispatch environment [13,14]. In the current literature, the problem of the optimal integration of BESSs has been explored in multiple studies for AC and DC grids; here, we summarize some of them. The authors of [15] have presented a mixed-integer nonlinear programming (MINLP) model to represent the problem of the optimal location of BESSs in AC distribution grids.
To solve this model, a decomposition method was proposed in which the MINLP model is divided into two optimization sub-problems, named planning and operation. The planning stage defines the optimal location of the BESSs, while the operation stage is entrusted with their optimal operation. To solve the planning stage, the classical simulated annealing algorithm is implemented, based on sensitivity indexes associated with the impedance matrix of the network. Numerical validations were carried out in two test feeders composed of 135 and 230 nodes; however, the authors did not provide comparisons with other optimization methodologies to confirm the effectiveness of the proposed approach. In Reference [16], the authors addressed the problem of the optimal location-reallocation of batteries in DC microgrids by solving the exact MINLP model in the GAMS software. Numerical results were presented for a 21-bus system, considering that the initial location of the batteries was heuristically defined by the utility company. The authors do not provide any comparative methodology to demonstrate the efficiency of the proposed optimization methodology, since the research is presented in a tutorial style. Soroudi, in Reference [13], presented different optimization models for the optimal operation of BESSs in electrical AC networks. Three models were presented, corresponding to (i) the economic dispatch model, (ii) the DC equivalent of the AC grid, and (iii) the complete AC model of the grid. All these optimization models were solved in the GAMS optimization package; nevertheless, no comparison with metaheuristic or approximated optimization models was provided, since the intention of the author is to provide optimization tools that introduce engineers to power system optimization topics from a tutorial point of view. The authors of [17] presented a master-slave optimization methodology to operate batteries in DC networks considering multiple load curves and high renewable generation availability. The master stage used a particle swarm optimizer to define the optimal operation of the batteries during the day, while the slave stage was entrusted with solving the multi-period power flow problem. The objective pursued by the authors corresponded to the minimization of the energy purchasing cost at the substation node. The proposed methodology was tested in a test feeder composed of 21 nodes, and its efficiency was compared with different metaheuristic approaches such as the black-hole optimizer and the genetic algorithm. In Reference [18], the authors proposed the implementation of a genetic algorithm to select and operate BESSs in AC distribution networks. The genetic algorithm was entrusted with determining the size and the operation scheme of the batteries using an integer codification with three possible operating states. Numerical results obtained on the Baran & Wu test feeder composed of 69 nodes [19] demonstrate that the total grid energy losses are reduced when batteries are installed; however, the authors did not provide comparisons with exact or metaheuristic optimizers to confirm the effectiveness of the proposed optimization approach.
Other authors have proposed multiple operative models to coordinate the daily operation of the batteries; some of these approaches are: mixed-integer linear programming [20][21][22]; second-order cone optimization [23][24][25]; semidefinite programming [26]; genetic algorithms [27][28][29]; particle swarm optimization [30,31]; nonlinear programming [32][33][34][35][36]; and reinforcement learning for energy system optimization [37,38]. The main characteristic of those works is that the batteries are modeled through a linear relation between the state-of-charge and the amount of power injected into or absorbed from the grid [11]; this linear representation allows the optimal dispatch of these batteries to be solved efficiently in AC and/or DC grids where the batteries have previously been located in the network. To contribute to the research area associated with the optimal integration and operation of BESSs in electrical networks, here we propose an efficient mixed-integer quadratic convex (MIQC) model to select and size batteries in DC networks. This corresponds to an improvement of the mixed-integer nonlinear programming (MINLP) model proposed in [16], with the main advantage that the optimal location and coordination of the batteries correspond to the global optimal solution of the problem, since the MIQC model ensures that this solution is found through the application of branch & bound and interior-point methods [39]. To verify its efficiency, the proposed MIQC model is evaluated in a DC network composed of 21 nodes and its results are compared with the exact MINLP model implemented in the GAMS software. To demonstrate the novelty of the proposed convex reformulation to select and locate batteries in DC grids, Table 1 summarizes the main literature reports, highlighting the type of mathematical model, the solution technique, and the objective function considered. Table 1. Main approaches reported in the specialized literature.
Math. Model | Objective Function | Solution Method | Ref.
MINLP | Minimization of the grid generation costs | General algebraic modeling system | [34]
MINLP | Minimization of energy losses costs and investment costs | Sensitivity index combined with simulated annealing | [15]
MINLP | Minimization of energy losses costs | General algebraic modeling system | [16]
MINLP | Minimization of energy losses costs | Genetic algorithms and multiperiod power flow | [18]
NLP | Simultaneous minimization of energy losses costs and greenhouse gas emissions | General algebraic modeling system | [40]
NLP | Minimization of grid generation costs | General algebraic modeling system | [13]
NLP | Minimization of energy losses costs | Particle swarm optimization and multiperiod power flow | [17]
LP | Minimization of grid operation costs and greenhouse gas emissions | Stochastic linear programming | [20]
MILP | Minimization of operating costs by promoting self-consumption | General algebraic modeling system | [22]
MILP | Minimization of operative costs in microgrids | Simulation scenarios in the CPLEX solver | [21]
MICP | Minimization of the grid expansion planning costs | CPLEX solver in the AMPL software | [23]
SDP | Minimization of the grid generation costs | Convex solvers in the CVX environment for MATLAB | [26]
SOCP | Minimization of the grid generation costs | Convex solvers in the CVX environment for MATLAB | [25]
Note that the literature reports in Table 1 show that the most common objective functions are associated with the minimization of the grid operating costs and the energy losses costs, which supports the objective function considered in this research to select and integrate BESSs in DC grids. In addition, the proposed MIQC reformulation of the exact MINLP model has not previously been proposed for AC or DC grids, a gap in the scientific literature that this investigation aims to fill. The remainder of this document is organized as follows: Section 2 presents the exact MINLP formulation of the problem of the optimal selection and operation of BESSs in DC grids, considering as objective function the minimization of the energy losses costs during the period of operation; Section 3 presents the proposed MIQC reformulation, obtained with a Taylor linearization of the product of voltages in the power balance equations; Section 4 presents the main characteristics of the 21-bus system used to validate the proposed optimization methodology; Section 5 presents the main numerical results of the comparison between the exact MINLP and the proposed MIQC models, including their analysis and discussion; finally, Section 6 lists the main conclusions obtained from this research. General Formulation The study of the optimal siting and selection of BESSs in DC microgrids considering high penetration of renewable sources corresponds to a mixed-integer nonlinear programming formulation that can be represented as a multi-period economic dispatch [15,41]. The nonlinear part of the optimization model is defined by the product of voltages in the power balance constraints [42], while the binary nature is associated with the variables that define whether or not a BESS is located at an arbitrary node of the network [16]. Objective Function Here, we present the exact MINLP model for optimally locating-reallocating BESSs in DC networks, considering as objective function the minimization of the costs of the daily energy losses.
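The objective function referenced as Equation (1) can be written out as follows; this is a reconstruction built from the variable definitions given immediately below (the set and symbol notation is assumed), not a verbatim copy of the source equation:

```latex
% Reconstructed daily energy-losses cost; notation assumed from the surrounding definitions.
f_{1} \;=\; \sum_{t \in \mathcal{H}} \; \sum_{i \in \mathcal{B}} \; \sum_{j \in \mathcal{B}}
\mathrm{CoE}_{t}\, v_{i,t}\, G_{ij}\, v_{j,t}\, \Delta t
\qquad (1)
```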
where f1 is the value of the objective function related to the total costs of the daily energy losses; CoE_t is the average cost of the energy in the spot market; v_{i,t} and v_{j,t} are the voltage variables at nodes i and j during period t, respectively; G_ij corresponds to the conductance parameter that relates nodes i and j; and Δt represents the length of the time period in which demand and generation are assumed constant (this parameter can be 1 h, 30 min, or 15 min, depending on the data resolution). It should be noted that H and B are the sets that contain all the periods of time and all the buses of the network, respectively. Remark 1. The main characteristic of the objective function f1 is that it corresponds to an algebraic sum of quadratic terms with the advantage that their combination is convex, since the conductance matrix G is positive definite [43]; this implies that the objective function can be rewritten as f1 = Σ_{t∈H} CoE_t Δt (v_t^T G v_t) (2), where v_t is the vector that contains all the nodal voltages at each period of time. It is worth mentioning that the minimization of the objective function (2) involves the improvement of the voltage profiles in all the nodes of the network, since the total grid energy losses are a nonlinear function of the voltage profiles. In addition, due to the presence of the conductance matrix in this objective function, the usage of renewable energy is in general maximized, since the local power injections of the renewables reduce the current magnitudes provided by the slack source. For this reason, and as confirmed by Table 1, in this research the minimization of the total costs of the energy losses is selected as the performance index for our proposed MIQC model. Set of Constraints The problem of the optimal selection and operation of BESSs in DC grids includes multiple constraints: power balance, state-of-charge (SoC) of the batteries, devices' capabilities, voltage regulation bounds, and the maximum number of BESSs that can be installed along the grid [17], among others. The complete set of constraints considered in this paper, (3)-(12), covers these aspects; in particular, the cardinality constraint limits the number of BESSs, where the corresponding right-hand-side parameter is the maximum number of BESSs available to be introduced in the DC grid, and E is the set that includes all the available BESS technologies. Model Interpretation The exact MINLP formulation (1)-(12) can be understood as follows: Equation (1) and its matricial form defined in (2) correspond to the objective function of the optimization problem, which defines the daily cost of the energy losses associated with the energy dissipation in all the lines of the distribution system. Equality constraint (3) defines the power balance at each node of the network, which results from the application of the nodal voltage method to a grid with constant power consumption; Equation (4) defines the linear relation between the state-of-charge of a battery and the power it delivers or consumes [13]; Equations (5) and (6) define the operative characteristics for operating BESSs, namely the desired initial and final SoCs, typically defined by the distribution company. Equations (7)-(9) define the upper and lower power limits for the slack source, the dispersed generation sources, and the BESSs, respectively. In Equation (10), the voltage regulation constraints are set. Inequality constraint (11) presents the upper and lower bounds for the SoCs of the BESSs; Equation (12) defines the maximum number of BESSs available for installation in the DC distribution grid. Remark 2.
The main complication of the MINLP model (1)-(12) corresponds to the power equilibrium constraints (see Equation (3)), since these are non-convex owing to the products of continuous variables, i.e., the voltages at all the nodes [42]. This set of equality constraints (the power balance equations) will be treated with a Taylor-based linearization that transforms the MINLP model into an MIQC model, which is the main contribution of this work, as presented in the next section. Mixed-Integer Quadratic Reformulation To deal with the MINLP model that represents the problem of the optimal selection and location of batteries in DC distribution grids, we present the proposed MIQC reformulation, based on the linearization of the product of the two continuous variables present in the power balance constraint (3). This linearization is based on the convex representation of the load flow problem for DC distribution grids proposed in [44], and is detailed as follows. Let f be the function that represents the product of two continuous positive variables, f(ω1, ω2) = ω1 ω2 (13). In addition, consider that the initial values assigned to these variables are ω10 and ω20, which implies that, after applying the Taylor series expansion to (13), the following representation of this product is obtained: f(ω1, ω2) = ω10 ω20 + ω20 (ω1 - ω10) + ω10 (ω2 - ω20) + f_H.O.T.(ω1, ω2) (14), where f_H.O.T.(·) models the high-order terms of the Taylor series expansion. To obtain a linear approximation of the product of the two variables, the high-order terms in f_H.O.T.(·) can be neglected due to their small contribution around the operating point (ω10, ω20). When the linear approximation defined in (14) is applied to electrical networks to transform the power balance constraints in (3), a linear equivalent set of constraints is obtained in which each product v_{i,t} v_{j,t} is replaced by v_{j0,t} v_{i,t} + v_{i0,t} v_{j,t} - v_{i0,t} v_{j0,t} (15) [44], where v_{i0,t} and v_{j0,t} represent the linearization points for the voltage profiles at each period of time; if the per-unit representation is used, these values are equal to 1.00 pu [44]. To demonstrate the effectiveness of the linearization of the product of voltages around the operating point (v10, v20) = (1, 1), a graphic comparison between the nonlinearized and the linearized representations is considered for two small load systems. Considering a grid with two demand nodes with lower and upper voltage regulation bounds of 0.90 pu and 1.10 pu, the error between the nonlinear function in (13) and the linear representation (14) is presented in Figure 1. Note that the percentage error depicted in Figure 1 shows that the linearization of the product of voltages has an estimation error of about 1% at the extreme voltage points, which confirms that the linear representation in (14) is suitable to represent the product of the voltage variables as redefined in Equation (15). Remark 3. Once the power balance equations are linearized as defined in (15), the complete structure of the proposed MIQC model is composed of the objective function (1) or (2) and the restrictions (4)-(12) and (15). To summarize the solution methodology, Figure 2 presents the flow diagram of both mathematical models, i.e., the proposed MIQC model and the exact MINLP formulation, in the GAMS software [13]. Analysis of a DC Network To validate the MIQC model for the optimal selection and location of BESSs in DC networks using the GAMS software, we use the 21-node test feeder reported in [16]. The complete information of this test feeder is presented below.
The 21-bus system corresponds to a radial DC network with 21 nodes and 20 branches. A controlled voltage source is sited at node 1, which sets the voltage profile of the network at 1 kV. The connection among the nodes of this test feeder is presented in Figure 3. Two distributed generators (DGs) are considered in the 21-bus system: one wind power generator and one photovoltaic system. The wind power generator is located at node 12 with a maximum power rating of 221.52 kW. The photovoltaic source is located at node 21 with a maximum power rating of 281.58 kW. It is worth mentioning that the rated power of the DGs multiplies the normalized generation curves presented in Table 5. Implementation and Results The implementation of the proposed MIQC model and the exact MINLP model was made on a desktop computer running an INTEL(R) Core(TM) i7-7700 at 3.60 GHz with 8 GB RAM and 64-bit Windows 10 Pro (Intel, Santa Clara, CA, USA). The optimization package corresponds to the GAMS software version 25.1.3 using the BONMIN solver. To evaluate the performance of the BESSs in the DC network, in all the simulations the initial and final states-of-charge are set to 50%, and along the day this variable can vary from 10% to 90% [36]. In addition, three simulation cases are studied, as follows: • Case 1: The initial location of the BESSs is tested in the exact MINLP model and the MIQC model to determine the error introduced by the Taylor approximation in the daily cost of the energy losses. Note that this simulation case solves the power flow problem with multiple periods, since the binary variables related to the BESSs are fixed, i.e., the MINLP model becomes a nonlinear programming model and the MIQC model becomes a quadratic convex model. The main idea of the proposed simulation scenarios is to verify the effect of the proposed approximated MIQC model to select, locate, and operate batteries in DC networks when compared with the exact MINLP model. For this reason, both models are implemented in the same optimization environment (i.e., the GAMS interface). Regarding processing times, both optimization models take less than 5 min to be solved, which implies that, for the 21-node test system, the proposed optimization model allows evaluating multiple operative conditions (generation and demand combinations) to define the best operative scheme of the batteries, once these batteries have been installed, while ensuring that each solution will be optimal. Comparative Results in Case 1 In this simulation case, the initial BESS locations presented in Table 4 and Figure 3 are fixed in both the exact MINLP and the MIQC models. Once the exact MINLP model is solved in the GAMS package with the DICOPT solver, a total daily energy losses cost of COP $52,957.92 is found, while the solution of the MIQC model with the same solver provides an optimal solution with a cost of COP $50,890.12. The difference between both solutions is about 3.90%, which corresponds to the approximation error between the exact and approximated power balance equations (see Equations (3) and (15)). Figure 4 reports the profiles of the state-of-charge of the batteries obtained with the exact and the convex approximation models. The behavior of the state of charge in the exact and convex models follows the same tendency with negligible errors, which confirms the effectiveness of the quadratic convex approximated model for operating batteries in DC networks.
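The size of the per-constraint approximation error behind this gap can be probed directly. The short Python sketch below evaluates the hyperplane approximation of the voltage product over the ±10% regulation band described for Figure 1; it is an illustrative check under the stated assumptions (per-unit voltages, linearization point at 1.00 pu), not part of the authors' GAMS implementation.

```python
# Worst-case error of the hyperplane (first-order Taylor) approximation of v_i*v_j.
# Assumptions: per-unit voltages, linearization point v_i0 = v_j0 = 1.00 pu,
# voltage regulation band of 0.90-1.10 pu, as described for Figure 1.
import numpy as np

v0_i, v0_j = 1.0, 1.0
v = np.linspace(0.90, 1.10, 201)
vi, vj = np.meshgrid(v, v)

exact = vi * vj                                   # nonlinear product in the power balance
approx = v0_j * vi + v0_i * vj - v0_i * v0_j      # linearized product, as in Equation (15)

rel_error = 100 * np.abs(exact - approx) / exact  # percentage error over the band
print(f"maximum relative error over the band: {rel_error.max():.2f}%")
```

The worst case occurs at the corners of the band and is on the order of 1%, consistent with the error reported for Figure 1.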
The main advantage of the quadratic approximation is that the existence of the global optimal solution is ensured via convexity theory, which is indeed more attractive for grid operation since the solution obtained with the same inputs will always be the same, something that cannot be ensured with nonlinear programming models or solution techniques based on metaheuristics. It is worth mentioning that, if the solution provided by the proposed quadratic model is evaluated in the exact nonlinear model, the error in the estimation of the daily energy losses costs is less than 0.5%, which demonstrates the effectiveness of our proposal to determine the operative plan of BESSs under an economic dispatch environment for DC networks. With respect to the behavior of the state of charge of all the BESSs depicted in Figure 4, we can observe that: (i) all the batteries begin and finish the day at the assigned operating set-point, i.e., 50% state of charge; (ii) the maximum value of the state of charge is about 77.20% for the battery located at node 7, and the minimum value occurs for the same battery with a value of 48.90%; these values imply that the batteries stay between their minimum and maximum bounds, i.e., from 10% to 90%, during the whole day; and (iii) the behavior of the state of charge of each battery is different, since it depends on its location and on the possibility of absorbing energy from the renewable energy sources to return it to the grid when the load increases. Note, for example, that one of the BESSs provides energy to the grid after period 21, while the remaining BESSs return energy to the grid only after period 33; these behaviors confirm the complex relation among demand, power generation, and batteries in reaching the minimization of the daily energy losses costs in the DC grid. Comparative Results between Case 2 and Case 3 To determine the effectiveness and robustness of the proposed MIQC model to select and locate BESSs in DC networks, here we compare the solution provided by the exact MINLP formulation and by our proposed convex approximation. The solutions of the exact MINLP and MIQC models are obtained with the GAMS optimization package using the BONMIN solver. The objective function values of both models, as well as the battery locations, are reported in Table 6. The results in Table 6 show that: (i) the BESS locations found by the exact MINLP model are nodes 13, 20, and 21, with an objective function value of COP $47,209.95, i.e., a reduction of 10.85% with respect to the base case. The solution of the exact MINLP model reallocates all the batteries with respect to the benchmark case, which implies that the solution of the optimization model to select and locate the BESSs in the DC network is better than the heuristic approach reported in [36]; (ii) the proposed MIQC model finds a better solution with an objective value of COP $41,627.34, which has an estimation error of about 3.49% with respect to the evaluation of the BESSs' location in the exact model; that is, the location of the batteries at nodes 5, 16, and 21 provides a daily energy losses cost of COP $43,134.59 (note that this objective function value corresponds to the evaluation of the batteries' location provided by our proposed MIQC model in the exact MINLP approach, which eliminates the estimation error introduced by the model linearization).
This objective function value shows that, with respect to the benchmark case, the effective reduction of the daily energy losses cost is about 18.55% when the MIQC model is used to select and locate the BESSs in the DC grid; and (iii) the effective improvement of the proposed MIQC model with respect to the exact MINLP formulation is about 8.63%. Figure 5 presents the final locations of the BESSs obtained with the exact MINLP and the proposed MIQC model, where we can observe that only node 21 appears in both solutions; this is due to the presence of a distributed generator at this node, which allows storing energy in the periods of high generation and low demand. To illustrate the behavior of the BESSs in both models, we also present the state-of-charge profile at this node for both solutions, compared with the generation curve at this node. Figure 6 shows that: (i) both optimization models present a similar state-of-charge behavior at node 21; however, the proposed MIQC model provides additional energy to the grid, which helps with the minimization of the total energy losses cost (see periods 3 to 33); and (ii) both batteries provide energy in the initial periods (from 1 to 18), since this energy is recovered in the periods in which the PV source increases its power injection (see periods 15 to 33) in order to provide additional power injections in the periods in which the PV source decreases (periods 33 to 48) and the demand increases, i.e., from period 33 onwards. Conclusions This paper studied the problem of the optimal selection and location of BESSs in DC grids by transforming the exact MINLP formulation into an MIQC equivalent. This transformation was applied to the power balance constraint using a Taylor series expansion of the product of the voltages at each node. The main advantage of the obtained MIQC model is that it ensures that the global optimum is reached via branch & bound and interior-point methods. Numerical results demonstrated that the error introduced by the Taylor approximation is less than 4.0% in the economic dispatch analysis when the batteries are considered fixed. When the exact MINLP and the proposed MIQC models are solved in the GAMS software with the BONMIN solver, the numerical results demonstrate that the MINLP model obtains a reduction of 10.85% in the daily energy losses cost when compared to the benchmark case, by reallocating the BESSs to nodes {13(A), 20(B), 21(B)}; the proposed MIQC model, however, allows a reduction of 18.55% in the daily grid operation costs by reassigning the batteries to nodes {5(A), 16(B), 21(B)}. The net difference between both models was about 7.70% in favor of the proposed MIQC model when compared with the exact MINLP formulation, which confirms the robustness and effectiveness of the proposed convex formulation to select and locate BESSs in DC networks.
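As a quick sanity check on the percentages quoted in these conclusions, the following Python lines recompute them under the assumption that all reductions are measured against the Case 1 benchmark cost of COP $52,957.92:

```python
# Recompute the reported percentage reductions (assumption: all measured against the Case 1 benchmark).
base  = 52957.92   # COP, daily energy losses cost with the benchmark BESS locations (exact model)
minlp = 47209.95   # COP, exact MINLP solution (nodes 13, 20, 21)
miqc  = 43134.59   # COP, MIQC locations (nodes 5, 16, 21) re-evaluated in the exact model

red_minlp = 100 * (base - minlp) / base    # -> about 10.85 %
red_miqc  = 100 * (base - miqc) / base     # -> about 18.55 %
gain      = 100 * (minlp - miqc) / minlp   # -> about 8.63 % improvement of MIQC over MINLP
print(f"MINLP: {red_minlp:.2f}%, MIQC: {red_miqc:.2f}%, "
      f"net difference: {red_miqc - red_minlp:.2f}%, relative gain: {gain:.2f}%")
```

The printed values match the 10.85%, 18.55%, 7.70%, and 8.63% figures reported above.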
Future research could address the following directions: (i) include in the proposed MIQC model the possibility of simultaneously locating renewable energy sources and batteries in DC networks; (ii) reformulate the exact MINLP model as a mixed-integer conic model to size and locate batteries in DC grids while minimizing the total cost estimation error; and (iii) improve the battery models with temperature and degradation factors, using reinforcement learning techniques, so that they can be integrated into AC and DC networks considering different objective functions such as voltage maximization or renewables usage maximization. Funding: "... energéticos distribuidos en redes de distribución de energía eléctrica," and in part by the Dirección de Investigaciones de la Universidad Tecnológica de Bolívar under grant PS2020002, associated with the project "Ubicación óptima de bancos de capacitores de paso fijo en redes eléctricas de distribución para reducción de costos y pérdidas de energía: Aplicación de métodos exactos y metaheurísticos." Conflicts of Interest: The authors declare no conflict of interest.
Usefulness of bee bread and capped brood for the assessment of monocyclic aromatic hydrocarbon levels in the environment Monitoring airborne pollutants such as aromatic hydrocarbons has been raising growing concern in recent years. Various sampling techniques and methods are known to collect, measure, and analyse environmental pollution levels based on honey bee bodies or bee product samples. Although honey bees are studied in detail and sampling methods are becoming more and more sophisticated, biological samples may significantly differ in pollutant accumulation, showing a wide range of pollution levels even at the same site and in the same environment. We compared the pollution levels of honey bee capped brood and bee bread (pollen collected by honey bees and deposited in the hive) originating from twelve honey bee families at four sites, over two years of study, near various pollution sources emitting monocyclic aromatic hydrocarbons (BTEX) into the environment. Our results showed that the environmental monitoring of BTEX can be based on sampling honey bees, and bee bread in particular. However, we found a significant difference in the uptake of these pollutants depending on sample type. Pollen collected as a food source revealed consistently higher levels of BTEX than bee brood and, unlike capped brood, also showed differences in pollution levels between samples and between seasons. Based on our results, we suggest that bee bread is a valuable source of information for measuring and monitoring BTEX pollution in the environment. Introduction Anthropogenic pollution of the environment is a growing problem globally, and has been since the Industrial Revolution. Direct measurement of environmental pollutants can show how contaminated the soil, air, or water is, but assessing how this contamination can affect the ecosystem requires a somewhat different approach. Animal- or plant-derived samples can indicate the uptake of pollutants from anthropogenic environmental sources. Various aquatic and terrestrial organisms are used for such monitoring purposes, as are honey bees (Apis mellifera) (Bromenshenk et al., 1985; Bargańska et al., 2015). The honey bee as a species has been managed for thousands of years throughout human history and, due to its economic and agricultural importance, is nowadays widespread and abundant on almost all continents. Honey bees' worldwide distribution allows scientists to use them in various ecosystems for environmental monitoring. The bees themselves, as well as their products (honey or pollen), can be used to monitor the environment for the distribution of various pollutants: heavy metals (Bromenshenk et al., 1991; Conti and Botrè, 2001; Satta et al., 2012), essential metals (Dżugan et al., 2017), radioactive substances (Haarmann, 1997), non-organic substances (Ponikvar et al., 2005), pesticides (Chauzat et al., 2006), organic contaminants like polychlorinated biphenyls (Anderson and Wojtas, 1986), and, lately, aromatic hydrocarbons (Dobrinas et al., 2008; Perugini et al., 2009). Various sampling techniques and methods are known to collect, measure, and analyse pollution levels based on bee bodies, body parts, or bee product samples.
Although honey bees are studied in detail and sampling methods are becoming more and more sophisticated (Bargańska et al., 2015), biological samples may significantly differ in pollutant accumulation, showing a wide range of pollution levels even at the same site and in the same environment. Honey in particular has been well studied due to its importance as a bee product. It was found that variation in the trace element content of honey is first of all due to botanical origin rather than environmental exposure to pollution (Bogdanov, 2006; Bogdanov et al., 2007); however, apiculture practices and honey processing should also be considered when analysing the metal content of honey samples (Pohl, 2009). Honey bees feed on honey, which is made from floral nectar or honeydew (sap excreted by aphids living on plant sap) collected by the bees, and on pollen, also collected by worker bees from flower anthers. Floral nectar is a substance produced by the nectaries; due to the structure of flowers, it is less exposed to airborne pollution and is the least polluted bee product (Formicki et al., 2013; Jovetić et al., 2018; Matin et al., 2016). Nectar can also evaporate from the flower at high temperatures or be washed out during heavy rains, which means that the plant must constantly renew it until the flower is pollinated. For this reason, honey is usually less exposed to airborne pollutants (often travelling on particulate matter present in the air, e.g. heavy metals) and contains lower levels of such pollution than honeydew or pollen, which can be exposed to airborne pollutants for longer periods of time (Maragou et al., 2017). In addition, the high viscosity of honeydew and pollen causes them to accumulate larger amounts of pollution. Nectar and pollen can be contaminated not only by the deposition of atmospheric pollution on plants, but also through the plants' uptake of pollutants such as radionuclides or heavy metals from the soil (Bunzl et al., 1988; Ismael et al., 2019; Silva et al., 2012). The presence of pollutants in honey can therefore vary based on the flower's ability (depending on its shape or the environmental conditions during flowering) to collect these airborne pollutants, and on the plant's ability to take up and excrete pollutants into the produced nectar (Bunzl et al., 1988). These differences can be quite significant. For example, the level of heavy metal pollution can differ by a factor of one hundred depending on the type of sampled honey, usually reaching higher levels in honeydew than in monofloral honeys (Dżugan et al., 2017). A similar tendency was found when comparing radionuclide levels in various nectar honeys and honeydew honey (Barišić et al., 1999). Monofloral honeys can also differ in the level of pollutants, e.g. due to differences in flower structure and shape, such as open or closed flowers or flowers standing upright or hanging down. The fresh nectar collected by bees is also mechanically filtered by the proventriculus before reaching the crop (the nectar-collecting organ of bees), and particles 100 µm or larger are caught between the stylets of the mouthparts and are not ingested (Peng and Marston, 1986).
In honey, the composition of minerals or pollutants may also depend, besides the raw material from which the honey was produced (its botanical origin), on the climatic conditions and geographical area (Bogdanov, 2006; Bogdanov et al., 2007), and honey was found to be a poor bioindicator of heavy metal pollution in the environment (Conti et al., 2018; Pohl, 2009; Satta et al., 2012); honey processing itself can also cause additional contamination of the collected honey (Pohl, 2009). Nevertheless, there are studies suggesting that honey samples can indicate the sampled honey's geographical and botanical origins, as well as the types, sources, and degree of contamination (Solayman et al., 2015). Yet, given the large effect of botanical and climatic conditions on pollutant uptake, honey is the least reliable bee product for monitoring purposes. Pollen is usually found to be more contaminated because it is exposed to airborne pollution for longer than the continuously produced nectar, and because pollen is highly lipophilic, containing 4-8% lipids, but in some cases as much as 22.4% (Szczęsna, 2006). Honey, on the other hand, contains water, sugars, amino acids, organic acids, minerals, and other relatively hydrophilic constituents (da Silva et al., 2016). Similarly to pollen, propolis (resinous exudates gathered mainly from buds, but also from leaves, branches, and bark, and mixed with the secretion of the bees' mandibular glands) also contains more contaminants (Matin et al., 2016) and more of various microelements than honey does (Maragou et al., 2017). However, collecting propolis from the hive is a more time-consuming and complicated procedure than collecting pollen, as the latter can be collected by using readily available pollen traps or by simply taking samples straight from a comb filled with bee bread (pollen gathered and slightly fermented for storage in cells). Therefore, bee bread can be an easily accessible bee product for environmental monitoring. For more than 50 years, numerous studies have shown that measuring pollutant levels in adult honey bees can also serve monitoring purposes (Bromenshenk et al., 1985; Crane, 1984; Conti and Botrè, 2001; Wallwork-Barber et al., 1982). Most studies use adult honey bee bodies, first of all, to monitor the environment for pollutants. The level of pollution in bee bodies was found to differ significantly between environments and to correspond to the varying levels of pollution (Bargańska et al., 2015). Heavy metal concentrations in the bodies of adult bees were, for example, almost one hundred times higher in bees living in areas with a higher probability of pollution (Bromenshenk et al., 1985). The foraging activity of honey bee colonies can generally extend over a 10-km radius around the colony (Visscher and Seeley, 1982); however, in the natural environment bees will fly up to approximately 1.7 km (Waddington et al., 1994), while in an urban environment it is usually about 1.2 km (Garbuzov et al., 2015). These differences in foraging distance are usually due to differences in food source availability: the more diverse and rich the actual food sources around the colony are, the shorter the distance a bee will fly while foraging (Schneider and McNally, 1993; Beekman and Ratnieks, 2000). Nevertheless, colonies even in the same place can differ in their actual foraging area, foraging activity, distance covered, and even in preferred food sources (both for nectar and pollen).
These differences between colonies can affect the amount of pollution found in the collected pollen and in bee bodies. Bees of various ages may also differ in the levels of contamination found in their bodies or tissues. After hatching from its egg, a bee larva is fed a high-protein diet based on pollen and royal jelly (a secretion of the nursing bees' glands), which is necessary for its development. Depending on its future caste, the bee larva is fed continuously for 5-8 days by the nursing bees; after consuming all the food it is provided with, it produces a cocoon, goes through metamorphosis, and finishes its development. During its first few days of life after eclosion, a young bee will clean the brood cells, build up its protein level by consuming more and more pollen, and feed the older larvae. Later, when its hypopharyngeal glands are fully developed, it will also feed the younger larvae royal jelly (Haydak, 1963). During the last two phases, the nursing bee may be exposed to relatively high levels of contamination present in the pollen and nectar from which it produces royal jelly for the larvae and covers its own energetic and metabolic needs. Later, when hive bees turn into foragers, their diet changes: they reduce their fat bodies and consume mostly honey instead of pollen (Haydak, 1963). Monitoring airborne pollutants, such as aromatic hydrocarbons, has been raising more and more concern recently. Although in the last few years the emission of air pollutants in Europe has followed a downward trend, PM2.5 and PM10 levels in Poland continue to be among the highest in the EU. Reports of the World Health Organization (WHO) indicate that more than half of the 50 most smog-affected European cities are located in southern Poland (WHO, 2016), a region which is both highly urbanised and industrialised. The city of Kraków is ranked 11th among them. Several factors can be the source of such high concentrations of air pollution in the capital of the Lesser Poland Voivodship. The most important one is so-called "low emission" (emission from sources located at a height of up to 40 m), resulting from the combustion of solid fuels (e.g. low-quality coal) and rubbish in heating furnaces (Burchart-Korol et al., 2016; Dzikuć and Adamczyk, 2015). Another factor affecting the quality of the air in Kraków is pollution due to vehicular traffic, in particular car exhaust fumes (Dzikuć et al., 2017). The data from 2016 presented in the TomTom Traffic Index report indicate that Kraków ranks 8th among the most congested cities in Europe, and its position in this ranking is rising. In addition, as indicated by the Statistical Yearbook of Poland (GUS, 2018), one in five cars registered in Poland is over 15 years old, with an inefficient or worn-out catalytic converter. The air quality in Kraków is also strongly influenced by the location of the city in the Vistula River valley, and its dense urban development significantly limits the movement of air masses, making it impossible to disperse persistent pollution over a large area (Oleniacz et al., 2016). In the vicinity of Kraków, there are other significantly polluted areas. Stretching westward from Kraków is the Katowice industrial region, with a number of coal and ore mines, smelters, and other heavy industrial activities. One of the closest industrial sites is just outside the city of Olkusz, 30 km northwest of Kraków: the zinc smelter ZGH "Bolesław" in Bukowno.
It pollutes the environment both with the by-products of previous mining activities, stored in ore heaps, and with air pollution from combustion and other technological processes during the production of various forms of zinc from metalliferous ores. In our study, we described the aromatic hydrocarbon pollution levels of honey bee capped brood (larvae that have finished feeding and are enclosed in the cocoon in a closed, capped cell, later changing into pupae) and of bee bread (pollen collected by bees and prepared for storage and larval feeding) in hives located at sites with different sources of air pollution (mostly urban or mostly industrial) in southern Poland: in the city of Kraków and around the city of Olkusz. The aim of our study was to test how high the uptake in bee bread and capped brood is and how reliable single bee bread or capped brood samples are for monitoring urban and industrial areas for aromatic hydrocarbon pollution. So far, only a few studies have been conducted on the monitoring of polycyclic aromatic hydrocarbons (PAHs) with the use of either adult honey bees (Perugini et al., 2009) or, additionally, honey bee products (Lambert et al., 2012; Kargar et al., 2017). We chose capped brood instead of adult bees based on the growth and life cycle of honeybees. One can expect high levels of contamination in bee bodies during the last phase of larval development (Haydak, 1963). The pollution level depends mostly on the level of contamination of pollen, the protein source for developing larvae; therefore, capped brood may serve as a better indicator of environmental pollution levels than adult forager bees, which feed mostly on nectar. Bee bread stored in the hive may also have higher pollution levels than pollen. Pollen is transported by forager bees to the hive, where it is deposited and processed for storage, possibly also gathering additional pollution from the forager bee's body during deposition. Therefore, bee bread may give a better picture of the total environmental pollution in the area surrounding the hive. Materials and methods Samples of bee bread (further called pollen) and capped brood were collected at two urban-type sites in the city of Kraków (sites K1 and K2, further called urban sites) and at two industrial sites near and in the city of Olkusz (sites O1 and O2, further called industrial sites). Site K1 (50°03′39.4″N, 19°52′17.0″E) was located in the vicinity of the Wolski Forest on the outskirts of Kraków, 4 km from the city centre (the Main Market Square), in a predominantly residential area with detached houses, meadows, and woodland, but within a few hundred metres of a public road with heavy traffic. Site K2 (50°03′40.9″N, 19°55′48.1″E) was located in the old city centre of Kraków, a few hundred metres from the Main Square, in the Monastery of the Capuchin Friars Minor. Considering the average flight range of honeybees, which in an urban environment can reach about 1.2 km (Garbuzov et al., 2015), the possible foraging areas of the sampled bee colonies from sites K1 and K2 did not overlap, as shown in Fig. 1, but both were located inside the city area. Site O1 (50°17′03.6″N, 19°26′57.5″E) was located in the village of Bukowno, which neighbours the city of Olkusz, less than two km from the ZGH "Bolesław" zinc smelter and the ore heaps located just outside of the city of Olkusz.
Site O2 (50°18′14.6″N, 19°32′34.7″E) was located at a distance of about 3 km from Olkusz's city centre, in an area of detached houses neighbouring a woodland and lying approximately 4 km from the smelter and the ore heaps. Both sites, O1 and O2, were located approximately 3 km from national road No. 94, which experiences heavy traffic. The locations of all sampling sites are presented in the maps in Fig. 1. The pollution of the atmosphere with benzene in the Lesser Poland Voivodship was monitored in 2018 and the mean concentration did not exceed 3 µg/m³. Specifically for Kraków, the benzene pollution levels for this period, based on 3 sampling points, were: min. 2.1 µg/m³, max. 2.8 µg/m³, and mean 2.32 µg/m³ in 2018. No such data are available for the city or the surroundings of Olkusz. At each site, capped brood and pollen were collected from three stationary hives owned by local beekeepers. Two pieces of comb were cut out for testing, each with a surface area of at least 15 cm × 15 cm (or more when necessary): one containing stored bee pollen and the other capped brood. Samples were collected in two seasons: 2017 and 2018. In each season, all colonies were sampled twice, in the spring (the end of April or May, depending on the weather) and in the summer (June). The pieces of comb were placed in airtight polyethylene bags, kept cool in portable cooling boxes, and transported back to the laboratory, where they were kept frozen at −20 °C until analysis. Prior to the analysis of volatile organic compounds (BTEX), the samples were defrosted and homogenised to obtain the most homogeneous mass possible, and then weighed out in amounts of 0.4 g. The samples were not dried before the analysis, due to possible losses of volatile organic compounds during drying. Three weighed portions were prepared from the sample from a given hive. BTEX hydrocarbon concentrations (benzene, toluene, ethylbenzene, and p-xylene) were analysed using the GC/MS technique with a headspace injector and n-butylbenzene as an internal standard. The analyses were performed with a Hewlett Packard 6890 chromatograph equipped with a Hewlett Packard HP7694E headspace sampler and an Agilent 5HS 30 m × 0.25 mm × 0.25 µm capillary column. The fused silica 30 m capillary column, containing 10% phenyl groups, was run at constant pressure. Helium was used as the carrier gas at a flow rate of 1.0 ml/min. The following conditions were used: an initial temperature of 55 °C, an equilibration time of 3 min, a temperature gradient of 15 °C/min, and a final temperature of 120 °C kept for 3 min. The vial pressure was 50 psi, the vial pressurizing time was 0.3 min, and the vial sampling time was 0.33 min. The temperatures of the ion source and the quadrupole were 230 °C and 150 °C, respectively. Detection was conducted using electron impact ionization at 70 eV in selected ion monitoring (SIM) mode at m/z values of 78, 91, 106, 106, and 134 amu for the selective detection and quantification of benzene, toluene, ethylbenzene, p-xylene, and n-butylbenzene, respectively. Unfortunately, no certified reference materials are available for honey bees and pollen or bee bread; therefore, we used the following hydrocarbon standards: benzene, toluene, ethylbenzene, and p-xylene. Their LOD values were the following: benzene 0.243 ng/g and 0.6967 ng/g, toluene 0.189 ng/g and 0.866 ng/g, ethylbenzene 0.170 ng/g and 0.943 ng/g, and p-xylene 0.766 ng/g and 0.420 ng/g for capped brood and pollen, respectively.
Their LOQ values were the following: benzene 0.810 ng/g and 2.322 ng/g, toluene 0.631 ng/g and 2.885 ng/g, ethylbenzene 0.566 ng/g and 3.14 ng/g, and p-xylene 2.553 ng/g and 1.399 ng/g for honeybees and bee bread, respectively. Hydrocarbon concentrations in standard solutions 1-6 were in the range of 0.167-0.850 µg/ml (hydrocarbon dissolved in methanol). The concentration of solution No. 7 was about 5 times higher than that of solution No. 6. Solution No. 7 was prepared to check whether the linearity of the calibration curve (y = ax + b) was maintained at much higher concentrations. Calibration of the chromatographic system for the analysis of monocyclic aromatic hydrocarbons was performed not on pure standard solutions, but on weighed portions of capped brood and pollen with the addition of these solutions. This treatment was aimed at eliminating the influence of matrix interference effects on the course of the calibration curve. The capped brood and pollen used for calibration were obtained from an apiary located outside Kraków. They were packed into sealed bags immediately after being removed from the comb. After packing, the material was homogenised; the homogenization time was 2 min. Fourteen samples of capped brood and 14 samples of pollen (0.4 g each) were prepared in glass vials. Then, each of the seven calibration solutions was dosed to two bee samples and two pollen samples in an amount of 50 µl. The amounts of individual hydrocarbons added to the samples ranged from 8.4 to 212.6 ng, which, per 1 g of capped brood or pollen, gives values from 21 to 532 ng. The weight of 0.4 g was the maximum mass of capped brood that could be obtained for one measurement from the average comb slice obtained for testing. Using a smaller amount of sample would result in the volatile compounds evaporating into a larger volume of the headspace phase in the vial, and thus in smaller peak areas and greater uncertainty in the results obtained. Despite weighing out the same capped brood and pollen mass (0.4000 ± 0.0010 g) each time, these portions differ in volume, and thus the volume of the headspace phase in the vial (and the concentration of volatile substances in this phase) is different for each weighing. Adding an internal standard to each weighed portion and taking its peak area into account in the calculations allows the differences in peak areas resulting from unequal headspace volumes to be eliminated. Therefore, in addition to the calibration solutions, 43 ng of internal standard (n-butylbenzene) in the form of a 50 µl methanol solution was added to the capped brood and pollen portions, and the vial prepared in this way was sealed and subjected to chromatographic analysis. The linearity range for each hydrocarbon, for both capped brood and pollen, was up to 0.5 µg/g. Accuracy (and precision, given as standard deviation) was determined separately for capped brood and pollen. For the calculation of hydrocarbon concentrations in real samples, calibration curve equations of the form y = ax were used. The calibration curves were plotted as the dependence of S_cor (corrected area) on c (hydrocarbon content per gram of capped brood or pollen). S_cor was calculated as the quotient of the peak area characteristic of a given hydrocarbon and the peak area of the internal standard (n-butylbenzene) (Fig. 2). For each colony, the levels of pollen and capped brood pollution were calculated based on a maximum of three repeated measurements per sample.
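To make the quantification workflow concrete, the following is a minimal Python sketch of internal-standard calibration of the kind described above. The numbers are illustrative placeholders rather than the authors' data, and the variable and helper-function names are assumptions introduced only for this example.

```python
# Minimal sketch of internal-standard quantification (illustrative values only).
# S_cor = (analyte peak area) / (n-butylbenzene peak area); calibration forced through the
# origin, y = a*x, fitted on spiked capped-brood/pollen portions (x in ng per g of matrix).
import numpy as np

# Hypothetical calibration points for one hydrocarbon: spiked amount (ng/g) and corrected areas.
x_spiked = np.array([21.0, 42.0, 84.0, 168.0, 336.0, 532.0])   # ng per g of matrix
s_cor    = np.array([0.05, 0.11, 0.21, 0.43, 0.85, 1.34])      # area ratio analyte / internal standard

# Least-squares slope of y = a*x through the origin: a = sum(x*y) / sum(x*x).
a = np.sum(x_spiked * s_cor) / np.sum(x_spiked ** 2)

def concentration(peak_area_analyte, peak_area_istd):
    """Return hydrocarbon content (ng/g) from one chromatogram of a real sample."""
    return (peak_area_analyte / peak_area_istd) / a

print(f"slope a = {a:.5f}; example sample: {concentration(5.2e5, 9.8e5):.1f} ng/g")
```

In practice one curve of this type would be fitted per hydrocarbon and per matrix (capped brood vs. pollen), since the matrix effects and the LOD/LOQ values reported above differ between the two sample types.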
We calculated the coefficients of variation between colonies from the same site, sampled at the same time in the season and in the same year. We then compared these values to the coefficients of variation between sites of the same type and from the same year using a t-test. The possible correlation between pollen and capped brood samples originating from the same families at the same time was assessed using Spearman's correlation, which is less sensitive to possible strong outliers. Next, the mean values over all families per site were calculated to assess the pollution level of each site. Due to the non-normal distribution of the data, non-parametric tests were used for further analysis. For the comparison of pollen and capped brood contamination levels between years and site types, the mean values calculated for each site were used (to weigh against the unequal colony numbers at some sites) and analysed using the Mann-Whitney U test. To compare pollen and capped brood contamination from the same family and to assess possible seasonal differences between sites, a paired t-test was used. All calculations were done using Statistica 13 (Dell Statistica, 2016); a minimal sketch of this statistical pipeline is given after this passage. Results We collected 43 pollen samples and 46 samples of capped brood from 12 hives, during the two years and in two different seasons (Table 1A and B). The mean level of BTEX pollution was generally higher in the pollen (mean ± SE: 29.2 ± 2.15 µg/kg) than in the capped brood (17.3 ± 1.22 µg/kg) (t = 21, df = 16, p = 0.015), although in some cases in 2017, due to high ethylbenzene levels in some samples, this trend was reversed (Table 1A). An analysis based on the mean pollution levels calculated for each colony showed that the contamination of bee bread and capped brood with BTEX at urban and industrial sites was similar (bee bread: U = 30, p = 0.878; capped brood: U = 29, p = 0.798) (Fig. 3). However, some differences between the study years were found. The pollution levels found in the bee bread were higher in 2018 than in 2017 (U = 6.0, p = 0.005), but not in the capped brood (U = 30.0, p = 0.878) (Fig. 4). The mean BTEX contamination of bee bread samples in the spring (mean ± SE: 27.0 ± 2.95 µg/kg) and summer (31.4 ± 3.11 µg/kg) samplings was similar (t = −1.23, df = 19, p = 0.234), and the contamination of capped brood samples was likewise similar between seasons (spring mean ± SE: 16.8 ± 1.46 µg/kg and summer 17.8 ± 1.99 µg/kg). Detailed analysis of the four measured components of BTEX contamination did not show any significant difference between the two types of sites, either in bee bread or in capped brood samples (Table 2A). [Fig. 2: Calibration curves of the four hydrocarbons (benzene, toluene, ethylbenzene and p-xylene), plotted as the dependence of S_cor (corrected area) on c (hydrocarbon content per gram of capped brood or bee bread); S_cor was calculated as the quotient of the peak area of a given hydrocarbon and the peak area of the internal standard (n-butylbenzene).] The highest levels in both bee bread and capped brood were those of toluene, while the lowest were those of ethylbenzene in bee bread and benzene in capped brood (Fig. 5a and b). There was also no significant difference between the spring and summer samples of the bee bread and the capped brood, except for toluene levels in the bee bread: the summer samples had significantly higher toluene levels in the bee bread (Table 2B).
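The statistical pipeline described in the Methods above can be sketched in a few lines of Python; the arrays below are illustrative placeholders rather than the study data, and numpy/scipy are assumed here only as stand-ins for the Statistica 13 package actually used.

```python
# Illustrative sketch of the statistical comparisons described in the Methods (not the study data).
import numpy as np
from scipy import stats

# Hypothetical per-colony mean BTEX levels (ug/kg) for three colonies at one site and date.
bread = np.array([27.1, 31.4, 24.9])   # bee bread
brood = np.array([16.2, 18.0, 17.5])   # capped brood, same colonies

cv_bread = 100 * bread.std(ddof=1) / bread.mean()   # coefficient of variation between colonies (%)
rho, p_rho = stats.spearmanr(bread, brood)          # bread vs. brood correlation (robust to outliers)
t, p_t = stats.ttest_rel(bread, brood)              # paired comparison of bread and brood from the same colonies
u, p_u = stats.mannwhitneyu(bread, brood)           # Mann-Whitney U (used in the paper for years and site types)

print(f"CV={cv_bread:.1f}%  rho={rho:.2f}  t={t:.2f}  U={u:.1f}")
```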
[Table 1: Mean (±SD) BTEX pollution levels at the two industrial and two urban sites, measured twice per season (spring and summer) in bee bread stored by bees and in capped brood, in two consecutive seasons: 2017 (A) and 2018 (B).] The benzene (r(40) = 0.423, p = 0.005), toluene (r(40) = 0.604, p < 0.001), and p-xylene (r(40) = 0.561, p < 0.001) levels in the capped brood corresponded positively to the pollution levels found in bee bread, while ethylbenzene showed a somewhat weaker, negative correlation (r(40) = −0.350, p = 0.023) between the two sample types (Fig. 6). The coefficients of variation between the samples from the same site and the same season were compared, and we found that in the case of bee bread these values ranged between 2.6% and 59.2% per site (Table 3A), while in the case of the capped brood they ranged between 9.6% and 41.5% (Table 3B). No significant difference was found between the coefficients of variation of the bee bread and capped brood samples (t = −0.46, df = 15, p = 0.652). We also calculated the coefficients of variation between sites and found values similar to, or somewhat higher than, the coefficients of variation between colonies at the same site (Table 3). For bee bread, the range was between 6.9% and 76.6% (Table 3A), while for capped brood it was 12.7%-71.6% (Table 3B). Discussion Our results show that the environmental monitoring of BTEX can be based on sampling honey bee capped brood, and bee bread in particular. However, there is a significant difference in the uptake of these pollutants depending on sample type. Bee bread collected as a food source revealed consistently higher levels of BTEX than capped brood (Fig. 3), as well as differences between years, as opposed to capped brood. Honey bees in urban areas collect pollen from 0.5 to 1.2 km around the hive (Garbuzov et al., 2015). However, this 0.8-4.5 km² area may not, in the case of urban honey bees, be covered uniformly by the foragers of each family. Bees learn the location of a food source from each other, so each colony might forage on different parts of this larger potential zone. Such foraging differences can result in varying pollution uptake, depending on where in the surrounding area and on which flowers the bees of a certain colony mostly forage. These differences are quite visible when comparing the coefficients of variation between colonies from the same site. Both the bee bread and the capped brood samples showed a wide variance, suggesting that the families studied indeed used different food sources, even when located at the same sites (Table 1). In fact, the coefficient of variation between colonies from the same site and the coefficient of variation between the mean pollution levels of the four sites are similar. The wide variance of pollution levels measured at the same site but in different hives shows clearly that monitoring should be based on more than one colony (Table 3). A minimum of three colonies, as in our study, or optimally more, should be used to achieve an accurate mean pollution level per site. This is even more true if only small differences are expected in pollution levels between sites, as in our example. The studies published so far have used different numbers of colonies for biomonitoring purposes: some based their results on a single colony per site, while others used more, usually at least three colonies per site. Honey bee larvae are fed primarily royal jelly, with a growing addition of pollen over time.
Pollen (both in the form of royal jelly and as bee bread) is the larvae's source of the proteins necessary for development. (Table 2 caption: Statistical comparison of the level of BTEX pollutants found in bee bread and capped brood samples between urban and industrial sites (A) and between spring and summer samples (B); * indicates a statistically significant difference.) The lower pollution levels found in honey bee capped brood are a natural phenomenon and correspond to the results of Lambert et al. (2012), who also found lower levels of PAHs in honey bee bodies than in pollen samples from the same sites. In addition to protein (from pollen), bees also need carbohydrates, lipids, minerals, and water for their development, which are found mostly in the honey produced from flower nectar. Honey made from flower nectar is a less polluted food source (Formicki et al., 2013; Jovetić et al., 2018), so feeding the larvae both honey and pollen can explain the lower, more diluted pollution levels in the capped brood's bodies. Lower pollution levels in capped brood were not followed by lower variance levels, as one might expect. Pollution levels tend to follow a right-skewed lognormal distribution, which causes lower variance at lower mean values due to the skewness of the data. Therefore, one can assume that bee bread, with its higher mean pollution values and similar variance between samples from the same site, actually gives a more accurate picture of pollution than the less polluted capped brood. Bee bread pollution levels may also correspond more accurately to environmental pollution levels than capped brood. The BTEX levels found in the capped brood, which was fed the bee bread present in the hive and analysed for pollution, did not correspond fully to the levels found in the bee bread. While substantially more BTEX was found in the bee bread samples in 2018, such increased levels of BTEX pollution did not appear in the capped brood in that year of our study; moreover, the capped brood actually showed similar levels on both site types and through both years, regardless of changes in bee bread pollution. There are two possible explanations: either there is a mechanism controlling some of the BTEX pollution levels in the capped brood, or the nursing bees were choosing less-polluted bee bread to feed to the larvae. In both cases, monitoring of BTEX pollution in the capped brood may result in a false, reduced picture of overall pollution, due to the controlled or selective uptake of pollutants. This could explain why in our samples the levels of three out of four pollutants (benzene, toluene, and p-xylene) nevertheless correlated between the bee bread and capped brood samples, while one (ethylbenzene) did not, and actually showed a negative correlation. Assuming possible differences in pollution levels at different time points, there is also a possibility that in some cases the bee bread samples taken from the hive were not fresh and therefore represented a different timeframe of pollution than the larvae, which are usually fed fresh bee bread. Such a scenario could cause not only discrepancies between the pollution levels found in the capped brood and bee bread samples, but also higher pollution levels in the capped brood than in the bee bread taken from the same colony at the same time. (Fig. 6 caption: Correlation of BTEX pollution levels (benzene, toluene, ethylbenzene, p-xylene) between capped brood and bee bread originating from the same colonies.)
Such a situation also occurred in our study at site O1, during both the spring and the summer sampling, and in the case of the spring samples from K1 and K2 in 2017 (Table 1). In the case of the O1 and K1 spring samples, in some of the colonies the amount of bee bread was actually too scarce to run the analysis; therefore, the results obtained from the bee bread are based on one or two colonies. This could also mean that the nursing bees in these colonies were forced to use all of the (probably leftover, older) bee bread in the hive to feed the sampled capped brood during the open brood phase. As a result, they could have fed the larvae bee bread that was more polluted, as it would have been collected earlier in the season, when household heating was still causing more air pollution at these sites. Although in our case this might cause a discrepancy, sampling bee bread on marked combs (combs added at known dates, with their filling monitored) can allow pollutants to be sampled over longer periods of time. Additionally, bee bread sampling does not deprive the colony of fresh pollen completely, as happens when pollen traps are used. Based on our results, we suggest that for measuring and monitoring BTEX pollution, bee bread is a better source of information about environmental pollution levels than capped brood. Bee bread usually has a higher level of pollution than capped brood, which allows for more accurate analysis, and it is also easier to extract from the cell than capped brood. Bee larvae may also be fed selectively, or may possess a mechanism which controls the uptake of BTEX from food. It is also important to remember that honey bee families, even if they are located in the same place, can prefer certain areas and pollen sources (flowers). Therefore, sampling should be based on a minimum of three, but ideally even more, bee families in order to achieve better coverage of the tested area. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
An Optimal Distributed Algorithm with Operator Extrapolation for Stochastic Aggregative Games This work studies Nash equilibrium seeking for a class of stochastic aggregative games, where each player has an expectation-valued objective function depending on its local strategy and the aggregate of all players' strategies. We propose a distributed algorithm with operator extrapolation, in which each player maintains an estimate of this aggregate by exchanging this information with its neighbors over a time-varying network, and updates its decision through the mirror descent method. Operator extrapolation is applied to the search direction so that two historical gradient samples are utilized to accelerate the convergence. Under the strong monotonicity assumption on the pseudo-gradient mapping, we prove that the proposed algorithm achieves the optimal convergence rate of $\mathcal{O}(1/k)$ for Nash equilibrium seeking of stochastic games. Finally, the algorithm performance is demonstrated via numerical simulations. Introduction In recent years, non-cooperative games have been widely applied to decision-making in networked systems, where selfish networked players aim to optimize their own objective functions, which are coupled with other players' decisions. The Nash equilibrium (NE) is one of the most widely used solution concepts for non-cooperative games, under which no participant can improve its utility by unilaterally deviating from the equilibrium strategy [1]. Specifically, aggregative games are an important class of non-cooperative games, in which each player's cost function depends on its own strategy and the aggregate of all players' strategies [2]. This aggregative feature emerges in numerous decision-making problems over networked systems, where the individual utility is affected by a network-wide aggregate. Hence, aggregative games and their NE seeking algorithms have been widely studied, with applications in charging coordination for plug-in electric vehicles [3,4], multidimensional opinion dynamics [5,6], communication channel congestion control [7], and energy management in power grids [8,9], among others. It is also worth pointing out that a stochastic game should be adopted as the decision-making model when there are information uncertainties, dating back to [10], while NE seeking for stochastic games with various sampling schemes has been studied recently [11,12]. Hence, when the game has both information uncertainties and aggregative objective functions, stochastic aggregative games and their NE seeking have been studied [13,14,15]. Generally speaking, NE seeking methods include centralized, distributed, and semi-decentralized methods. In centralized methods [16], each player needs to know the complete information to compute an NE in an introspective manner. In semi-decentralized methods [17], there exists a central coordinator gathering and broadcasting signals to all players. In distributed methods [18], each player has its private information and seeks the NE through local communications with other players. As such, designing distributed NE seeking algorithms has gained tremendous research interest due to the benefits of decentralization. Existing works on distributed NE seeking largely reside in best-response (BR) schemes [19][12][6] and gradient-based approaches [20][21][22]. In BR schemes, players select a strategy that best responds to their rivals' current strategies.
For example, [23] proposed an inexact generalization of this scheme in which an inexact best response strategy is computed via a stochastic approximation scheme. In gradient-based approaches, the algorithm is easily implementable with a low computational complexity per iteration. For instance, [24] designed an accelerated distributed gradient play for strongly monotone games and proved its geometric convergence, although each player needs to estimate all players' strategies. To reduce the communication costs, [25] proposed a consensus-based distributed gradient algorithm for aggregative games, in which players only need to estimate the aggregate in each iteration. Furthermore, [26] proposed a distributed gradient algorithm based on the iterative Tikhonov regularization method to solve the class of monotone aggregative games. [27] proposed a fully asynchronous distributed algorithm and rigorously showed its convergence to a Nash equilibrium. Besides, [28,9] proposed continuous-time distributed algorithms for aggregative games. Moreover, a fast convergence rate is an indispensable bonus for a distributed Nash equilibrium seeking algorithm, since it implies fewer communication rounds. Particularly for stochastic games, distributed NE seeking with a fast convergence rate is highly desirable, since it also implies a low sampling cost for stochastic gradients. Motivated by this, [30] designed an extra-gradient method to improve the speed of NE seeking, but it requires two operator evaluations and two projections at each iteration. To further reduce the computational burden, [31] proposed a simpler recursion with one operator evaluation and one projection at each iteration, which evaluates the operator at the reflected point x_t + β_t(x_t − x_{t−1}); this is called the projected reflected gradient method. However, the iterate x_t + β_t(x_t − x_{t−1}) may sit outside the feasible set X, so the strong monotonicity assumption on the pseudo-gradient must be imposed over R^n. Recently, [32] proposed an operator extrapolation (OE) method to solve the stochastic variational inequality problem, for which the optimal convergence rate can be achieved with one operator evaluation and a single projection per iteration. Motivated by the demand for fast NE seeking algorithms for stochastic aggregative games and inspired by [32], we propose a distributed operator extrapolation (OE) algorithm via mirror descent and dynamic average tracking. At each stage, each player aligns its intermediate estimate via a consensus step with its neighbors' estimates of the aggregate, samples an unbiased estimate of its payoff gradient, takes a small step via the OE method, and then mirrors it back to the strategy set. The algorithm achieves the optimal convergence rate O(1/k) for the class of stochastic strongly monotone aggregative games. Numerical experiments on a Nash-Cournot competition problem demonstrate its advantages over some existing NE seeking methods. In addition, we compare our algorithm with some other distributed algorithms for aggregative games and summarize those works in Table 1. Specifically, we compare the iteration complexity and communication rounds required to achieve an ε-NE, i.e., an iterate x satisfying E[‖x − x*‖²] ≤ ε. The proposed algorithm can achieve the optimal convergence rate with only one communication round per iteration, in comparison with the multiple communication rounds required by [6] and [29].
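To make the per-player update described above concrete, the following is a schematic sketch of one iteration (consensus averaging of the aggregate estimates, one stochastic gradient sample, an operator-extrapolated step, and dynamic average tracking), written with a Euclidean regularizer so the prox-mapping reduces to a projection. The function names, step sizes, and projection set are illustrative assumptions, not the exact scheme or parameters analyzed later in the paper.

```python
# Schematic sketch of one iteration of the distributed OE scheme described above,
# for player i with Euclidean regularizer h(x) = ||x||^2 / 2 (so the prox-mapping
# reduces to a projection). Names, step sizes and the projection are placeholders.
import numpy as np

def one_iteration(i, W_k, v, x, grad_prev, sample_grad, project, alpha, lam):
    """v: dict of aggregate estimates; x: dict of strategies; grad_prev: last sampled
    gradient of player i; sample_grad(x_i, agg) returns a stochastic gradient sample."""
    N = len(x)
    # (1) Consensus step: mix the neighbors' estimates of the average aggregate.
    v_hat_i = sum(W_k[i][j] * v[j] for j in range(N))
    # (2) One stochastic gradient sample at the current strategy and tracked aggregate.
    grad_cur = sample_grad(x[i], N * v_hat_i)
    # (3) Operator-extrapolated step: reuse the previous sample in the search direction.
    direction = grad_cur + lam * (grad_cur - grad_prev)
    x_i_new = project(x[i] - alpha * direction)
    # (4) Dynamic average tracking of the aggregate with the renewed strategy.
    v_i_new = v_hat_i + x_i_new - x[i]
    return x_i_new, v_i_new, grad_cur
```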
Though [27] can achieve convergence rate 1/k with fixed step-sizes for a deterministic game, our work can achieve the optimal convergence result with diminishing step for a stochastic game, which can model various information uncertainties. The rest of paper is organized as follows. In section 2, we give the formulation of the stochastic aggregate games, and state the assumptions. In section 3, we propose a distributed operator extrapolation algorithm and provide convergence results for the class of strongly monotone games. The proof of main results are given in Section 4. We show the empirical performance of our algorithm through the Nash-cournot model in section 5, and give concluding remarks in Section 6. Notations: A vector x is a column vector while x T denotes its transpose. x, y = x T y denotes the inner product of vectors x, y. x denotes the Euclidean vector norm, i.e., x = โˆš x T x. A nonnegative square matrix A is called doubly stochastic if A1 = 1 and 1 T A = 1 T , where 1 denotes the vector with each entry equal 1. I N โˆˆ R Nร—N denotes the identity matrix. Let G = {N, E} be a directed graph with N = {1, ยท ยท ยท , N} denoting the set of players and E denoting the set of directed edges between players, where ( j, i) โˆˆ E if player i can obtain information from player j. The graph G is called strongly connected if for any i, j โˆˆ N there exists a directed path from i to j, i.e., there exists a sequence of edges (i, i 1 ), (i 1 , i 2 ), ยท ยท ยท , (i pโˆ’1 , j) in the digraph with distinct nodes i m โˆˆ N, โˆ€m : 1 โ‰ค m โ‰ค p โˆ’ 1. Problem Formulation In this section, we formulate the stochastic aggregative games over networks and introduce basic assumptions. Problem Statement We consider a set of N non-cooperative players indexed by N {1, ยท ยท ยท , N}. Each player i โˆˆ N choose its strategy x i from a strategy set X i โˆˆ R m . Denote by x (x T 1 , ยท ยท ยท , x T N ) T โˆˆ R mN and x โˆ’i {x j } j i the strategy profile and the rival strategies, respectively. In an aggregative game, each player i aim to minimize its cost function f i (x i , ฯƒ(x)), where ฯƒ(x) N j=1 x j is an aggregate of all players' strategies. Furthermore, given ฯƒ(x โˆ’i ) N j=1, j i x j , the objective of player i is to minimize its parameterized stochastic local cost: We consider the scenario that each player i โˆˆ N knows the structure of its private function f i , but have no access to the aggregate ฯƒ(x). Instead, each player may communicate with its neighbors over a time varying graph G k = {N, E k }. Define W k = [ฯ‰ i j,k ] N i, j=1 as the adjacency matrix, where ฯ‰ i j,k > 0 if and only if ( j, i) โˆˆ E k , and ฯ‰ i j,k = 0, otherwise. Denote by N i,k { j โˆˆ N : ( j, i) โˆˆ E k } the neighboring set of player i at time k. 2 Literature stochastic Method rate Iteration communication sample [6] ร— BR-based -v rounds - [27] ร— gradient-based 1/k -1 round - [29] gradient-based 1/k O(ln(1/ )) k + 1 rounds N k This work gradient-based 1/k Corollary1 1 round 1 In column 2 "stochastic", means that the studied problem is stochastic, while ร— implies that the studied problem is deterministic; In columns 4 and 5, dash -implies that the literature has not studied this property; In column 6 "communication" stands for that the communication round required at each iteration; In column 7 "sample" stands for the number of sampled gradients per iteration. 
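As a small illustration of the doubly stochastic weight matrices defined above, the helper below builds one from an undirected graph with the local-degree rule w_ij = 1/max(|N_i|, |N_j|) that is also used later in the simulation section. This construction is an illustrative assumption, not code taken from the paper.

```python
# Illustrative helper: build a doubly stochastic weight matrix from an undirected graph
# using the local-degree rule w_ij = 1 / max(|N_i|, |N_j|) for edges; the self-weight
# absorbs the remaining mass so every row (and, by symmetry, column) sums to one.
import numpy as np

def local_degree_weights(adjacency):
    """adjacency: symmetric 0/1 numpy array with zero diagonal."""
    n = adjacency.shape[0]
    deg = adjacency.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adjacency[i, j]:
                W[i, j] = 1.0 / max(deg[i], deg[j])
        W[i, i] = 1.0 - W[i].sum()
    return W

# Example: a 4-node ring graph
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
print(local_degree_weights(A))
```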
Assumptions We impose the following conditions on the time-varying communication graphs (c) There exists a positive integer B such that the union graph N, B l=1 E k+l is strongly connected for all k โ‰ฅ 0. We define a transition matrix ฮฆ(k, s) = W k W kโˆ’1 ยท ยท ยท W s for any k โ‰ฅ s โ‰ฅ 0 with ฮฆ(k, k + 1) = I N , and state a result that will be used in the sequel. We require the player-specific problem to be convex and continuously differentiable. Assumption 2. For each player i โˆˆ N, (a) the strategy set X i is closed, convex and compact; (b) the cost function From (3) and Assumption 2(c), it follows that the pseudogradient ฯ†(x) is continuous. Since each player-specific problem is convex, by [34, Proposition 1.4.2], x * is a NE of (1) if and only if x * is a solution to a variational inequality problem VI(X, ฯ†), i.e., finding x * โˆˆ X such that Since ฯ† is continuous, X is convex and compact, the existence of NE follows immediately by [34,Corollary 2.2.5]. We impose the following Lipschitz continuous conditions. In addition, we require each ฯˆ i (x i , ฯƒ; ฮพ i ) to be differentiable, and assume there exists a stochastic oracle that returns unbiased gradient sample with bounded variance. Algorithm Design and Main Results In this section, we design a distributed NE seeking algorithm and prove its optimal convergence to the Nash equilibrium in the mean-squared sense. Distributed Mirror Descent Algorithm with Operator Extrapolation We assume throughout that the paper that the regularization function h is 1-strongly convex, i.e., We define the Bregman divergence associated with h as follows. Recall that the operator extrapolation method for VI in [32] requires a simple recursion at each iteration given by which only involves one operator evaluation F (x t ) and one prox-mapping over the set X. Suppose that each player i at stage k = 1, 2, ยท ยท ยท selects a strategy x i,k โˆˆ X i as an estimate of its equilibrium strategy, and holds an estimate v i,k for the average aggregate. At stage k + 1, player i observes its neighbors' past information v j,k , j โˆˆ N i,k and updates an intermediate estimate by the consensus step (8), then it computes its partial gradient based on the sample observation, and updates its strategy x i,k+1 by a mirror descent scheme (9) with operator extrapolation [32] by setting 1 ) without loss of generality. Finally, player i updates the average aggregate with the renewed strategy x i,k+1 by the dynamic average tracking scheme (10). The procedures are summarized in Algorithm 1. Algorithm 1 Distributed Mirror Descent Method with Operator Extrapolation Strategy Update. Each player i โˆˆ N updates its equilibrium strategy and its estimate of the average aggregate by where ฮฑ k > 0, ฮป k > 0, and ฮพ i,k denotes a random realization of ฮพ i at time k. Define the gradient noise Main Results Define With the definition of the Bregman's distance, we can replace the strong monotonicity assumption by the following assumption. This assumption taken from [32] includes ฯ†(x), x โˆ’ x * โ‰ฅ ยต x โˆ’ x * , โˆ€x โˆˆ X as the special case when h(x) = x 2 /2. Assumption 5. There exists a constat ยต > 0 such that With this condition, we now state a convergence property of Algorithm 1, for which the proof can be found in Section 4.2. Proposition 1. Consider Algorithm 1. Let Assumptions 1-5 hold. Assume, in addition, that there exists a positive sequence Then where Remark 1. 
Consider the special case where the digraph G k is a complete graph for each time k with W k = This recovers the bound of [32,Theorem 3.3]. In the following, we establish a bound on the consensus error ฯƒ(x k ) โˆ’ Nv i,k+1 of the aggregate, for which the proof can be found in Section 4.3. where the constants ฮธ, ฮฒ are defined in (2), and By combining Proposition 1 and Proposition 2, we can show that the proposed method can achieve the optimal convergence rate for solving the stochastic smooth and strongly monotone aggregative games. The proof can be found in Section 4.4. Theorem 1. Consider Algorithm 1. Suppose Assumptions 1-5 hold. Set Then the following hold with c e 4NC N i=1 L f i M i . where Corollary 1. The number of iterations (the same as communication rounds) required by Algorithm 1 for obtaining an ap- Preliminary Results We now state a property from [35,Proposition B.3], and a well-known technical results regarding the optimality condition of (9) (see [36, Lemma 3.1]). Lemma 2. Let h be a smooth and 1-strongly convex regularizer. Then and Then from (13) and (24) it follows that Furthermore, we define We are now ready to show the convergence properties of the proposed method. Proof. With the definitions (27) and (28), (25) becomes By multiplying both sides of the above inequality by ฮธ k , and summing up from i = 1 to N, and k = 1 to t, we obtain that Then by recalling that โˆ†q i,1 = 0, using (13) and (14), we obtain By the definitions of (11) and (28) By using (5) and Assumption 3, we derive This together with (34), and the definitions of Q t and ฮต k in (33) and (18) implies that Then by (26), the following holds with defining ฮธ 0 0. We let By using (15) and the Young's inequality a 2 + b 2 โ‰ฅ 2 a b , we derive Term 2 โ‰ฅ 0. Also, we obtain Term 1 โ‰ฅ โˆ’2ฮธ k ฮฑ 2 k ฮป 2 k ฮถ k โˆ’ ฮถ kโˆ’1 2 . Then by substituting the bounds of Term 1 and Term 2 into (37), we have This together with (33) implies that Similarly to (34), we have that Similarly to (35), by using Assumption 3, (3), and (5), we have that Therefore, , where the last inequality follows by the Young' s inequality. This incorporating with (39) produces . By recalling the definition (29) and rearranging the terms, we prove the lemma. Proof of Proposition 1 Since x k is adapted to F k , from (12)it follows that for any k โ‰ฅ 1. Therefore, By noting that In addition, by (12) we have Then by taking unconditional expectations on both sides of (30), using (41) and (42), we obtain that Proof of Proposition 2. Since v i,1 = x i,1 and W(k) is doubly stochastic, similarly to [25, Lemma 2], we can show by induction that Akin to [25,Eqn. (16)], we obtain the following bound. Then by using (2) and v i,1 = x i,1 , we obtain that This combined with (5) proves =C. By (5) and (47), we have that ฯƒ(x k ) โ‰ค M H for any k โ‰ฅ 0. Thus by (49), we obtain that for each i โˆˆ N, Then by using Assumption 3(b) and (3), we obtain that for each j โˆˆ N and any s โ‰ฅ 0 : โ‰ค NCL f j + max xโˆˆX ฯ† j (x) (20) = C j . By applying the optimality condition to (9), using the definitions (7) and (11), we have that By setting x i = x i,k in (51), rearranging the terms, and using the assumption that h is 1-strongly convex, we obtain Then from (50) it follows that This together with (5) and (48) produces (19). Proof of Theorem 1 By substituting (49) into the definition of ฮต k in (18), we obtain that With the selection of parameters, similar to the proof of [32,Corollary 3.4], we can verify that (14), (15), and (16) hold. 
Furthermore, by c 0 โ‰ฅ 4 and the simple calculations we obtain This incorporating with (17) and (52) implies that In the following, we establish an upper bound of t k=1 Hence for any t โ‰ฅ 1, Similarly to (56), we have Note by c 0 โ‰ฅ 4 that for any โ‰ฅ 1, By (12), we obtain that This together with (19) produces Then by using (55) and (56), we obtain that Similarly, by using (55) and (57), we obtain that By recalling the definition of ฮต k in (18), we obtain that where the last equality follows from the definitions of c 1 and c 2 in (22) and (23). This together with (53) and Then the result follows immediately. 2 Numerical Simulations In this section, we validate the algorithm performance through numerical simulation on the Nash-Cournot games (see e.g., [21,14]). There is a collection of N factories denoted by N = {1, . . . , N} competing over l markets denoted by L {1, ยท ยท ยท , L}, where each factory i needs to decide its production x i,l at markets l. Then, the cost of production of factory i is defined as , c i > 0 is parameter for factory i, and ฮพ i is a random disturbance or noise with zero-mean and bounded variance. The income of factory i is p l (S l ; ฮถ l ) x i , where S l = N i=1 x i,l denote the aggregate products of all factories delivered to market l. By the law of supply and demand, the price function p l can be represented by the reverse demand function and is defined as p l (S l ; ฮถ l ) = d l + ฮถ l โˆ’ b l S l , where d l > 0, b l > 0, and ฮถ l is zero-mean random disturbance or noise. Then, the fac- . . , S l } and ฮถ = col {ฮถ 1 , ยท ยท ยท , ฮถ l } . Finally, factory i's local optimization problem is min while satisfying a finite capacity constraint X i . It is straightforward to verify that the aforementioned Nashcournot example satisfy the aggregative game formulation (1) with (3), F i (x i , z) = c i โˆ’ d + B (z + x i ) and ฯ† i (x) = c i โˆ’d+B N i=1 x i + x i . We can verify that Assumptions 2, 3 and 4 hold when the random variables ฮพ i , ฮถ l , i โˆˆ N, l โˆˆ L are zero mean with bounded variance. In addition, we set h(x) = 1 2 x 2 2 , then the Bregman divergence becomes D h (x, y) = x โˆ’ y 2 2 . 9 Then by Set N = 20, l = 3, and let the communication among the factories be described by an undirected time-varying graph. The graph at each iteration is randomly drawn from a set of four graphs, whose union graph is connected. Set the adjacency matrix W = w i j , where w i j = 1 max{|N i |,|N j |} for any i j with (i, j) โˆˆ E, w ii = 1 โˆ’ j i w i j , and w i j = 0, otherwise. Let c i is drawn from the uniform distribution U [3,4]. The pricing parameters d l , b l of market l โˆˆ L are derived from uniform distributions U[10, 10.5] and U[0.5, 1], respectively. The capacity constraint for each x i,l is the same as x i,l โˆˆ [2, 10], โˆ€i โˆˆ N, l โˆˆ L. After fixing the problem parameters c i and d l , let the random variables ฮพ i , i โˆˆ N and ฮถ l , l โˆˆ L be randomly and uniformly from U [โˆ’c i /8, c i /8] and U [โˆ’d l /8, d l /8], respectively. We implement Algorithm 1 and display the empirical results in Fig.1 by averaging of 20 sampling trajectories with the same initial points, with ฮฑ k and ฮป k choosing as in Theorem 1. Besides, we compare our algorithm with the projected gradient method [25] and extra-gradient method [30] when applied to the stochastic aggregative game considered in this work, but the network aggregate value is still estimated with the dynamical average tracking. 
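For concreteness, the sketch below evaluates the Nash-Cournot pseudo-gradient φ_i(x) = c_i − d + B(σ(x) + x_i) stated above, with parameters drawn from the distributions described in the text; the random seed, array layout, and helper names are illustrative choices rather than the authors' implementation.

```python
# Sketch of the Nash-Cournot pseudo-gradient phi_i(x) = c_i - d + B(sigma(x) + x_i),
# with B = diag(b). Parameter ranges follow the text; the seed and layout are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, L = 20, 3
c = rng.uniform(3, 4, size=(N, L))        # production-cost parameters c_i
d = rng.uniform(10, 10.5, size=L)         # price intercepts d_l
b = rng.uniform(0.5, 1.0, size=L)         # price slopes b_l

def pseudo_gradient(x):
    """x: N x L array of strategies; returns the N x L stack of phi_i(x)."""
    sigma = x.sum(axis=0)                 # aggregate supply per market
    return c - d + b * (sigma + x)        # broadcasting applies diag(b) to each row

x0 = rng.uniform(2, 10, size=(N, L))      # capacity constraint X_i = [2, 10]^L
print(pseudo_gradient(x0).shape)          # (20, 3)
```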
The projected gradient method (PGA) requires a single projection per iteration, given by x_i^{k+1} = P_{X_i}[x_i^k − α_k q_i(x_{i,k}, Nv_{i,k+1}; ξ_{i,k})], while the extra-gradient method (Extra-G) consists of two steps, requiring two projection evaluations and two gradient samples at each iteration: (extrapolation) x_i^{k+1/2} = P_{X_i}[x_i^k − α_k q_i(x_{i,k}, Nv_{i,k+1}; ξ_{i,k})], (update) x_i^{k+1} = P_{X_i}[x_i^k − α_k q_i(x_i^{k+1/2}, Nv_{i,k+1}; ξ_{i,k})]. Fig. 1 displays the convergence of the three algorithms and shows the superior convergence speed of Algorithm 1 compared to the projected gradient method. Although the convergence speed of Algorithm 1 is almost the same as that of the extra-gradient method, the advantage of our method is that it only requires a single projection and one sampled gradient per iteration, greatly reducing the computational and sampling cost. Conclusions This paper proposes a distributed operator extrapolation method for stochastic aggregative games based on mirror descent, and shows that the proposed method can achieve the optimal convergence rate for the class of strongly monotone games. In addition, empirical results demonstrate that our method indeed brings speed-ups. It is of interest to explore the algorithm's convergence for monotone games, and to extend the operator extrapolation method to other classes of network games in distributed and stochastic settings.
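For completeness, the two baselines compared in the simulation section can be sketched in the same schematic style as the earlier OE iteration; both reuse the tracked aggregate Nv_{i,k+1}, and the function names and step size are again illustrative placeholders rather than the original implementations.

```python
# Schematic sketches of the two baselines described above, per player i, reusing the
# placeholder interface of the earlier OE sketch (project = P_{X_i}, sample_grad = q_i).
def pga_step(x_i, tracked_aggregate, sample_grad, project, alpha):
    """Projected gradient (PGA): one projection and one gradient sample per iteration."""
    return project(x_i - alpha * sample_grad(x_i, tracked_aggregate))

def extra_gradient_step(x_i, tracked_aggregate, sample_grad, project, alpha):
    """Extra-gradient (Extra-G): two projections and two gradient samples per iteration."""
    x_half = project(x_i - alpha * sample_grad(x_i, tracked_aggregate))    # extrapolation
    return project(x_i - alpha * sample_grad(x_half, tracked_aggregate))   # update
```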
The global research and emerging trends in autophagy of pancreatic cancer: A bibliometric and visualized study Objective To present the global research features and hotspots, and forecast the emerging trends by conducting a bibliometric analysis based on literature related to autophagy of pancreatic cancer from 2011 to 2022. Methods The literature data regarding autophagy of pancreatic cancer were retrieved and downloaded from the Web of Science Core Collection (WOSCC) from Clarivate Analytics on June 10th, 2022. VOSviewer (version 1.6.18) was used to perform the bibliometric analysis. Results A total of 616 studies written by 3993 authors, covered 45 countries and 871 organizations, published in 263 journals and co-cited 28152 references from 2719 journals. China (n=260, 42.2%) and the United States (n=211, 34.3%) were the most frequent publishers and collaborated closely. However, publications from China had a low average number of citations (25.35 times per paper). The output of University of Texas MD Anderson Cancer Center ranked the first with 26 papers (accounting for 4.2% of the total publications). Cancers (n=23, 3.7%; Impact Factor = 6.639) published most papers in this field and was very pleasure to accept related researches. Daolin Tang and Rui Kang published the most papers (n=18, respectively). The research hotspots mainly focused on the mechanisms of autophagy in tumor onset and progression, the role of autophagy in tumor apoptosis, and autophagy-related drugs in treating pancreatic cancer (especially combined therapy). The emerging topics were chemotherapy resistance mediated by autophagy, tumor microenvironment related to autophagy, autophagy-depended epithelial-mesenchymal transition (EMT), mitophagy, and the role of autophagy in tumor invasion. Conclusion Attention has been increasing in autophagy of pancreatic cancer over the past 12 years. Our results undoubtedly provide scholars with new clues and ideas in this field. Introduction Pancreatic cancer remains the most aggressive and fatal among all malignancies, with a dismal 5-year relative survival rates of only 11%. Approximately 62,210 new pancreatic cancer cases are expected in the US in 2022 (1). Pancreatic ductal adenocarcinoma (PDAC) is the majority (90%) of pancreatic cancers. Most patients with pancreatic cancer are not suitable for curative surgery because of an advanced or metastatic stage at the time of diagnosis (2). Over the past decade, even the most advanced diagnostic tools, perioperative management, and systemic anti-tumor therapy for advanced disease have been developed but only modest improvements in patient outcomes (3). Therefore, early diagnosis, mechanisms of tumorigenesis, and anti-tumor strategies of pancreatic cancer have always been research hotspots. Autophagy is an evolutionarily conserved catabolic mechanism that damaged organelles, aggregated proteins, cytoplasmic macromolecules, or pathogen are delivered to lysosomes for degradation, providing macromolecular precursors and energy, and ultimately recycled back into the cytosol for reuse (4)(5)(6)(7)(8)(9)(10)(11)(12). Based on diverse cellular functions, autophagy broadly encompasses three types: macroautophagy, microautophagy, and chaperone-mediated autophagy (13). Macroautophagy is the main autophagy process (hereafter autophagy) in which the autophagosome is newly formed by a double-membrane vesicle to sequester a variety of cellular cargo and transport this autophagic material to lysosomes for subsequent degradation (14, 15). 
Autophagy can be selective and non-selective depending on the way of sequestration of degradation targets. Non-selective autophagy is responsible for randomly engulfing cytoplasmic components into phagophores (the precursors to autophagosomes), whereas selective autophagy identifies and removes specific components. Selective autophagic degradation processes include mitophagy for damaged and/or superfluous mitochondria, aggrephagy for protein aggregates, ferritinophagy for the iron-sequestering protein ferritin, xenophagy for intracellular pathogens and the like (16)(17)(18)(19). By contrast, microautophagy is responsible for directly engulfing cellular cargo by lysosomes (20). Finally, chaperone-mediated autophagy involves the direct translocation of specific cytosolic proteins (and possibly DNA and RNA) across the lysosomal membrane with the assistance of HSC70 and other cochaperones (21,22). In virtually all eukaryotic cells, autophagy occurs at a low basal level in physiological condition to maintain cellular homeostasis or regulate cellular functions (23,24). Given the catabolic degradation function of autophagy, it is not surprising that dysregulation of autophagy has been associated with numerous human diseases, including cancers, neurodegenerative diseases, autoimmune disorders, and inflammatory diseases (25, 26). A total of 18,881 autophagyrelated articles were published before 2019 and relevant research has dramatically risen in the past decade (27). Among which, the relationship between autophagy and cancer is one of the research hotspots. In 2011, Yang et al. reported that pancreatic cancers have a distinct dependence on autophagy (28). Then, hundreds of research articles have been published on autophagy of pancreatic cancer. Thus, it is urgently needed to collect and analyze the vast quantities of literatures on this topic. Bibliometrics is a quantitative science based on large volumes of literatures. It can use of mathematical and statistical methods to comprehensively analyze the authors, keywords, journals, countries, institutions, citations, and their associations of selected publications, thus providing an objective evaluation of the dynamics and emerging trends in a research field or discipline (29). The visualization of bibliometric analysis can demonstrate the results in different forms and contribute to data interpretation, which make the results more intuitive and comprehensive (29,30). This method has been widely used to assess various research domains, including medicine (31)(32)(33). Previous bibliometric studies has focused on the research of autophagy (27), mitophagy (34), pancreatic cancer (35), tumor microenvironment of pancreatic cancer (36), and pancreatic neuroendocrine tumors (37). As a novel perspective, we conducted a bibliometric analysis based on literature related to autophagy of pancreatic cancer from 2011 to 2022. This study aims to present the global research features and hotspots, and forecast the emerging trends in that field, which may provide researchers with new clues and ideas in the field of autophagy and pancreatic cancer. Data screening and collection Web of Science Core Collection (WOSCC) is the most frequently used database in bibliometric analysis (31). We retrieved and downloaded literature data in the WOSCC from Clarivate Analytics on June 10th, 2022. Primary search terms were "pancreatic cancer", "pancreatic carcinoma", "pancreatic ductal adenocarcinoma" and "autophagy" and detailed search strategy is provided in Supplemental File S1. 
The retrieval time was set from January 1st, 2011 to June 9th, 2022. The language was limited to English and the literature type we searched for was restricted to article or review article. Two authors (MY S and Q L) independently screened the search results and removed the paper that did not related to autophagy of pancreatic cancer by reading the title, abstract, and if necessary, the whole article. Different viewpoints would be resolved by negotiation or reviewed by an experienced corresponding author (XL O). The literature data was finally exported with the record content of "Full Record and Cited References" and downloaded in plain text format. Data analysis and visualization We used VOSviewer (version 1.6.18) to perform the bibliometric analysis based on the literature data. The annual output of publications related to autophagy of pancreatic cancer was plotted using GraphPad Prism (version 6.0.4). VOSviewer is a free JAVA-based computer program developed by Van Eck and Waltman, which is used for constructing and generating bibliometric maps visually. It provides a variety of easy-to-interpret visualization maps, including network visualization, overlay visualization, and density visualization (38). In VOSviewer, the co-authorship network map of countries/organizations/authors, the overlay visualization map of the citation analysis of sources, the density map of the co-citation analysis of cited authors and the co-occurrence analysis of all keywords were built. The data analyzing flow chart can be seen from Figure 1. Publication outputs and trend According to our search strategy, a total of 616 publications on autophagy of pancreatic cancer were remained for bibliometric analysis, including 479 articles (77.8%) and 137 reviews (22.2%). The annual number of publications on the autophagy of pancreatic cancer from 2011 to 2022 (June 9th, 2022) is presented in Figure 2. Generally, the number of publications increased year by year and it dramatically raised from 11 in 2011 to 101 (including 66 articles and 35 reviews) in FIGURE 1 Flow chart of the data collection and analysis for research on autophagy of pancreatic cancer. Countries and organizations All included publications in the field covered 45 countries and 871 organizations. The output of China ranked the first with 260 (accounting for 42.2% of the total publications), followed by the United States (n=211, 34.3%), Italy (n=40, 6.5%), Germany (n=37, 6.0%) and Japan (n=37, 6.0%) (Table 1). However, among the top 10 countries, publications from China had a low average number of citations (25.35 times per paper), while the United States (63.9 times) was in first place by the average number of citations, followed by Italy (62.4 times), Germany (49.08 times), England (39.95 times) and France (39.18 times). Besides, a co-authorship network map of countries was built ( Figure 3A) as the cooperation between different countries can be considered as a measure of international cooperation. Only the countries with a minimum of five publications were included and 25 countries were subsequently identified. China collaborated closely with the United States. The United States cooperated with 23 countries, ranked first, followed by Germany (n=12), Spain (n=11), China (n=10), Italy (n=10), and England (n=10). The top 11 active organizations based on publication number were listed in Table 2. The production from these organizations ranged 12 to 26 publications, accounting for 28.7% (177/616) of the total publications. 
Organizations from China and the United States account for 6 and 5 respectively. University of Texas MD Anderson Cancer Center contributed the most publications (n=26, 4.2%) with 2657 citations, followed by Fudan University (n=18, 2.9%) with 645 The annual output of autophagy and pancreatic cancer from 2011 to 2022. . Five documents were set as a minimum for each organization to be analyzed; therefore, 58 of 871 organizations were included for network analysis ( Figure 3B). The cooperation between organizations was a little stronger than that between countries based on the total link strength. Analysis of journals and co-cited journals A total of 263 academic journals published the 616 publications on autophagy of pancreatic cancer between 2011 to 2022. As is displayed in Table 3, the top 12 most frequent journals were distributed 153 papers, accounting 24.8% for all the obtained publications. The most productive journal has been Cancers with 23 papers (3.7% of the total), followed by Oncotarget Among 2719 co-cited journals, 14 journals had citations more than 500. As is shown in Table 4, Nature had the most cocitations up to 1415 times, followed by Cancer Research (1267 times), Autophagy (1238 times), and Cell (1116 times). Among the top 10 co-cited journals, 80% (8/10) had an IF of more than ten, 90% (9/10) were at the Q1 JCR division, and 80% (8/10) were from the United States. Analysis of authors and co-cited authors A total of 3,993 authors contributed the 616 included publications. The top author is defined as one who has published at least 5 papers and received over 600 citations. Finally, ten top authors were identified (Table 5). By the number of papers, Daolin Tang and Rui Kang published the most papers (n=18, respectively), followed by Alec C Kimmelman (n=16), Michael T Lotze (n=9), and Haoqiang Ying (n=7). Papers published by Alec C Kimmelman who comes from New York University had the highest total number of citations (3288 times). Notably, top authors all come from the United States. The authors (n=142) who had published with a minimum of three publications were entered into co-authorship network analysis of authors ( Figure 5A). There were strong collaborations among authors who were in the same cluster/ color, such as Daolin Tang, Rui Kang, and Michael T Lotze. However, sparse connection was observed among different clusters, indicating little cooperation between research groups. A total of 20,319 authors were co-cited at least once. There were 50 authors who had been co-cited with a minimum of 40 times. They were included to make the density visualization which can intuitively display the most co-cited authors ( Figure 5B). Specifically, SH Yang (n=227) was the most frequent co-cited authors, followed by N Mizushima (n=222) and RL Siegel (n=176). The remaining seven top authors were co-cited from 104 to 140 (Table 5). Analysis of papers and co-cited references Among the 616 papers in our study, 100 papers were cited more than 50 times. The most cited papers were summarized in Table 6. The results showed a total of 28,152 references were co-cited from 1 to 197. As is shown in Table 7, the most co-cited paper in the field of autophagy and pancreatic cancer by Shenghong Yang et al. require autophagy for tumor growth", indicating a wide influence and a highly proven peer recognition in the field. The overlay visualization map of the 69 keywords is showed in Figure 6B. The research focus can be intuitively observed by the evolution of high-frequency keywords over time. 
The yellow nodes represented the emerging keywords near 2019. Among which, the most co-occurrence keywords were resistance (n=49 co-occurrences), followed by tumor microenvironment (n=23), epithelial-mesenchymal transition (EMT) (n=22), mitophagy (n=21), and invasion (n=21). These keywords may become the future research hotspots in the field of autophagy and pancreatic cancer. Discussion In this study, we used VOSviewer software to perform a bibliometric analysis based on the literature related to autophagy of pancreatic cancer in WoSCC database from 2011 to 2022 (June 9th, 2022). A total of 616 studies were written by 3993 authors, covered 45 countries and 871 organizations, published in 263 journals and co-cited 28152 references from 2719 journals. Most of which are original articles (77.8%). An average of 45.70 references each publication were noted. The primary aim of the current study was to explore the global research features and hotspots and forecast the emerging trends which may be helpful to researchers in autophagy of pancreatic cancer field. Overall, the annual publication output has dramatically increased from 11 in 2011 up to 101 in 2021 which reveals that attention has been increasing in autophagy of pancreatic cancer field over the past 12 years. Autophagy plays an important role in tumor pathogenesis and contributes to tumor growth (48,49). The article published in Genes & Development (IF=11.361) by Shenghong Yang et al, in 2011 which confirmed that pancreatic cancers actually require autophagy for tumorgenic growth has been cited and co-cited the most frequently (28), indicating Shenghong Yang is an accomplished scholar in this field and his study is considered as the most fundamental and important study. Besides, it pointed chloroquine and its derivatives are powerful inhibitors of autophagy which could be used to treat pancreatic cancer patients (28). Therefore, more attention on the research of autophagy and pancreatic cancer field will be triggered (50). As far as countries for publication of papers are considered, a bibliometric analysis of autophagy showed that China and the United States were the most productive countries (27). Again, one bibliometric study on mitophagy (34) and the other bibliometric study on pancreatic cancer research (35) arrived the same conclusion. Our results also showed that China and the United States were the most frequent publishers in the field of autophagy and pancreatic cancer. 76.5% of the total publications was contributed by China and the United States, far more than any other country. This phenomenon could be called "Matthew effect". In the network visualization map, extensive cooperation was observed between countries with a minimum of five Mancias) were from this institution, showing the high influence of its published articles. Besides, cooperation between countries were found to be a little sparser than those between agencies, indicating that international cooperation should be strengthened in this field. Notably, University of Texas MD Anderson Cancer Center, the most productive organization, collaborated most closely with many United States universities and research institutions, and also with Universities from China, such as China Medical University, Fudan University, Sun Yat-Sen University, Xi'an Jiaotong University, and Tongji University, showing that the United States and China collaborated closely between organizations. 
When it comes to journals and co-cited journals, our results showed the journals published the most papers related to autophagy of pancreatic cancer were Cancers (n=23), Oncotarget (n=19), Frontiers in Oncology (n=15), Autophagy (n=13), and International Journal of Molecular Sciences (n=13). Among the top 12 journals, 66.7% had an IF of more than five, and 66.7% were at the Q1 JCR division. Nature (n=1415 times), Cancer Research (n=1267 times), Autophagy (n=1238 times), and Cell (n=1116 times) were the most high-frequency co-cited journals. Among the top 10 co-cited journals, 80% had an IF of more than ten, 90% were at the Q1 JCR division. These data indicated many high-quality and high-impact journals were particularly interested in and play a significant role in the field of autophagy and pancreatic cancer. Besides, it is worth noting that Cancers, the most productive journal, was also an emerging journal in recent 3 years, implying this journal was very pleasure to accept the researches in this field. Despite the most productive journals, Frontiers in Cell and Developmental Biology (IF=6.684, Q2), Biomedicine & Pharmacotherapy (IF=6.53, Q1), and Cells (IF=6.6, Q2) were the emerging journals that accepted related papers in recent 3 years. These results will also assist future scholars in selecting journals when submitting manuscripts associated to autophagy of pancreatic cancer. A high citation frequency indicating a wide influence and a highly proven peer recognition in the field. In this bibliometric analysis, the top 10 most-cited papers were as follows (Table 6): Shenghong Yang et al. published "pancreatic cancers require autophagy for tumor growth (28)" in Genes & Development in 2011, which was the most cited paper (957 citations). This study reported that pancreatic cancers have a distinct dependence on autophagy. The second cited paper, "Oncogene ablation-resistant pancreatic cancer cells depend on mitochondrial function", was published by Andrea Viale et al. (39) in Nature in 2014. This study illuminated a therapeutic strategy of combined targeting of the KRAS pathway and mitochondrial respiration to treat pancreatic cancer. The third cited paper, "Autophagy promotes ferroptosis by degradation of ferritin" was published by Wen Hou et al. (40) in Autophagy in 2016. This study found autophagy promotes ferroptosis by degradation of ferritin which provide novel insight into the interplay between autophagy and regulated cell death. The fourth This article reported autophagy plays a central role in pancreatic cancer and showed that autophagy inhibition may have therapeutical effect on pancreatic cancer, independent of p53 status. The tenth cited paper was published by Conan G Kinsey et al. (47) in Nature Medicine in 2019. This article represented trametinib combined with hydroxychloroquine may be a new strategy to treat RASdriven cancers. Besides, the most co-cited papers were listed in Table 7. These most co-cited studies have a major impact on autophagy of pancreatic cancer field. The first, second, third, sixth and seventh co-cited papers are the same as the first, fifth, ninth, fourth and sixth cited paper listed in Table 6. The eighth and ninth co-cited papers are about the epidemiology of cancers. The remaining 4 top co-cited articles are mainly about the role of autophagy in pancreatic cancer. Keywords represent the major topic of papers. 
To explore the global research features and hotspots, we constructed a co-occurrence analysis of all keywords in the field of autophagy and pancreatic cancer by VOSviewer. As autophagy broadly consists of macroautophagy, microautophagy, and chaperone-mediated autophagy, we individually searched for publications concerning the three types. The publication numbers were 18, 0, and 7, respectively. The content of the remaining publications was indistinguishable. Macroautophagy has been studied the most. The keywords appeared over 15 times were clustered into five main categories in the network visualization map ( Figure 6A) which can intuitively show the direction and scope in this field. After reviewing and summarizing relevant researches, we found the keywords in cluster 1 (red) and cluster 5 (purple) mainly focused on the regulation mechanisms of autophagy in pancreatic cancer onset and progression. Among which, expression, growth, and inhibition could represent the research hotspots. In the top cite and co-cited papers, Shenghong Yang et al. reported pancreatic cancers required autophagy for tumor growth in 2011 (28), which is considered as the most fundamental and important study in this field. The other article published in Genes & Development by Jessie Yanxiang Guo et al. in 2011, reported activated oncogene HRAS or KRAS could increase basal autophagy which was essential to maintain human cancer cell survival in starvation and in oncogenesis (51). As KRAS mutation was found in 70~95% of PDAC patients (52), researches on the regulation of autophagy in Ras-expressing pancreatic cancer cells were rapidly increasing. Notably, Mathias T Rosenfeldt et al. showed Inhibition of autophagy promotes cancer onset instead of blocking cancer progression in mouse model with oncogenic KRAS but without p53 (42), suggesting a dual role of autophagy in pancreatic cancer progression (53,54). In the transcriptional program, Rushika M Perera et al. presented MiT/TFE-dependent autophagy-lysosome activation is essential for pancreatic cancer growth, which is a novel hallmark of malignant tumor (43). Besides, Di Malta, C. et al. found transcriptional activation of Rag guanosine triphosphatases could control the mechanistic target of rapamycin complex 1 and regulate anabolic pathways related to nutrient metabolism, leading to excessive cell proliferation and tumor growth (55). Researches have also shown that autophagy supports the growth of pancreatic cancer through both cell-autonomous and nonautonomous pathways (56). These studies provide us insights into the role of autophagy in pancreatic cancer, which may be used to treat this malignant cancer in future. The keywords in cluster 2 (green) were mainly associated with the relationship between autophagy and tumor microenvironment as well as that between autophagy and EMT in pancreatic cancer. Notably, tumor microenvironment and EMT were the emerging keywords in recent years, indicating they may become the future hotspots in the field of autophagy and pancreatic cancer. Hypoxic tumor microenvironment is characterized as a hallmark of pancreatic cancer (57). Increased autophagy flux may mediate the survival of pancreatic cancer stem cells (CSCs) under a hypoxic tumor microenvironment. The inhibition of autophagy converts survival signaling to suicide and finally suppresses cancer development in mouse models (58). Besides, a top cited and cocited paper published in Nature by Cristovรฃo M Sousa et al. 
in 2016, reported PSCs-derived alanine is an alternative fuel source that can sustain the growth of cancer cells in the tumor microenvironment. And alanine release in PSC is dependent on PSC autophagy which is mediated by cancer cells (41). Despite of CSC and PSC, immune cells, endothelial cells, and fibroblasts may also promote tumor progression through the metabolic crosstalk with malignant cells in the tumor microenvironment (59), the role of autophagy in tumor microenvironment needs further study. In terms of EMT, it is a trans-differentiation process in which epithelial cells acquire mesenchymal features that promote the invasion and metastasis of cancers (60). Enhanced autophagy induced by HIF-1 alpha was reported to promote EMT and the metastatic ability of pancreatic CSCs (61). In RAS-mutated pancreatic cancer cells, the inhibition of autophagy activated the SQSTM1/p62-mediated NF-kappa-B pathway, subsequently enhancing EMT which finally promoted cancer invasion (62). This broadens the horizon for the research of the dual role of autophagy in pancreatic cancer. The keywords in cluster 3 (blue) were mainly related to the role of autophagy in the apoptosis of pancreatic cancer cells. Apoptosis represents a type of programmed cell death that can remove the damaged cells orderly and efficiently (63). Targeting apoptosis is a common therapy strategy for PDAC. However, the cancer cells can establish various mechanisms to reduce apoptosis, including autophagy (64). For example, mitochondrial uncoupling protein 2 (UCP2) plays an essential role in tumorigenesis and development. UCP2 induces autophagy through enhancing Beclin 1 and inhibiting the AKT-MTOR pathway, leading to anti-apoptosis effects or inhibiting other types of cell death in a reactive oxygen species (ROS)-dependent mechanism (65), implying an anti-apoptosis role of autophagy. Eicosapentaenoic acid, a common omega-3 fatty acid, can not only induce autophagy but impair its anti-apoptosis ability in pancreatic cancer cells (66). Ubiquitin specific peptidase 22 (USP22) is an epigenetic regulator, it was reported USP22 induced autophagy by activating MAPK1, thereby promoting cell proliferation and gemcitabine resistance in pancreatic cancer cell lines (67). As discussed above, CSCs and PSCs sustain tumor growth depend on autophagy. Studies have reported inhibiting autophagy also triggers apoptosis in CSCs and PSCs (58,68). These studies indicate that chemotherapy combined the regulation of autophagy could be a potential future direction in treating pancreatic cancers. Besides, mitophagy is an emerging keyword in this cluster. It is reported mitophagy involved the cell death and modulation of metabolism in pancreatic cancer. Again, mitophagy plays a double-edged action in the regulation of the antitumor efficacy of certain cytotoxic agents (69). The keywords in cluster 4 (yellow) were mainly about the autophagy regulation in the treatment of pancreatic cancer. Autophagy is an essential catabolic mechanism in pancreatic cancer onset and progression. The inhibitions of autolysosome formation, a lysosomotropic agent named chloroquine (CQ) and a V-ATPase inhibitor named bafilomycin A 1 , were reported to suppress tumorigenic growth of pancreatic cancers alone (28). 
However, a phase II and pharmacodynamic study showed hydroxychloroquine (HCQ, an inhibitor of autophagy) monotherapy did not result in a consistent autophagy inhibition as evaluated by peripheral lymphocytes LC3-II levels and achieved negligible benefits in previously treated patients with metastatic pancreatic cancer (70). The dual role of autophagy in pancreatic cancer makes it difficult to be a therapeutic target alone (54). Therefore, most studies focused on combination therapy for treating pancreatic cancer by inhibiting [e.g., CQ or bafilomycin A 1 (28), DQ661 (71)] or inducing [Quercetin (72), Demethylzeylasteral (73)] autophagy to increase therapeutic efficacy of gemcitabine or other antitumor drugs. As mentioned above, activation of autophagy has led to gemcitabine resistance by inhibiting apoptosis in the treatment of PDAC patients. A recent randomized phase II preoperative study reported resectable pancreatic adenocarcinoma patients treated by gemcitabine and nab-paclitaxel with HCQ resulted in an evidence of autophagy inhibition and immune activity and achieved greater pathologic tumor response and lower CA199 levels than patients treated by gemcitabine and nab-paclitaxel alone (74). Alternatively, autophagy induction may result in an antitumor efficacy through autophagymediated metabolic stress or injury. For instance, combined therapy with Demethylzeylasteral and gemcitabine induces autophagic cell death and demethylzeylasteral could increases the chemosensitivity to gemcitabine in treating pancreatic cancer (73). These results suggest autophagy-related drugs play a complex role in pancreatic cancer chemotherapy. As discussed above, the current research related to autophagy of pancreatic cancer mainly about basic research and clinical application. The focus of scholars has gradually switched from basic research to clinical application. The hot topics in current research have always been the mechanisms of autophagy in tumor onset and progression, the role of autophagy in tumor apoptosis, and autophagy-related drugs in treating pancreatic cancer (especially combined therapy). The emerging topics mainly focused on chemotherapy resistance mediated by autophagy, tumor microenvironment related to autophagy, autophagy-depended EMT, mitophagy, and the role of autophagy in tumor invasion, that may become the main future direction in the field of autophagy and pancreatic cancer. Our study first conducted a bibliometric analysis related to autophagy of pancreatic cancer, providing an objective and intuitive evaluation of the research features and hotspots and forecasting the emerging trends in that field. Admittedly, this study has some limitations. First, we collected the literature data only from WOSCC database and the related papers from other sources may be neglected. Secondly, the literature language was limited to English, that may result in the source of bias. Thirdly, since the total number of citations depends on various factors (e.g., time of publication, journal, research area), the number of citations may not accurately represent the impact of a paper, and some recent landmark papers may have been omitted. Conclusion This study showed research activities were multiplying in the field of autophagy and pancreatic cancer. China and the United States were the most frequent publishers and collaborated closely in this field. Cancers published most papers in this field and was very pleasure to accept the related researches. 
We have also listed the most cited and co-cited papers and authors. Importantly, the mechanisms of autophagy in tumor onset and progression, the role of autophagy in tumor apoptosis, and autophagy-related drugs in treating pancreatic cancer (especially combined therapy) were the research hotspots. The emerging topics were chemotherapy resistance mediated by autophagy, the tumor microenvironment in relation to autophagy, autophagy-dependent EMT, mitophagy, and the role of autophagy in tumor invasion. These results provide scholars with new clues and ideas in the field of autophagy and pancreatic cancer. Data availability statement The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.
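As an aside on the keyword clusters discussed above: clusters of this kind are typically produced by building a co-occurrence network of author keywords and partitioning it into communities. The excerpt does not name the software the authors used, so the following Python sketch is purely illustrative; the publication records and keyword lists are invented, and networkx's greedy modularity routine stands in for whatever clustering tool was actually applied.

```python
# Illustrative sketch only: keyword co-occurrence clustering of the kind that
# underlies the keyword "clusters" discussed above. Records are invented; the
# paper's actual tool chain is not specified in this excerpt.
from collections import Counter
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

records = [  # hypothetical author-keyword lists, one per publication
    ["autophagy", "pancreatic cancer", "apoptosis"],
    ["autophagy", "chemoresistance", "gemcitabine"],
    ["autophagy", "pancreatic cancer", "tumor microenvironment"],
    ["mitophagy", "apoptosis", "pancreatic cancer"],
]

# Count how often each pair of keywords appears in the same record.
pair_counts = Counter()
for kws in records:
    for a, b in combinations(sorted(set(kws)), 2):
        pair_counts[(a, b)] += 1

# Build a weighted co-occurrence network and partition it into clusters.
G = nx.Graph()
for (a, b), w in pair_counts.items():
    G.add_edge(a, b, weight=w)

clusters = greedy_modularity_communities(G, weight="weight")
for i, cluster in enumerate(clusters, start=1):
    print(f"cluster {i}: {sorted(cluster)}")
```

In a real analysis the records would come from the exported WOSCC bibliography, and the resulting communities would correspond to the colored clusters described above.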
Feasibility Analysis of Mutual Benefit Cooperation between Environment-Embedded Art Design Education and Local SMEs Development Based on Improved Grey Analysis With the continuous progress of the economic era, both art and design education and local small and medium-sized enterprises (SMEs) face a crisis of survival and the pressure of competition, pushing the two to join hands to withstand this crisis. The protection of the ecological environment affects not only people's lives but also artistic design and creation. This paper adopts the methods of correlation degree and correlation coefficient to construct a feasibility analysis model, based on improved grey analysis, of mutually beneficial cooperation between environment-embedded art and design education and local SME development, with the aim of helping art and design education and local small and medium-sized enterprises continue to develop. The research results of this paper show that: (1) The accuracy of the model in this paper rises overall, with a highest accuracy rate of 96.2%; the highest accuracy rate of the improved neural network model is 87.1%; the highest accuracy rate of the random forest algorithm is 86.3%; and the highest accuracy rate of the traditional model is 80.3%. (2) The recall rate of the traditional model is between 0.0816 and 0.0984; the recall rate of the random forest algorithm is between 0.726 and 0.983; the recall rate of the improved neural network is between 0.752 and 0.961; and the recall rate of the model in this paper is between 0.615 and 0.815. (3) The overall static payback period decreases year by year, and the overall rate of return increases year by year, which shows that cooperation between art and design education and enterprises can bring higher benefits to enterprises. (4) After the two parties began cooperating in 2016, the various indicators rose significantly. The highest net present value is 92.55 million yuan; the highest profit index is 1.98; the highest net present value rate is 35.8%; and the highest internal rate of return is 79.2%. (5) After school 2 began cooperating with enterprises, its employment rate increased year by year, with a highest employment rate of 88.3%. In contrast, the annual employment rate of school 1, which does not cooperate with enterprises, is irregular. (6) The percentages of environmental indicators such as total emission reduction, environmental quality, and pollution control have all increased, and resource consumption has decreased by 28%; public satisfaction with the results of environmental protection has also reached 90%. (7) The average evaluation of each index is above 8 points; the highest score for completeness is 8.5, the highest score for feasibility is 8.9, the highest score for recognition is 9.2, and the highest score for practicality is 8.7. Introduction As the economy continues to develop and improve, both art and design education and local SMEs are struggling. Art and design education promotes the development of local small and medium-sized enterprises, and local small and medium-sized enterprises in turn provide a guarantee for art and design education; the two promote and cooperate with each other. Human beings are both creators and shapers of the environment, yet in many areas signs of large-scale pollution are visible, and a polluted environment is incompatible with artistic beauty.
Protecting the environment and promoting environmental awareness is also an obligation of the art designer. This paper adopts the methods of correlation degree and correlation coefficient to construct a feasibility analysis model, based on improved grey analysis, of mutually beneficial cooperation between environment-embedded art and design education and local SME development, in order to better analyze these two possibilities and achieve the ideal goal of mutual benefit. This paper has received substantial support from the research results obtained so far. Grey analysis is a method of multivariate statistical analysis [1]. The concept of grey is associated with the white system and the black system [2]. Grey relational analysis can be used not only for relational analysis but also for evaluation [3]. Grey analysis reflects the degree of correlation between curves [4]. The grey analysis method measures the correlation between factors according to the similarity or dissimilarity of their development tendencies [5]. Generally speaking, grey analysis can be used to analyze the degree of influence of various factors on the results [6]. Grey system theory is a systems science theory developed by Professor Deng Julong [7]. The applications of grey analysis span various fields of the social and natural sciences [8]. Correlation is divided into absolute correlation and relative correlation [9]. Art and design education can improve people's awareness and understanding of beauty [10]. Its basic purpose is to develop well-rounded people [11]. Feasibility analysis is a comprehensive system analysis method that provides the basis for project decision-making [12]. Feasibility analysis is predictive, fair, reliable, and scientific [13]. Feasibility analysis is an important activity at the beginning of a project [14]. It is very important for the entire national economy [15]. Grey Analysis 2.1.1. Concept. Grey analysis [16] is based on grey system theory for dealing with complex systems in research, and uses a series of methods such as grey generation, grey correlation analysis, grey cluster analysis, and grey prediction to maximize the use of the collected information and to choose a reasonable generation method. The actual samples of each clustered object are analyzed qualitatively and quantitatively by means of whitening functions, dimensional abstraction, spatial series curve fitting, and GM modeling prediction. The basic idea of grey analysis is relative ranking analysis: the similarity of the geometric shapes of the sequence curves is used to judge whether they are closely related. It is also a method for quantitatively describing and comparing the state of system development. Grey Analysis Method. The purpose of grey analysis is to quantitatively characterize the degree of correlation between various factors, to find the main relationships among factors in the system, to identify the important factors that affect the development of the system, and to grasp its main characteristics. Data Transformation. The physical meaning of each element in the system differs, or the measurement unit differs, so the dimensions of the data differ, and sometimes the magnitudes of the values differ greatly. If the dimensions and the orders of magnitude differ, comparison is inconvenient, or it is difficult to obtain correct results. To facilitate analysis, the raw data must be processed before the elements are compared [17]. (1) Initial value processing.
Dividing all data of a sequence by its first value to obtain a new sequence is called initializing. The new sequence shows the values of the original sequence at different times as multiples of the first value; it has a common starting point, is dimensionless, and all of its data are greater than 0. With the original sequence x^(0) = (x^(0)(1), x^(0)(2), ..., x^(0)(n)), initializing x^(0)(i) gives x^(1)(i) = x^(0)(i) / x^(0)(1), i = 1, 2, ..., n. (2) Average processing. Dividing all data of a series by its average value to obtain a new series is called mean processing. The new series indicates the values of the original series at different times as multiples of the mean. With the original sequence x^(0) and mean value x̄^(0) = (1/n) Σ_{i=1}^{n} x^(0)(i), mean processing of x^(0)(i) gives x^(1)(i) = x^(0)(i) / x̄^(0). Correlation Coefficient. The degree of correlation between systems or factors is judged by how closely related they are according to the degree of geometric similarity between their curves. Therefore, a measure of the degree of similarity between curves can be used as a measure of the degree of association [18]. Let the parent factor sequence x_0(i) and the child factor sequences x_j(i) be given for i = 1, 2, ..., n and j = 1, 2, ..., m. The correlation coefficient ξ_0j(i) between x_0(i) and x_j(i) can be expressed by the relation ξ_0j(i) = (Δmin + ρ·Δmax) / (Δ_0j(i) + ρ·Δmax), where Δ_0j(i) = |x_0(i) − x_j(i)| and ρ is the resolution factor; ξ_0j(i) is called the correlation coefficient of x_0 to x_j at time i. The two-level minimum absolute difference is recorded as Δmin = min_j min_i Δ_0j(i), and the two-level maximum absolute difference as Δmax = max_j max_i Δ_0j(i). Relevance. A measure of the magnitude of the correlation between two systems or two factors is called the degree of correlation [19]. The degree of correlation represents the relative change between factors during the system development process, that is, the relativity of the magnitude, direction, and speed of the change. The correlation between two factors is considered large if their relative changes during development are substantially the same; otherwise, the correlation between them is smaller. The degree of association is denoted r_0j, and its expression is r_0j = (1/N) Σ_{i=1}^{N} ξ_0j(i), where r_0j represents the degree of correlation between the child sequence j and the parent sequence 0 [20], and N represents the length of the sequence, that is, the number of data points [21].
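To make the reconstructed formulas above concrete, here is a minimal Python sketch of the initialization step, the correlation coefficients ξ_0j(i), and the degrees of correlation r_0j, with the resolution factor taken as ρ = 0.5. The data series are invented for illustration only; this is a sketch of the standard calculation, not the authors' code.

```python
import numpy as np

def grey_relational_degree(parent, children, rho=0.5, initialize=True):
    """Deng-style grey relational analysis (illustrative sketch).

    parent:   reference (parent) sequence x0, shape (N,)
    children: comparison (child) sequences xj, shape (m, N)
    rho:      resolution factor in [0, 1]
    Returns the degree of correlation r_0j of each child sequence to the parent.
    """
    x0 = np.asarray(parent, dtype=float)
    xj = np.asarray(children, dtype=float)
    if initialize:                       # initial-value processing: divide by the first element
        x0 = x0 / x0[0]
        xj = xj / xj[:, [0]]
    delta = np.abs(xj - x0)              # absolute differences at each time point
    d_min, d_max = delta.min(), delta.max()          # two-level min and max differences
    xi = (d_min + rho * d_max) / (delta + rho * d_max)   # correlation coefficients xi_0j(i)
    return xi.mean(axis=1)               # degree of correlation r_0j = mean over i

# Invented example: one parent series and three child series.
parent = [2.0, 2.4, 3.0, 3.3, 3.9]
children = [[1.0, 1.3, 1.6, 1.7, 2.0],
            [4.0, 4.1, 4.2, 4.4, 4.3],
            [2.1, 2.3, 2.9, 3.5, 3.8]]
print(grey_relational_degree(parent, children))
```

A child series whose shape tracks the parent series closely receives a degree of correlation near 1, while a series with a very different development tendency receives a smaller value.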
(1) Deng's correlation degree. The degree of correlation defined above, with resolution factor ρ ∈ [0, 1], is Deng's correlation degree. (2) Grey absolute correlation degree. The absolute correlation degree is obtained by considering the polygonal change of each curve relative to its starting point, that is, by moving the starting point of each data series curve to the origin and examining the proximity of the resulting curves. (3) Relative degree of correlation. Let X_0 and X_i have the same length, with initial values not equal to zero [22]. If X_0′ and X_i′ are the initial-value images of X_0 and X_i, respectively, then the grey absolute correlation degree of X_0′ and X_i′ is called the grey relative correlation degree of X_0 and X_i, referred to as the relative correlation degree and denoted r_0i. The relative degree of correlation is thus the grey absolute correlation degree of the initialized sequences, calculated after initializing the original sequences. The grey absolute correlation reflects the proximity of the absolute values of the sequence polygons, whereas the grey relative correlation reflects the proximity of the rates of change of the sequence polygons relative to their starting points. The grey relative and grey absolute correlation degrees are calculated in the same way, so the range of applicable sequences is also the same, but the reference starting points of the two measures differ. (4) Slope correlation. The slope correlation mainly considers the closeness of the slopes of the data series curves, so the internal slope of each data series needs to be calculated; since the internal slope of an indicator series cannot be calculated, this measure is suitable only for time series. Grey Analysis Model. (1) The dimensionless transformation of the evaluation factors is aimed at eliminating incommensurability. Given m evaluation factors x_1, x_2, ..., x_m and n objects participating in the evaluation, the original data matrix x_ij is obtained, from which the dimensionless evaluation factors can be derived. (2) Find the absolute differences and their maximum value z_j max′ and minimum value z_j min′, where j = 1, 2, ..., n. (5) Find the correlation degree r_i of each factor: the larger r_i is, the more important the index x_i is among the evaluation factors, reflecting the closeness and influence between the comparison sequence and the reference sequence. (6) Calculate the weight of each item, w_i = r_i / Σ_k r_k; from this, the weight of each evaluation factor is obtained. The procedure is shown in Figure 1: the grey analysis model first initializes the original measurement data, then calculates the absolute differences, the two-level maximum difference and the two-level minimum difference, then the correlation coefficients, and finally the correlation degrees, and prints the association matrix.
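As an illustration of the evaluation workflow just summarized, the following Python sketch chains the steps together: dimensionless processing, two-level extreme differences, correlation coefficients, correlation degrees, and weights w_i = r_i / Σ r_k. Because the excerpt does not fully specify the dimensionless transform or the reference sequence, mean processing and a column-wise maximum ("ideal object") reference are assumed here, and the data values are invented.

```python
import numpy as np

def evaluation_weights(data, reference=None, rho=0.5):
    """Sketch of the grey evaluation model's weighting step (steps 1-6 above).

    data:      matrix of n objects x m evaluation factors (invented values below)
    reference: reference sequence; assumed here to be the column-wise maximum
               (an "ideal object"), since the excerpt does not specify it.
    Returns the weight w_i of each evaluation factor, w_i = r_i / sum(r).
    """
    X = np.asarray(data, dtype=float)
    X = X / X.mean(axis=0)                      # mean processing, assumed dimensionless transform
    if reference is None:
        reference = X.max(axis=0)               # assumed ideal reference sequence
    delta = np.abs(X - reference)               # absolute differences
    d_min, d_max = delta.min(), delta.max()     # two-level minimum and maximum differences
    xi = (d_min + rho * d_max) / (delta + rho * d_max)   # correlation coefficients
    r = xi.mean(axis=0)                         # correlation degree r_i of each factor
    return r / r.sum()                          # normalized weights w_i

scores = [[8.2, 7.9, 9.0],                      # invented evaluation data:
          [7.5, 8.4, 8.1],                      # rows = objects, columns = factors
          [9.1, 8.0, 8.6]]
print(evaluation_weights(scores))
```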
Feasibility Analysis of Environment-Embedded Art and Design Education and Mutually Beneficial Cooperation in the Development of Local Small and Medium-Sized Enterprises 3.1. Environment Embedding. Environment here refers to environmental protection, ecological pollution, and so on; embedded means firmly fixed or established. Environmental embeddedness is guided by national goals or strategic needs, is integrated deeply into the art and design education process, and raises people's value expectations for education. Environment-embedded art design education aims to protect the environment, preserve ecological balance, and promote environmental awareness through art design. The goals and strategic needs of the country determine what kind of people art and design education should cultivate, while SMEs provide talent support and a market position for art and design education. Environmental embedding usually includes three levels: macro, meso, and micro. Macro-Level. At the macro level, environmental embedding promotes the high-quality reform of art and design education. Local SMEs promote economic development, guide the strategic direction of serving the local economy, and provide strong economic and technical support for art and design education. However, the current talent training in art and design education cannot meet the skill and quality requirements of local SMEs' economic development. Art and design education needs to strengthen the implementation of quality reform to adapt to this new situation, take the lead in adopting measures to deepen the concept of cooperation with local SMEs, continuously improve the quality of personnel training in the education quality reform, and complete the reform of art and design education [23]. Meso-Level. Environmental embedding at the meso level enhances the value expectations of artistic talents [24]. The development of SMEs is currently very effective: their growth potential is high, they are full of vitality and have broad prospects, and the expectations and value attached to talent cultivation in art and design education and to the related work are growing stronger. Micro-Level. At the micro level, environmental embedding improves students' adaptability to employment in art and design [25]. Local small and medium-sized enterprises require educators to focus on cultivating students' artistic literacy. More than ever before, they attach great importance to the talent produced by art and design education and further raise the requirements for staff in terms of comprehensive quality, knowledge, and cultural level. Through the construction of art and design education, the awareness of cooperation among art students will be continuously improved, the employability of art students will be enhanced, and more guarantees and opportunities will be provided for art students. The Role of Environmental Protection in Art and Design Education. Art design education and environmental protection are closely linked. Pursuing art design alone while ignoring environmental protection will only damage the environment. Therefore, it is the responsibility of art design education to establish environmental protection awareness. Art design education is reflected in environmental protection by combining art design with environmental protection, beautifying people's living environment through art design, and thereby achieving the purpose of environmental protection. In Interior Design. Art design should combine beautifying interior design with green environmental protection. Interior design should pursue not only beauty and comfort but also green energy conservation. In Daily Life. People often encounter discarded items that most choose to throw away, and it is this behavior that causes our environment to be damaged. Through art and design education, we can choose to make reasonable use of these discarded items and turn waste into treasure. This can not only reduce economic investment but also reduce the waste of resources, thereby protecting the environment. In the Design of Landscape Facilities. Energy-saving devices can be installed on landscape facilities, which can not only supply the energy the facilities require but also reduce dependence on environmental resources, thus contributing to environmental protection. The Development of Art and Design Education Has Made Great Progress So Far. However, with the rapid development of the economy and the demand for high-level talents in modern industries, the mismatch between the "talent products" of art and design education and the demands of Chinese industry is becoming more and more serious.
For example, under the traditional training method, the design talent cultivated lacks practicality and creativity and does not meet diverse market needs. The subject system is built from single, repetitive components, lacks distinctive features, and struggles to meet the needs of a multilevel market. Specialist departments are too small, the boundaries between specialties are too rigid, and their mutual integration is insufficient. The teaching approach is closed and not innovative, and students cannot thoroughly practice what they learn. Development Status of Small and Medium-Sized Enterprises. Compared with large enterprises, SMEs are more innovative and dynamic. However, they are at a disadvantage in terms of scale and strength, and SMEs in the development stage are reluctant to spend much money on the design and planning of corporate image and product appearance. Products that signal low prices and low quality, the lack of independent brands, the lack of product design and development beyond the technical level, and the lack of long-term plans for corporate image and product development and design all limit the long-term development of these enterprises. Analysis of the Relationship between Art and Design Education and Local SMEs. In developed countries, art and design education often takes the form of a studio: enterprises and universities cooperate on product research and development, design, packaging, publicity, and so on, and draw on each other to bring mutual benefits. In recent years, many universities in our country have also adopted this method, but without substantial effect. Art and design education and small and medium-sized enterprises remain independent of each other; there is no complementarity of advantages and no system of resource sharing and mutually beneficial cooperation. Favorable resources for artistic talent are wasted in large quantities, while small and medium-sized enterprises, because of problems such as capital and entrenched ways of thinking, have neglected the careful development of products, and the two have not entered the virtuous circle of the market economy. The Practical Value of Mutually Beneficial Cooperation between Art and Design Education and the Development of Local SMEs. Facing today's market economy, art and design education and small and medium-sized enterprises will face both development opportunities and fierce competitive pressure. Art and design education should be helpful to local SMEs, and there is a corresponding need to promote the development of art and design education; in order to gain broader development space, the two should complement and cooperate with each other. Improve the Competitiveness of Enterprises and the Added Value of Products. The development of small and medium-sized enterprises is the bright spot of local economic development and the most powerful driving force of the local economy. Art and design education is of real importance for improving the development prospects of small and medium-sized enterprises, producing high-value-added high-end products, establishing a sound track of corporate operation, and creating independent corporate brands. Improve Social Awareness and Brand Competitiveness. Today, education has been fully introduced into the market, and design education faces difficult and pressing issues such as financing, registration, and employment. The training of design talent, if it is to meet the needs of the market and promote the development and growth of enterprises, needs the financial support of enterprises.
The growth of the enterprise provides an important guarantee for education and puts forward higher educational requirements and goals. Improve Product Quality Standards for Design Talents and Solve Local Employment Problems. Cultivating highly innovative and practical art design talent for society is the basic goal of art design education. SMEs have played an important role in the reform of art and design education, in laying the foundation of educational practice, and in the transformation of educational outcomes. Accelerate the Transformation of Scientific Research Achievements in Art Design. Education is the main source of knowledge innovation, the driving force of the economy, and the driving force of enterprise reform. SMEs have excellent resources for capital and industrial implementation. By combining their resources to complement each other and cooperating in depth, the two sides can improve their respective market adaptability and innovation vitality, effectively combine cultural, scientific research, and economic interests, transform cultural productivity into social productivity, and promote the healthy development of the local economy. Model Testing. This article conducts the feasibility analysis on the basis of the grey analysis model. This part tests the grey analysis model and compares it with an improved neural network, a random forest algorithm, and a traditional model. The experiment uses the enterprise income after cooperation between a university's art and design program and small and medium-sized enterprises as the experimental data. The accuracy of the different models on the results of cooperation between art and design education and SMEs is compared, and the results are shown in Figure 2. As can be seen from Figure 2, the overall accuracy of the model in this paper rises, although it declined in 2018; its highest accuracy rate is 96.2% and its lowest is 86.7%. The overall upward trend of the improved neural network model is less obvious, with two dips along the way; its highest accuracy rate was 87.1% and its lowest was 80.7%. The random forest algorithm also showed an overall upward trend, rising slowly after 2017, with a highest accuracy rate of 86.3%. The traditional model has the lowest accuracy: although its increase is more pronounced, its highest accuracy rate is 80.3% and its lowest is 70.3%. It can be seen that the grey analysis model proposed in this paper has high accuracy. On this basis, in order to further verify the effectiveness of the grey analysis model, the recall rate on the results of cooperation between art design education and small and medium-sized enterprises was used as an experimental indicator to test the performance of the different models. In this paper's evaluation, the recall rate is taken to be inversely related to the accuracy rate: when the recall rate is low, the accuracy rate is high, which is interpreted as higher model effectiveness. The recall rates of the different models are shown in Figure 3. As can be seen from Figure 3, the recall rate of the traditional model is between 0.0816 and 0.0984; the recall rate of the random forest algorithm is between 0.726 and 0.983; the recall rate of the improved neural network is between 0.752 and 0.961; and the recall rate of this paper's model is between 0.615 and 0.815. It is clear from the experimental results that the recall rate of the model in this paper is smaller than that of the other three models.
Given the high accuracy rate when the recall rate is low, it can be concluded that the model in this paper is more effective. Feasibility Analysis. In order to better reflect the feasibility of mutually beneficial cooperation between art and design education and local small and medium-sized enterprises, this article compares the static and dynamic indicators of local small and medium-sized enterprises before and after cooperation with art and design education, as well as the advantages of mutually beneficial cooperation. The feasibility analysis indicators of mutual benefit cooperation between art design education and local SMEs can be divided into static indicators and dynamic indicators. The static indicators are the static payback period and the rate of return; the dynamic indicators are the net present value, the net present value rate, the profit index, and the internal rate of return. Static Indicators. In order to analyze whether cooperation between art and design education and local SMEs can bring better benefits to enterprises, this article compares the static indicators before and after cooperation. In the experiment, art and design education and the local SMEs carry out mutually beneficial cooperation in 2016, and the values of the various static indicators are calculated from 2014 to 2015. The results are shown in Table 1. It can be seen from Figure 4 that the overall static payback period decreases year by year, although it was increasing before the mutually beneficial cooperation between art and design education and local SMEs. The highest static payback period is 3.5 years, and it falls to 1.3 years. The overall rate of return also increases year by year, although it was decreasing before the cooperation; the lowest rate of return is 61.4% and the highest is 87.8%. This shows that cooperation between art and design education and enterprises can bring higher benefits to enterprises. Dynamic Indicators. In order to analyze whether mutually beneficial cooperation between art and design education and local small and medium-sized enterprises can bring better benefits to enterprises, this article also compares the dynamic indicators before and after cooperation. The enterprises conduct mutually beneficial cooperation, and the values of the various dynamic indicators are calculated from 2014 to 2015. The results are shown in Table 2. As can be seen from Figure 5, the various dynamic indicators generally show an upward trend, while the indicators were still decreasing before the cooperation between art and design education and the enterprises. After the cooperation between the two parties in 2016, the indicators rose significantly. The highest net present value is 92.55 million yuan; the highest profit index is 1.98, and since a project is feasible only when the profit index is greater than or equal to 1, this indicates that feasibility is fully reflected when the two cooperate; the highest net present value rate is 35.8%, and the highest internal rate of return is 79.2%. The various dynamic indicators show that mutually beneficial cooperation between art and design education and local small and medium-sized enterprises is feasible, that it can bring greater benefits to small and medium-sized enterprises, and that income increases year by year.
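For readers unfamiliar with the dynamic indicators just listed, the short Python sketch below shows how net present value, profit index, net present value rate, and internal rate of return are conventionally computed from a cash-flow series. The discount rate and cash flows are invented and bear no relation to the paper's data; the definitions used (profit index as discounted inflows over initial investment, net present value rate as NPV over initial investment) are the common textbook ones and are assumptions, since the paper does not restate them.

```python
# Illustrative computation of the dynamic indicators named above, on invented
# cash flows (values are not the paper's data).
def npv(rate, cash_flows):
    """Net present value of cash flows, where cash_flows[0] is the initial outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection (assumes a conventional cash-flow pattern)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

rate = 0.08                                   # assumed discount rate
flows = [-50.0, 12.0, 18.0, 25.0, 30.0]       # year-0 outlay, then yearly net inflows (million yuan)

investment = -flows[0]
pv_inflows = npv(rate, [0.0] + flows[1:])     # discounted value of the inflows only
print("NPV:", npv(rate, flows))
print("profit index (PV of inflows / investment):", pv_inflows / investment)
print("net present value rate (NPV / investment):", npv(rate, flows) / investment)
print("IRR:", irr(flows))
```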
Analysis of the Employment Situation of Art and Design Students. This article analyzes the considerable benefits that reciprocal cooperation brings to companies, as well as the benefits for students of art and design education. The experiment again adopts a comparative method, selecting two colleges and universities with art and design education majors: school 1 does not cooperate with enterprises, while school 2 does. The employment rates of graduates of the two institutions from 2014 to 2020 are compared, and the result is shown in Figure 6. It can be seen from Figure 6 that after school 2 began cooperating with enterprises, its employment rate increased year by year; although there was a slight decline in 2016, it has not declined since, and the highest employment rate is 88.3%. By contrast, the annual employment rate of school 1, which does not cooperate with enterprises, is irregular, sometimes rising and sometimes falling, and relatively unstable. Its highest employment rate was 76.3%, which is 12 percentage points lower than the highest employment rate of school 2. This shows that schools that cooperate with enterprises have higher employment rates and provide more job opportunities for students. Comparison of Environmental Protection. This article also compares environmental protection indicators before and after the cooperation between art and design education and local small and medium-sized enterprises. The comparison indicators are total emission reduction, resource consumption, environmental quality, pollution control, and public satisfaction with environmental protection. The results are shown in Figure 7. As can be seen from Figure 7, the percentages of environmental indicators such as total emission reduction, environmental quality, and pollution control all increase, and the rate of increase is large; resource consumption decreases by 28%; and public satisfaction with environmental protection also reached 90%. This shows that the cooperation between the two has played an obvious role in environmental protection and should be vigorously promoted. The environment-embedded art design education based on improved grey analysis and the mutually beneficial cooperation with local small and medium-sized enterprises still need to be evaluated by people from all walks of life. In the experiment, experts in different fields, as well as students and business entities, were invited to evaluate the various indicators of the cooperation. If the average value of each indicator is above 8 points, the cooperation is considered feasible. The result is shown in Figure 8. As can be seen from Figure 8, experts, students, and business entities gave high evaluations to the reciprocal cooperation proposed in this paper. The average evaluation of each index is above 8 points: the highest score for completeness is 8.5, the highest score for feasibility is 8.9, the highest score for recognition is 9.2, and the highest score for practicality is 8.7. This shows that mutually beneficial cooperation between art and design education and local SMEs is feasible. Conclusion With the number of college graduates increasing year by year, finding jobs is a major problem for college students, and in the face of market competition, small and medium-sized enterprises do not have an advantage.
Based on this phenomenon, this article proposes that art and design education and local small and medium-sized enterprises develop mutually beneficial cooperation, so that art and design education can serve small and medium-sized enterprises, small and medium-sized enterprises can promote the development of art and design education, and the ultimate goal of environmental protection can be served. This article adopts the methods of correlation degree and correlation coefficient to construct a feasibility analysis model, based on improved grey analysis, of mutually beneficial cooperation between environment-embedded art and design education and the development of local SMEs, so that art and design education and local SMEs can develop together in a better direction. The findings of the article show that: (1) The accuracy rate of the model in this paper rises overall, with a highest accuracy rate of 96.2% and a lowest of 86.7%; the overall upward trend of the improved neural network model is less obvious, and its highest accuracy rate is 87.1%; the random forest algorithm also shows an upward trend, with a highest accuracy rate of 86.3%; and the traditional model has the lowest accuracy, with a highest accuracy rate of 80.3%. It can be seen that the grey analysis model proposed in this paper has high accuracy. (2) The recall rate of the traditional model is between 0.0816 and 0.0984; the recall rate of the random forest algorithm is between 0.726 and 0.983; the recall rate of the improved neural network is between 0.752 and 0.961; and the recall rate of the model in this paper is between 0.615 and 0.815. From the experimental results it is clear that the model in this paper is more effective. (3) The overall static payback period shows a year-by-year decreasing trend; the highest static payback period is 3.5 years, and it falls to 1.3 years. The overall rate of return also increases year by year; the lowest rate of return is 61.4%, and the highest is 87.8%. This shows that cooperation between art and design education and enterprises can bring higher benefits to enterprises. (4) The dynamic indicators show an overall upward trend, and after the cooperation between the two parties in 2016 the indicators increased significantly. The highest net present value is 92.55 million yuan; the highest profit index is 1.98; the highest net present value rate is 35.8%; and the highest internal rate of return is 79.2%. The various dynamic indicators show that mutually beneficial cooperation between art and design education and local small and medium-sized enterprises is feasible, that it can bring greater benefits to small and medium-sized enterprises, and that income increases year by year. (5) After school 2 began cooperating with enterprises, its employment rate increased year by year, with a highest employment rate of 88.3%. By contrast, the annual employment rate of school 1, which does not cooperate with enterprises, is irregular, sometimes rising and sometimes falling, and relatively unstable. This shows that schools that cooperate with enterprises have higher employment rates and provide more job opportunities for students. (6) The percentages of environmental indicators such as total emission reduction, environmental quality, and pollution control all rise, and the rate of increase is large; resource consumption decreases by 28%; and public satisfaction with the results of environmental protection also reaches 90%.
This shows that the cooperation between the two has played an obvious role in environmental protection and should be vigorously promoted. (7) The average evaluation of each indicator by experts, students, and business entities is above 8; the highest score for completeness is 8.5, the highest score for feasibility is 8.9, the highest score for recognition is 9.2, and the highest score for practicality is 8.7. This shows that mutually beneficial cooperation between art and design education and local SMEs is feasible. In order to promote mutually beneficial cooperation between art and design education and local small and medium-sized enterprises, it is necessary to establish a discipline system of art and design education with local economic characteristics; formulate a talent training model for art and design education with local economic characteristics; strengthen school-enterprise cooperation and realize the personalized training of talent; and actively promote the achievements of art and design scientific research to local small and medium-sized enterprises, realizing the transformation of knowledge into economic wealth. Although the experimental results in this paper show obvious advantages, they still have certain limitations. The model is limited to art and design education, and its effect in other majors is not evident. It is hoped that follow-up research will further examine the scope of application in order to increase the universality of the model. Data Availability The experimental data used to support the findings of this study are available from the corresponding author upon request.
How Accessible Is Genital Gender-Affirming Surgery for Transgender Patients With Commercial and Public Health Insurance in the United States? Results of a Patient-Modeled Search for Services and a Survey of Providers Introduction In the United States, 1.4–1.65 million people identify as transgender, many of whom will seek genital gender-affirming surgery (GAS). The number of surgeons, their geographic proximity, and exclusionary insurance policies have limited patient access to genital GAS. Aim To assess the accessibility of both feminizing and masculinizing genital GAS (vaginoplasty, metoidioplasty, and phalloplasty) by identifying the locations of GAS surgeons and the health insurance or payment forms accepted. Methods Between February and April 2018, genital GAS surgeons were identified via Google search. Surgeons' offices were contacted by telephone or e-mail. Main Outcome Measure We queried the type of genital GAS performed, the health insurance or payment forms accepted, and the type of medical practice (academic, private, or group managed-care practice). Results We identified 96 surgeons across 64 individual medical centers offering genital GAS. The survey response rate was 83.3%. Only 61 of 80 (76.3%) surgeons across 38 of 53 (72%) locations confirmed offering genital GAS. Only 20 (40%) U.S. states had at least one genital GAS provider. 30 of 38 (79%) locations reported accepting any form of insurance. Only 24 of 38 (63%) locations (14 academic; 10 private/group) accepted Medicaid (P = .016); 18 of 38 (47%) locations (13 academic; 5 private/group) accepted Medicare (P = .001). Clinical Translation Reconciliation of the public policies regarding insurance coverage for GAS with the actual practices of providers is necessary for improving access to GAS for transgender individuals. Strengths & Limitations We purposefully used a methodology mirroring how a patient would find GAS surgeons, which also accounts for key limitations: only surgeons whose services were featured on the internet were identified, and we could not verify the services or insurance-related information surgeons reported. Conclusion This study suggests that access to genital GAS is significantly limited by the number of providers and their uneven geographic distribution across the United States, in which only 20 of 50 U.S. states have at least one genital GAS surgeon. Feldman AT, Chen A, Poudrier G, et al. How Accessible Is Genital Gender-Affirming Surgery for Transgender Patients With Commercial and Public Health Insurance in the United States? Results of a Patient-Modeled Search for Services and a Survey of Providers. Sex Med 2020;8:664–672. INTRODUCTION In the United States, 1.4–1.65 million adults (0.30–0.78% of the population of each state) and 150,000 teens identify as transgender. 1,2 Gender-affirming surgery (GAS), which refers to any surgical procedure that modifies an individual's body in accordance with their gender identity and expression, is sought by some transgender and gender nonbinary individuals. Masculinizing genital GAS includes phalloplasty and metoidioplasty (creation of a phallic organ); feminizing genital GAS includes bilateral orchiectomy (at the time of vaginoplasty and as a stand-alone procedure) and vaginoplasty (with and without creation of a vaginal canal). These procedures have been performed by urologists, plastic surgeons, gynecologists, and general surgeons. Comprehensive perioperative care is ideally provided by a multidisciplinary surgical team.
3–5 Financial constraints, inadequate insurance coverage, and variation in expertise and competence among surgeons pose significant obstacles for patients seeking high-quality genital GAS in the United States. Transgender and gender nonconforming individuals are currently more likely either to use government-funded insurance (eg, Medicare/Medicaid) or to be uninsured than the general (cisgender) population. 6,7 Historically, reimbursement for genital GAS under Medicaid or Medicare has been lower than the rates of private insurance. This study assesses the availability of genital GAS for patients seeking these medical services throughout the United States. Our approach sought to mirror how patients gain access to providers of genital GAS, using national transgender patient care and advocacy forums coupled with internet-based keyword searches for genital GAS services. 8–11 We aimed to identify not only objective barriers to access to care (eg, insurance coverage; whether or not there were surgeons in the area) but also more nuanced barriers, such as the ease of access to information regarding services offered. We present data from an internet-based search for U.S. surgeons (designed to emulate how ordinary patients might seek services) who self-report providing masculinizing and/or feminizing genital GAS, and from a telephone survey of the surgeons we identified, to learn more about which specific genital GAS procedures they offered. We assessed the geographic distribution of genital GAS providers, the specific surgery options each provider offered, and, most significantly, the proportion of providers who accept commercial and/or Federal/State health insurance and/or cash-only reimbursement for genital GAS services. All survey data were collected between February and April of 2018. Identification of Providers We used 3 methods to identify providers of genital GAS: I. 3 of the most popular transgender community advocacy groups' online GAS provider lists were queried in February of 2018: (1) transhealthcare.org, 12 (2) tssurgerguide.com, 13 and (3) Callen Lorde's 2015 TGNC surgery list. 14 Transhealthcare.org is a search engine managed by Trans Media Network where individuals can input a desired surgery, the geographic location, and the type of insurance to locate an appropriate provider. 15 Tssurgeryguide is a popular GAS information resource that is independently operated by a male-to-female transgender woman. 16 Callen-Lorde Community Health Center is a renowned NYC-based LGBTQ health-care center that provides comprehensive services to New York's lesbian, gay, bisexual, and transgender communities regardless of ability to pay. 17 Using these 3 sources, we compiled a preliminary roster of surgeons listed on one or more of the lists as performing phalloplasty, metoidioplasty, and/or vaginoplasty for transgender patients. II. A Google search for genital GAS surgeons in the United States was performed using a variety of genital GAS-related search terms: 'gender affirming surgery surgeons,' 'gender confirming surgery surgeons,' 'sex reassignment surgery surgeons,' 'SRS surgeons,' 'gender reassignment surgery surgeons,' 'transgender genital reconstruction surgeons,' 'bottom surgery surgeons,' 'vaginoplasty surgeons,' 'phalloplasty surgeons,' 'metoidioplasty surgeons.' Publicly available contact information (practice phone number and e-mail) was retrieved for all surgeons from Google in February 2018. III.
Surgeons and their office staff identified by methods I and II were asked to provide the names, specialties, types of surgeries offered, reimbursement options, and other survey items described in the following section. Those contacted were also asked to identify surgeons within the same practice/group if the latter were not named by search methods I and II. Survey Distribution and Data Collection Survey questions were generated to query providers about which specific surgery options they offer and which forms of reimbursement they accept. Each surgeon's office was contacted by phone to complete the survey. Up to 5 phone call attempts were made when necessary, and surgeons whom we were unable to reach by phone were contacted via e-mail where possible. Each surgeon or a designated staff member (ie, medical assistant or nurse practitioner) was asked the following questions: (1) whether or not they offer genital GAS; (2) what type(s) of genital GAS they performed (vaginoplasty, phalloplasty, and/or metoidioplasty); (3) what surgical techniques (eg, skin flap and graft types) they use; (4) the names of all surgeons at the institution who currently perform genital GAS; (5) what forms of payment they accept; and (6) whether surgeries were performed at a teaching hospital or at a private facility. A complete list of survey questions can be found in the Appendix. Data Analysis We report descriptive statistics for the data collected, stratified by surgeon and by location. A location was defined as a medical center or an individual or group practice of one or more surgeons who offer genital GAS. Survey Respondent Characteristics Our internet search identified 96 individual surgeons across 64 different medical centers (which we refer to as "locations") who reported performing genital GAS. 80 of 96 surgeons (response rate: 83%) across 53 locations were successfully contacted and completed the survey. Of these respondents, only 61 of 80 (76.3%) surgeons across 38 of 53 (71.7%) locations confirmed that they performed genital GAS (any type of vaginoplasty, phalloplasty, or metoidioplasty, excluding orchiectomy as a stand-alone procedure). Regarding geographic region, the highest number of surgeon respondents were located in the Western United States, with 23 of 61 (38%) surgeons completing the survey across 13 practice locations. This was closely followed by surgeons in the Northeast region, with 20 of 61 (33%) surgeon respondents across 11 practice locations. In the Southern United States, there were 11 of 61 (18%) surgeon respondents across 7 practice locations, while in the Midwestern United States, there were 7 of 61 (12%) responding surgeons across 7 practice locations. Only 20 (40%) U.S. states had at least one genital GAS surgeon (Figure 1). California had the greatest number of genital GAS surgeons: 13 surgeons across 7 practice locations. Massachusetts, Texas, New York, and Florida had 6, 6, 5, and 4 surgeons across 2, 2, 4, and 4 locations, respectively. 8 private practice locations accepted only cash payment for surgery, and there were no geographical differences regarding where these were located (Table 2). The locations that did not accept any form of insurance were all privately operated practices; there were no geographical differences in their location. Academic Center vs Private Practice We found relatively comparable numbers of academic centers (16 locations) and private practices (22 locations) offering GAS.
In the Western United States, we identified 3 academic centers and 10 private practices. The Southern United States had one academic center and 10 private practices. 5 academic centers and 2 private practices were identified in the Midwestern United States. The Northeastern United States had 7 academic centers and 4 private practices. Phalloplasty was performed by providers in each region, and there were no major differences among the regions in the number of locations offering a specific technique. All of the western locations offered phalloplasty with urethral lengthening, and a majority offered the other techniques. There was no predominant technique in the Midwest. In the South, the technique offered by every provider was likewise phalloplasty with urethral lengthening; half of the locations performed anterolateral thigh flap and groin flap phalloplasty (3/6; 50% each). Within the Northeast, a slightly greater number of locations (7/9; 78%) offered anterolateral thigh flap phalloplasty as opposed to the other techniques. The Western United States contained the greatest number of locations offering groin flap phalloplasty (7/17; 45%), but this also reflected the overall proportion of providers within each region. Metoidioplasty was more likely to be offered in Western United States locations (12/29; 76%) than elsewhere in the United States. Overall Accessibility of GAS Surgeons Through this study, the design of which was modeled on the perspective of a patient, we sought to assess the availability of genital GAS in the United States. We focused not only on the number of U.S. providers but also on the degree of access to providers: the geographic location and regional distribution, the ease of contacting and speaking with providers, affordability (the forms of payment that providers do and do not accept), and the specific surgical services offered. This report uniquely explores the nationwide accessibility of genital GAS across these important access- and quality-of-care-related domains. Overall, we identified 96 providers across 64 locations and, of these, confirmed that 61 surgeons across 38 locations offer genital GAS. Consistent with what many transgender and gender nonbinary patients report, our study findings suggest that the process of finding a surgeon who offers genital GAS and accepts a specific type of insurance can indeed be difficult and frustrating. For example, despite numerous voice messages left by our group simply requesting a callback from the surgeon and/or knowledgeable staff to learn more about what surgical services the provider offered, 16 of 96 (17%) providers across 11 of 64 (17%) locations either could not be reached or did not respond to phone calls and e-mails inquiring about GAS services. Although a number of surgeons did respond to explain what surgeries they offered, in the majority of cases only office staff were available to explain what surgeries were and were not offered. Furthermore, of the 80 providers identified by our search, 19 of 80 (24%) surgeons across 15 of 53 (28%) locations did not perform genital GAS. Approximately 80% of these providers were identified because they were featured on one or more of the advocacy group lists of providers to contact for genital GAS. Nonetheless, approximately 20% of the remaining providers who confirmed that they do not offer any type of genital GAS were identified by Google keyword searches for services that, at least at the time of our survey, they did not provide. Such discrepancies could presumably be frustrating for patients.
Another aspect of our experience contacting surgeons' offices was that, after speaking with 26 of 80 (33%) providers or their offices, 1–4 additional calls (with or without an accompanying follow-up e-mail), made over the course of a median of 5 days, were necessary to complete the survey questionnaire. Our experience suggests that, from a patient perspective, clarifying what genital GAS services a surgeon offers is very time-consuming and potentially frustrating. This inaccessibility potentially fosters increased wariness of GAS providers among transgender and gender nonbinary patients. Insurance Coverage Insurance coverage for genital GAS is the most important factor in improving the accessibility of genital GAS for patients. We found that only 30 of 38 (79%) locations nationwide reported accepting any form of health insurance for genital GAS. 8 medical centers, all of which were private practices, accepted cash only. 24 locations self-reported accepting Medicaid, and 18 locations self-reported accepting Medicare. Compared with private centers, a greater number of academic centers accepted Medicaid and Medicare. There were no differences in the number of locations accepting public insurance in each geographic region of the United States. Research has shown that genital GAS can decrease gender dysphoria and improve health and quality of life for patients who undergo it. 18,19 This is recognized by all major U.S. medical organizations and is the basis for coverage of genital GAS under Medicaid and Medicare by the Department of Health and Human Services and the Affordable Care Act (ACA). 20 Insurance coverage, however, remains inconsistent and varies greatly by geographic region. 2,20 As of March 2018, the Medicaid policies of 19 U.S. states (including the District of Columbia and Puerto Rico) explicitly include coverage for transition-related care, whereas in 11 states Medicaid policies specifically exclude transition-related care, and in the remaining 22 states there is no explicit Medicaid policy about transition-related care. 21 Interestingly, in our survey we found 2 states, Ohio and Wisconsin, where Medicaid policy explicitly excludes coverage but at least one provider reported accepting out-of-state Medicaid reimbursement for genital GAS (Figure 2). 21 In addition, our analysis identified 5 states, Florida, Idaho, Michigan, Texas, and Utah, where Medicaid has no explicit policy but at least one provider reported accepting Medicaid reimbursement for genital GAS (Figure 2). 21 This highlights the discrepancies between provider-reported insurance coverage and official government policy for gender-affirming services. Within the United States, the cost of genital GAS can vary widely depending on the type of surgery, the surgeon's fees, the length of hospitalization and hospitalization costs, travel, follow-up appointments, and related expenses. 22,23 Depending on a patient's geographic location, both insured patients and GAS providers can frequently face a lengthy process simply to confirm whether or not specific genital GAS procedures are a covered benefit under a particular health insurance plan, as these decisions are often made by some insurance companies and State Medicaid programs on a "case-by-case" basis. 22–25
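As a side note on the academic-versus-private comparison above (the abstract reports 14 of 16 academic versus 10 of 22 private/group locations accepting Medicaid, P = .016, and 13 of 16 versus 5 of 22 for Medicare, P = .001), comparisons of this kind are typically tested with a chi-square or Fisher's exact test on a 2 x 2 table; the excerpt does not state which test the authors used. The snippet below only illustrates how such a test could be run on the reported Medicaid counts.

```python
# Illustrative only: a 2x2 test on the reported Medicaid-acceptance counts
# (14/16 academic vs 10/22 private/group locations). The paper does not state
# which statistical test produced its P values.
from scipy.stats import fisher_exact

table = [[14, 16 - 14],    # academic: accepts Medicaid, does not
         [10, 22 - 10]]    # private/group: accepts Medicaid, does not
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.3f}")
```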
In our professional experience, some insurance companies impose genital GAS eligibility requirements that are not consistent with the World Professional Association for Transgender Health Standards of Care guidelines (eg, requiring that, in addition to the 2 referral letters from mental-health providers confirming readiness for surgery, patients undergo 3 months of continuous mental health therapy before surgery, regardless of need). 18 (Figure 2 legend) Results from our study: states where Medicaid explicitly covers GAS and at least 1 provider in our survey reported accepting Medicaid reimbursement for genital GAS (CA, CT, MD, MA, MN, NJ, NY, OR, and PA); states where Medicaid explicitly covers GAS but our survey found no provider that accepts Medicaid for GAS (CO, DC, HI, MT, NV, NH, RI, VT, and WA); states where Medicaid explicitly excludes GAS but where our study identified providers who reported that they accept Medicaid for GAS (OH, WI); and states where Medicaid has no explicit policy on genital GAS but where our survey identified providers who reported that they accept Medicaid for GAS (FL, ID, MI, TX, and UT). Historically, public and commercial U.S. health insurance plans categorically excluded coverage for all GAS, with genital GAS performed only for those patients who could afford to pay cash. 26 Medicare's 1981 ban on genital surgery for transgender patients was lifted in May 2014, and before the gender nondiscrimination provisions set forth in the 2010 ACA, insurance carriers could legally refuse to insure transgender people outright, on the grounds that being transgender constituted a pre-existing health condition. 2,4,27 Before the 2010 ACA, the majority of GAS performed in the United States was not covered by health insurance; since the ACA, the incidence of GAS in the United States covered by some form of health insurance has increased severalfold. 28 However, whether commercial, state, and federal insurance will continue to cover GAS is far from assured. For example, on June 12, 2020, the White House administration announced a health-care policy change that narrows the definition of sex-based discrimination in health care to include discrimination based only on the sex an individual is assigned at birth, excluding discrimination on the basis of an individual's gender and sexual orientation. 29–31 A similar policy change by the current White House administration had also sought to narrow the definition of sex discrimination in the workplace to exclude discrimination based on gender and sexual identity, which was challenged in U.S. courts; on June 14, 2020, the U.S. Supreme Court, in a landmark ruling, found that gender and sexual identity could not be parsed from "personhood," and that workplace discrimination based on gender and sexual identity therefore constitutes sex discrimination and is illegal. 31,32 The fate of the current White House administration's efforts to narrow the definition of sex discrimination to exclude gender and sexual orientation, which are being challenged in U.S. courts, is uncertain. Availability of Surgeons The relatively small number of U.S. surgeons who have specialized training to perform genital GAS limits accessibility to genital GAS, as there are often long waitlists and patients may be required to travel long distances to see a provider.
After significant changes to Medicare/Medicaid policy in 2014 under the ACA, genital GAS has quickly become more common than it was 3 years ago. As patient demand has increased, so has interest among surgeons. While the overall number of U.S. surgeons who perform genital GAS is still small relative to other surgical subspecialties, over the course of the last 3 years, there has been a sharp rise in both the number of surgeons who offer genital GAS and the overall interest in the field. Of note, this sharp rise has not necessarily translated into increased access to genital GAS, and surgeons with training to perform genital GAS do not always accept insurance as payment. 24 In addition, specific licensing programs for surgical trainees interested in performing GAS are limited in number. 3,33 Limitations This study has several limitations. Providers were identified by internet searches and based on surgeon lists generated by LGBTQ advocacy groups, and so very likely not all surgeons who perform GAS in the United States were identified by our method. In addition, some surgeon contact information was out of date (eg, some surgeons had retired or stopped performing GAS). The counterpoint to these limitations is that the methods we used to find GAS surgeons are likely used by actual patients. Another limitation is that data were self-reported, and therefore, data could not be verified (such that surgeons could in reality perform more or fewer services than what they or their staff reported) and respondents were free to potentially bias responses in a way that was positive for their institution or private practice group and/or failed to mention cost-related factors such as whether they required monetary deposits to hold a surgery date or charged additional fees outside of what is covered by insurance plans. For example, respondents from a significant number of locations were vague regarding whether they were potentially amenable to providing care for publicly insured patients as opposed to already regularly performing surgery on these patients-with systems in place (ie, letters of agreement with one or more Medicaid programs) to provide standardized care. Others were not clear about whether they already routinely accepted publicly insured patients vs whether this was something they planned to do in the future. Another limitation was that some offices were unfamiliar with the insurance they accepted or had to qualify their responses. For example, providers stated that although they accepted a particular insurance, the insurance policy usually did not cover genital forms of GAS. Because payment agreement contracts are negotiated and/or may often require additional action from providers, the medical provider being "open to" accepting insurance does not guarantee that coverage will be available. Moreover, in a majority of cases, survey questions were answered by practice staff members rather than by surgeons directly. It is important to note that the methodologic limitations described previously encapsulate the motivation behind this study-we aimed to expose the same challenges that patients experience when seeking services. Our experience in speaking with surgeons and staff who were unable to answer our questions about what GASs they offer, whether they accept Medicaid and Medicare health insurance (Figure 2), and what specific surgeries/techniques they offer demonstrates these challenges. 
In essence, the methodologic challenges and limitations of our study reflect the very same "real-world" challenges and limitations that actual patients face when seeking GAS services. 8–11 CONCLUSIONS Although feminizing (vaginoplasty) and masculinizing (phalloplasty and/or metoidioplasty) genital GAS are available in the United States, these are offered by only a relatively small number of U.S. providers, clustered in greater numbers on the West Coast and Northeast. Genital GAS is more widely accessible to patients with Medicaid and Medicare health insurance at academic medical centers, as compared with private single and group practices. Based on the geographic distribution of genital GAS services, transgender and gender nonbinary people seeking surgery may be required to travel long distances for care, particularly in instances where the patient has Medicaid or Medicare insurance. Reconciliation of the public policies regarding insurance coverage for GAS (ie, Medicaid or Medicare inclusions) with the actual practices of the providers (ie, what insurance types medical practices can accept) is necessary to improve access to GAS for transgender people in the United States. Finally, other key factors that would serve to improve access to GAS care include strategies that increase the reliability, transparency, and consistency of resources available to patients to help them identify GAS surgeons and their services. Examples include databases of GAS providers, centers, and the specific surgeries/services that they provide, verified by the health-care providers and/or insurance companies, with direct contact information for patients to use to discuss services.
Canagliflozin for Primary and Secondary Prevention of Cardiovascular Events Supplemental Digital Content is available in the text. Patients with type 2 diabetes mellitus suffer substantial morbidity and mortality from cardiovascular and renal disease. 1,2 Current drug therapies and lifestyle interventions are not adequate, with elevated relative and absolute risks of serious disease outcomes observed for both primary and secondary prevention cohorts. Although the largest absolute benefits of interventions for individual patients are achieved among those with established disease (secondary prevention), the large number of patients with diabetes mellitus without overt cardiovascular disease (primary prevention) makes knowledge about the effects of therapies on first events an additional priority. The CANVAS Program (Canagliflozin Cardiovascular Assessment Study) was designed to assess the cardiovascular safety and efficacy of canagliflozin in a broad range of patients with type 2 diabetes mellitus. [3][4][5][6] The main results demonstrated that canagliflozin reduced the relative risk of cardiovascular death, nonfatal myocardial infarction (MI), or nonfatal stroke by 14% (P=0.02 for superiority) compared with placebo. 6 In addition, hospitalized heart failure and serious declines in renal function were reduced by 33% and 40%, respectively. 6 An unanticipated ≈2-fold increase in the risk of amputation was also observed. By design, the CANVAS Program enrolled patients with and without prior cardiovascular disease to provide insight into the effects of canagliflozin in the primary and secondary prevention settings. In the analyses presented here, the efficacy and safety of canagliflozin are described separately for the primary and secondary prevention cohorts enrolled in the CANVAS Program. METHODS Data from the CANVAS Program will be made available in the public domain via the Yale University Open Data Access Project (http://yoda.yale.edu/) once the product and relevant indication studied have been approved by regulators in the United States and European Union and the study has been completed for 18 months. The trial protocols and statistical analysis plans were published along with the primary CANVAS Program article. 6 The design of the CANVAS Program has been published. [3][4][5][6] In brief, the CANVAS Program was a double-blind comparison of the effects of canagliflozin versus placebo made by combining data from 2 large-scale trials. The CANVAS Program was sponsored by Janssen Research & Development, LLC, and was conducted as a partnership between Janssen Research & Development, LLC, an academic Steering Committee (Appendix in the online-only Data Supplement), and an Academic Research Organization, George Clinical. The first draft of this article was written by the first author, with all coauthors contributing comments and approving the final draft for submission. The authors had access to all the data and ensured the accuracy of the analyses. All participants provided informed consent, and ethics approval was obtained for every center. Participants The criteria for inclusion and exclusion have been previously published.
[3][4][5][6] Participants were men and women with type 2 diabetes mellitus (glycohemoglobin ≥7.0% and ≤10.5%) who were either ≥30 years of age with a history of symptomatic atherosclerotic cardiovascular events, defined as stroke, MI, hospitalization for unstable angina, coronary artery bypass grafting, percutaneous coronary intervention, peripheral revascularization (surgical or percutaneous), symptomatic with documented hemodynamically significant carotid or peripheral vascular disease, or amputation secondary to vascular disease (secondary prevention cohort); or ≥50 years of age with no prior cardiovascular events but with ≥2 of the following cardiovascular risk factors: duration of diabetes mellitus ≥10 years, systolic blood pressure >140 mm Hg on ≥1 antihypertensive agents, current smoker, microalbuminuria or macroalbuminuria, or high-density lipoprotein cholesterol <1 mmol/L (primary prevention cohort). The primary and secondary prevention participants were categorized based on a review of their medical histories. Randomized Treatment Randomization was performed through a central web-based system and used a computer-generated randomization schedule. Participants were assigned to canagliflozin or placebo, and use of other background therapy for glycemic management and other risk factor control was according to best practice instituted in line with local guidelines. By design, the secondary prevention cohort was to be ≈70% (minimum of 60%) of all patients.
Clinical Perspective. What Is New? • Canagliflozin reduces cardiovascular and renal outcomes in patients with type 2 diabetes mellitus. • No statistical evidence of heterogeneity was observed for the effects of canagliflozin on cardiovascular and renal outcomes in participants with prior cardiovascular events (secondary prevention) and without prior cardiovascular events but at elevated risk (primary prevention), although the power to detect differences was limited. • Lower extremity amputations were uncommon but increased with canagliflozin, without statistical evidence of heterogeneity between the secondary and primary prevention cohorts. What Are the Clinical Implications? • Patients with type 2 diabetes mellitus are at high risk for cardiovascular and renal outcomes. • Canagliflozin should be considered to manage diabetes mellitus in patients at high risk for cardiovascular events to reduce cardiovascular and renal outcomes. • Further study of canagliflozin in patients with type 2 diabetes mellitus without prior cardiac events is needed to better define the benefits on cardiovascular death, myocardial infarction, or stroke outcomes. • Caution should be used in patients at risk for amputations.
Follow-Up Follow-up after enrollment was scheduled quarterly for 1 year and then every 6 months until the end of the study. Every follow-up included inquiry about primary and secondary outcome events and serious adverse events. Serum creatinine measurement with estimated glomerular filtration rate was performed at least every 26 weeks. Outcomes The efficacy outcomes for these analyses were the composite of cardiovascular mortality, nonfatal MI, or nonfatal stroke; the individual components of the composite; hospitalization for heart failure; and all-cause mortality. Effects on the kidney were assessed using a composite renal outcome comprising a 40% reduction in estimated glomerular filtration rate, requirement for renal replacement therapy, or renal death.
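To make the cohort definitions in the Participants section above concrete, the following sketch encodes them as a small classification function. It is illustrative only (the trial categorized participants by review of medical histories); the function and field names are assumptions, not trial software.

```python
# Illustrative encoding of the published CANVAS cohort criteria; not trial software.
def assign_prevention_cohort(age, hba1c_pct, prior_ascvd, risk_factor_count):
    """Return 'secondary', 'primary', or None (criteria not met)."""
    if not (7.0 <= hba1c_pct <= 10.5):          # glycohemoglobin window
        return None
    if prior_ascvd and age >= 30:               # history of symptomatic ASCVD
        return "secondary"
    if (not prior_ascvd) and age >= 50 and risk_factor_count >= 2:
        return "primary"                        # >=2 cardiovascular risk factors
    return None

print(assign_prevention_cohort(age=55, hba1c_pct=8.2, prior_ascvd=False, risk_factor_count=2))
# -> primary
```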
The safety events of interest were adverse events attributable to genital infection, urinary tract infection, volume depletion events, hypoglycemia, diabetic ketoacidosis, acute pancreatitis, renal adverse events, thromboembolism, cancer, fracture, and lower extremity amputation. All major cardiovascular events, renal outcomes, and deaths, as well as selected safety outcomes (diabetic ketoacidosis, acute pancreatitis, and fracture), were assessed by Endpoint Adjudication Committees (Appendix in the online-only Data Supplement) blinded to therapy. The definitions that were used for the clinical events have been published. [3][4][5][6] Statistical Analysis Evaluation of outcomes in the primary and secondary prevention participants was prespecified. Rates of cardiovascular disease, kidney disease, death outcomes, and selected adverse events were estimated for the active and placebo groups combined. All analyses of the effects of canagliflozin compared with placebo on cardiovascular and renal outcomes were based on the intention-to-treat principle using all follow-up time (on or off study treatment) for all randomized participants. Safety outcomes were analyzed using an on-treatment approach (based on patient time and events accrued while on study drug or within 30 days of study drug discontinuation), except for diabetic ketoacidosis, fracture, cancer, and amputation outcomes, which were assessed using all follow-up time (on or off study treatment). Hazard ratios (HRs) and 95% confidence intervals (CIs) were estimated for participants assigned to canagliflozin versus participants assigned to placebo separately for the primary and secondary prevention cohorts. Cardiovascular, death, and safety outcomes were analyzed using a stratified Cox proportional hazards regression model, with treatment as the explanatory variable and study as the stratification factor. Renal outcomes were analyzed using a stratified Cox proportional hazards model with treatment and the stage of baseline chronic kidney disease measured by estimated glomerular filtration rate (<60 or ≥60 mL/min/1.73 m²) as the explanatory variables and study as the stratification factor. Homogeneity of treatment effects across the primary and secondary prevention groups was examined via a test for the treatment-by-prevention interaction, by adding this term and the prevention cohort as covariates to the respective Cox proportional hazards model. The risk differences were calculated by subtracting the incidence rate (per 1000 patient-years) with placebo from the incidence rate with canagliflozin and multiplying by 5 years. Similarly, the CI was estimated by multiplying the lower and upper CI values by 5 years. Analyses were undertaken using SAS version 9.2 and SAS Enterprise Guide version 7.11. Analyses were performed by statisticians at Janssen with verification by a statistician at George Clinical. RESULTS Overall, 10 142 participants at 667 centers in 30 countries were enrolled in the CANVAS Program. 6 Mean follow-up was 188 weeks. Discontinuation of the study drug was similar with placebo and canagliflozin in the overall population (30% versus 29%) and in the secondary prevention (29% versus 30%) and primary prevention cohorts (31% versus 28%). Vital status was available for 99.6% of patients. 6
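The 5-year risk-difference rule described in the Statistical Analysis section above is simple enough to state as code. The sketch below is illustrative only (the trial's analyses were run in SAS), and the example rates are placeholder values rather than results taken from the trial tables.

```python
# Illustrative implementation of the risk-difference rule described above:
# (rate_canagliflozin - rate_placebo) per 1000 patient-years, scaled to 5 years.
def risk_difference_per_1000(rate_cana, rate_placebo, horizon_years=5):
    """Rates are events per 1000 patient-years; returns excess events per 1000 patients."""
    return (rate_cana - rate_placebo) * horizon_years

# Placeholder example: 26.9 vs 31.5 events per 1000 patient-years corresponds to
# roughly 23 fewer events per 1000 patients over 5 years.
print(round(risk_difference_per_1000(26.9, 31.5), 1))  # -23.0
```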
Primary prevention participants (N=3486; 34%) were younger (63 versus 64 years), were more often female (45% versus 31%), and had a longer duration of diabetes mellitus (14 versus 13 years) compared with secondary prevention participants (N=6656; 66%). Participants in the secondary prevention group had higher use of common cardiac medications, including statins, β-blockers, and antiplatelet agents, as well as insulin, but lower use of oral antihyperglycemic agents (Table 1). Within each of the primary and secondary prevention cohorts, participant characteristics were well balanced across the canagliflozin and placebo groups (Table 1). Risks of Cardiovascular, Renal, Death, and Safety Outcomes in the Primary and Secondary Prevention Cohorts Secondary prevention participants had higher rates of the primary cardiovascular composite outcome compared with the primary prevention participants (HR, 2.36; 95% CI, 2.03-2.74; P<0.001) (Table 2). There were also more hospitalizations for heart failure (HR, 2.64; 95% CI, 1.90-3.65), more deaths (HR, 1.86; 95% CI, 1.57-2.22), and more of the composite renal outcome (HR, 1.56; 95% CI, 1.18-2.06) in the secondary prevention group compared with the primary prevention group. Rates of safety outcomes were not different, except for lower extremity amputation (HR, 2.85; 95% CI, 1.95-4.16) and volume depletion events (HR, 1.42; 95% CI, 1.10-1.83), which were more frequent among the secondary prevention participants, and urinary tract infection, which was less common in the secondary prevention participants (HR, 0.81; 95% CI, 0.67-0.97). There was no statistical evidence of heterogeneity between the primary and secondary prevention cohorts in the effect of canagliflozin on the primary composite outcome (Figure 1). Likewise, no statistical evidence of heterogeneity was found between the primary and secondary prevention cohorts for hospitalization for heart failure, all-cause mortality, and the composite renal outcome (all P values for homogeneity ≥0.10) (Figure 1). Kaplan-Meier curves for the composite cardiovascular outcome, cardiovascular death, hospitalization for heart failure, all-cause mortality, and the composite renal outcome are shown in Figure 2. Effects of Canagliflozin on Safety Outcomes in Primary and Secondary Prevention Cohorts The rates of adverse events, including genital infections, urinary tract infections, fractures, diabetic ketoacidosis, and acute pancreatitis, were not statistically different between treatment groups in the primary and secondary prevention participants (Figure 3). The adverse event profile for canagliflozin compared with placebo was consistent in the primary and secondary prevention participants (all interaction P values ≥0.07). Figure 4 shows the event rates and risk differences for the primary composite (cardiovascular death, nonfatal MI, or nonfatal stroke), hospitalization for heart failure, the renal composite outcome, and amputation for canagliflozin compared with placebo in the overall study, the secondary prevention participants, and the primary prevention participants. DISCUSSION The CANVAS Program included patients with established cardiovascular disease and those at risk for cardiovascular disease. Overall, 34% of participants were included in the primary prevention group. Secondary prevention participants had higher rates of cardiovascular and renal outcomes compared with the primary prevention participants. Canagliflozin reduced the composite risk of cardiovascular death, nonfatal MI, or nonfatal stroke compared with placebo, and there was no statistical evidence of heterogeneity in the proportional treatment effect between the primary prevention and secondary prevention participants. Canagliflozin was also associated with better hospitalization for heart failure and renal outcomes, with a similar proportional reduction achieved for the primary and secondary prevention participants. (Figure 1. Comparative effects of canagliflozin and placebo on cardiovascular, kidney, and mortality outcomes in the total population and the primary and secondary prevention cohorts in the CANVAS Program. Hazard ratios and 95% CIs were estimated using Cox regression models, with stratification by trial, for all canagliflozin groups combined versus placebo. *P<0.001 for noninferiority and P=0.02 for superiority for the primary outcome of CV death, nonfatal MI, or nonfatal stroke in the overall population. †Incidence rates and HRs not calculated because of the small number of events.) Some large cardiovascular outcome clinical trials in patients with type 2 diabetes mellitus have included primary and secondary prevention cohorts by design, using various inclusion and exclusion criteria. [7][8][9][10][11] However, others did not include a primary prevention cohort. 12,13
For the CANVAS Program, the primary prevention cohort included participants ≥50 years of age, whereas other programs typically used 40 or 50 years of age to define the entry criteria. Compared with trials with primary prevention participants, 7-11 the CANVAS Program included a higher proportion in the primary prevention group (≈35% versus ≈15% to 25%). Similar to other programs, cardiovascular event rates were lower in the primary prevention participants, but there was no evidence of heterogeneity in relative treatment effects between the primary and secondary prevention groups by statistical testing. The design and results from the CANVAS Program suggest that a broader group of patients has been studied with canagliflozin compared with other drugs, including another SGLT2 inhibitor. 12 The absolute reductions in cardiovascular events with canagliflozin were numerically greater in patients in the secondary prevention cohort compared with the primary prevention cohort. The relative reductions in cardiovascular events, however, showed no statistical evidence of heterogeneity between the 2 prevention groups. There appeared to be consistent reductions in hospitalization for heart failure and renal outcomes in the primary and secondary prevention participants, as well as increases in amputations in both groups that were numerically less frequent than the reductions in cardiovascular and renal outcomes. The composite outcome (cardiovascular death, nonfatal MI, nonfatal stroke) was also clearly reduced in the secondary prevention population. Although formal statistical testing did not find evidence of heterogeneity in the results for this outcome in the primary prevention population, more data are required because the interaction testing has limited power based on the size of the subpopulation. The ongoing CREDENCE study (Canagliflozin and Renal Endpoints in Diabetes With Established Nephropathy Clinical Evaluation; ClinicalTrials.gov; NCT02065791) will provide more evidence on the effects of canagliflozin on clinical renal outcomes, including end-stage kidney disease and renal and cardiovascular death, whereas the DECLARE trial (Multicenter Trial to Evaluate the Effect of Dapagliflozin on the Incidence of Cardiovascular Events; ClinicalTrials.gov; NCT01730534) will provide additional data regarding the effects of SGLT2 inhibition in primary prevention. The general safety profile of SGLT2 inhibitors has been well described. 6,14 The rates of common adverse events in the CANVAS Program were generally similar in participants in the primary and secondary prevention groups. Bone fractures have been reported previously with canagliflozin, 6,15 and consistent findings were observed in the primary and secondary prevention participants in the CANVAS Program. The rate of lower extremity amputation was ≈3-fold higher in the secondary prevention group compared with the primary prevention group. A statistically significant 2-fold increase in lower extremity amputation with canagliflozin versus placebo was observed in the secondary prevention group, with a statistically similar result between canagliflozin and placebo in the primary prevention group, although only 33 events were reported in that group. Additional analyses of these findings are ongoing to understand the potential mechanism for amputations with canagliflozin. Until further information is available, caution should be used in patients at risk for amputations.
The balance of cardiovascular and renal benefits compared with the major safety event of amputations was evaluated by calculating the number of patients with events prevented or caused over 5 years per 1000 treated patients. A favorable profile was observed for the overall study population, with 23 fewer cardiovascular death, nonfatal MI, or nonfatal stroke events; 16 fewer hospitalizations for heart failure; and 18 fewer renal outcomes (40% reduction in estimated glomerular filtration rate, requirement for renal replacement therapy, or renal death) occurring in canagliflozin-treated patients compared with placebo, with an excess of 15 lower extremity amputations (10 toe or metatarsal, 5 above the ankle). As expected, numerically more events were prevented in the higher-risk secondary prevention group compared with the primary prevention participants, and in both cohorts the number of excess amputation events was numerically lower than the number of cardiorenal outcomes that were prevented. These data may be helpful to clinicians and patients for shared clinical decisions in the management of diabetes mellitus to reduce cardiovascular and renal outcomes.
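The per-1000-patient balance above can be restated as a small bookkeeping sketch. The dictionary below simply re-records the figures quoted in the text (negative values are events avoided with canagliflozin) and is not an analysis from the paper; note that the outcomes partly overlap, so the totals are only a rough summary.

```python
# Rough bookkeeping of the reported 5-year event differences per 1000 treated patients.
# Values restate the text above; negative = fewer events with canagliflozin vs placebo.
events_per_1000_over_5y = {
    "CV death, nonfatal MI, or nonfatal stroke": -23,
    "hospitalization for heart failure": -16,
    "renal composite": -18,
    "lower extremity amputation": +15,
}
avoided = -sum(v for v in events_per_1000_over_5y.values() if v < 0)
excess = sum(v for v in events_per_1000_over_5y.values() if v > 0)
print(f"events avoided: {avoided} per 1000; excess amputations: {excess} per 1000")
```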
Limitations These analyses have several limitations. The trial was not designed with appropriate statistical power to show definitive treatment differences in the outcomes in primary and secondary prevention participants. The primary prevention cohort was smaller, was lower risk, and accrued fewer events than the secondary prevention cohort, and therefore the ability to exclude heterogeneity between the primary and secondary prevention cohorts is limited. The primary and secondary prevention participants were categorized based on investigator-reported inclusion and exclusion criteria and were not confirmed. We did not screen patients for subclinical atherosclerotic vascular disease in this large international trial, so patients with asymptomatic cardiovascular disease or clinically silent prior cardiovascular events could have been included in the primary prevention cohort. We followed participants for ≈3.5 years; however, glucose-lowering agents are often used for a much longer duration, well beyond the horizon of this study. Further study with longer follow-up in a primary prevention population could potentially identify more long-term benefits because of greater life expectancy. Conclusions In the CANVAS Program, which evaluated patients with type 2 diabetes mellitus and elevated cardiovascular risk, participants with prior cardiovascular events (secondary prevention) had greater absolute rates of cardiovascular, renal, and death outcomes compared with those without prior cardiovascular events (primary prevention). Canagliflozin reduced cardiovascular and renal outcomes overall, with no statistical evidence of heterogeneity of canagliflozin effects between the primary and secondary prevention participants. (Figure 4 note: excess numbers are relative to the placebo group; a negative number means fewer participants in the canagliflozin group experienced the event compared with the placebo group.)
Influence of Glassy Carbon Surface Finishing on Its Wear Behavior during Precision Glass Moulding of Fused Silica Laser technology has a rising demand for high precision Fused Silica components. Precision Glass Moulding (PGM) is a technology that can fulfil the given demands in efficiency and scalability. Due to the elevated process temperatures of almost 1400 °C and the high mechanical load, Glassy Carbon was qualified as an appropriate forming tool material for the moulding of Fused Silica. Former studies revealed that the tools' surface finishing has an important influence on wear behaviour. This paper deals with investigation and analysis of surface preparation processes of Glassy Carbon moulds. In order to fulfil standards for high precision optics, the finishing results will be characterised by sophisticated surface description parameters used in the optics industry. Later on, the mould performance, in terms of wear resistance, is tested in extended moulding experiments. Correlations between the surface finish of the Glassy Carbon tools and their service lifetime are traced back to fundamental physical circumstances and conclusions for an optimal surface treatment are drawn. Introduction Production systems with laser beam sources are becoming increasingly powerful and tend towards more compact designs [1]. But only a few materials for optics for beam shaping can withstand the high thermal loads permanently [2]. For this reason, we are researching manufacturing processes with which lenses made of resistant Fused Silica can be produced more cost-effectively and according to high quality standards. The Institute is investigating the manufacturing of these high-precision Fused Silica optics using the Precision Glass Moulding process (PGM) (Figure 1) [3]. Optics, which were traditionally produced by multi-step grinding and polishing processes, are thus produced in just one process step. Due to the replicative character of PGM, even complex geometries can be realised efficiently [4]. In this publication, the investigation of contact behaviour between tool and glass in Fused Silica moulding is the main focus. Optics made of Fused Silica enjoy a high industrial demand. Due to its outstanding properties, such as the high transmission range from 185 nm to 2.5 µm regarding electromagnetic radiation, a high homogeneity and a very good temperature resistance, it offers excellent conditions for special applications [2,5]. Conventionally, optics made of this glass type are ground and polished, in some cases pursued with even more sophisticated machining technology such as Magneto-rheological Finishing (MRF) or Ion Beam Figuring (IBF) [6][7][8]. In order to form Fused Silica glass during precision moulding, it is heated up to 1360 °C. For this reason, our department is researching the use of the high-tech material Glassy Carbon as a corresponding forming tool material.
Glassy Carbon offers exceptionally high thermal and mechanical load resistance in vacuum or inert gas [9][10][11]. Despite its extreme resistance to temperature and mechanical stress, wear and tear can be seen on the surface after several cycles of the Fused Silica PGM process. These signs of wear exist due to various wear mechanisms. The growth of the defects is facilitated by repeated pressing processes. Not only are tool material chippings a problem, but Fused Silica also adheres to the already existing chippings and micro hills, which creates adhesion between the two materials. The common way to reduce wear in PGM, that is, the application of a protective coating, cannot be applied to Fused Silica moulding because of the extreme temperature conditions [12][13][14]. Therefore, the aim of the research is to understand the causes of defect formation during Fused Silica forming (and thus to guarantee the specified quality of the Fused Silica optics). In particular, this paper focuses on the investigation of the influence of the Glassy Carbon tool's surface finish on its wear behaviour in Fused Silica PGM. Method In order to study the influence of the Glassy Carbon tools' surface finish on their wear behaviour during Fused Silica moulding, a multi-step approach is used (Figure 2). At first, material-related issues are discussed. A detailed analysis of the microstructure and subsurface damage during grinding of Glassy Carbon is carried out, followed by polishing experiments. The relationship between material removal and the choice of polishing abrasives is discussed. Subsequently, the material-related findings are used to answer process-related issues. The preliminary polishing experiments give hints for the manufacturing of different surface topologies. These surface topologies are needed to study the influence of the tools' surface finishings on their wear behaviour during glass replication. The following paragraphs in this section give further information on the materials, machines and processes used to gain the results that are presented later on.
Fused Silica as Optical Material In recent years, laser sources became increasingly powerful. This implies high loads on the optical materials that are used for beam shaping and guidance. Conventional glass types, such as soda-lime or borosilicate glass, cannot be used for laser powers above ca. 1 kW. Their internal absorbance (followed by warming and alteration of the refractive index) as well as the following thermal expansion lead to a "focus shift" that impairs the performance of the optical system [2,15]. A complete failure due to glass breakage (the release of mechanical stresses induced by thermal expansion) can destroy the entire system. Fused Silica, as the purest kind of amorphous SiO2, neither has any foreign atom inclusions nor cut-off points within its network, leading to high broadband transmission as well as high thermal stability (Figure 3, Table 1). (Figure 3, right: comparison of different glass types in terms of viscosity η over temperature T [5,16,17].) For this study, the type Suprasil 300, provided by Heraeus, was selected because of its extremely low OH−-impurity content. This qualifies this glass type for high power IR-applications, and parasitic chemical interactions during moulding induced by outgassing can be neglected. Glassy Carbon as a Tool Material for Precision Glass Moulding Glassy Carbon is mainly based on sp2 bonds but it also contains in-plane defects and features variable bond-lengths. The sp2 orbitals form a symmetrical hexagonal arrangement aligned on one plane. The close arrangement of the Glassy Carbon layer network consists of flat carbon hexagons.
More detailed investigations of Glassy Carbon with transmission electron microscopes (TEM) show that the microstructure consists of crumpled sp2-hybridised carbon layers, including micropores of about 1-5 nm in diameter [18]. Figure 4a is a TEM image obtained during our experiments, and Figure 4b shows a simulated microstructural model of Glassy Carbon suggested by Harris [18]. If certain polymers are carbonised under carefully controlled conditions, a carbon with a glass-like structure is formed. Carbonisation is based on the principle of pyrolysis. Thermal decomposition of chemical compounds takes place under exclusion of oxygen. All elements, with the exception of carbon, are removed from the structure. What remains is a brittle and hard material, which cannot be converted into graphite by the application of high temperatures [19]. Due to its glassy appearance and fracture behaviour, this carbon is similar to glass in its physical properties and is therefore called Glassy Carbon. This type of carbon offers a lot of particular properties. Glassy Carbon offers excellent resistance to gases in direct comparison to graphite; this value is comparable to that of glass, but it is not only this property that distinguishes Glassy Carbon as a material. Good electrical conductivity (42 × 10−6 Ωm), high resistance to thermal shock and good corrosion resistance also highlight it. The comparably high stiffness and increasing tensile strength under temperature load both contribute to good thermo-mechanical resistance [20]. When heated, the material expands only slightly (α_GC ≈ 2.0-2.2 × 10−6 1/K). Under inert gas the high temperature resistance can be up to 3000 °C. Furthermore, this material behaves isotropically in comparison to other carbon forms. Low reactivity with chemical substances makes it very robust. The most important properties of Glassy Carbon as a forming tool for precision moulding of Fused Silica glass are its high temperature and corrosion resistance [9][10][11][21][22]. In order to qualify Glassy Carbon as a tool material for PGM, the generation of an optical surface quality ("mirror-like") is mandatory. Recent machining of Glassy Carbon is carried out by grinding, polishing and etching [19,23]. The material used in this study was made available by Tokai Carbon K.K. (Minato-ku (Tokio), Japan), quality grade GC30 [9]. Machining Setup and Process Both grinding and polishing were carried out on a Bühler Phoenix 4000 flat polishing machine (Bühler AG, Uzwil, Switzerland).
The setup includes a rotation of the polishing pad (Ω) and an independent rotation of the workpiece holder (ω) (Figure 5). This kinematic enables different polishing paths, such as cycloid or hypocycloid. For the preliminary evaluation of polishing processes, the Preston hypothesis is usually applied (1) [24]. It expresses the time-dependent material removal dz/dt as the product of the contact pressure p between workpiece and polishing pad, the relative velocity vector v between the same contact partners, as well as the empirical constant K that comprises all other process influences such as the polishing pad, the abrasive, chemical interactions and so forth. By keeping all process parameters and surrounding circumstances constant, a direct comparison of the abrasives can be derived by means of K (see Section 3.2). Since the contact pressure is a directly manipulatable variable, the relative velocity v is determined by the rotations Ω as well as ω and can be calculated by expression (2). For better handling, the cumulated (time-averaged) velocity v̄ is used subsequently (3). The grinding and polishing specimens made of Glassy Carbon are disc-shaped with a diameter of 34 mm. After cleaning and surface qualification, the same samples can be installed directly into a commercially available glass moulding machine.
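The displayed equations were lost in this copy of the text, so a consistent reconstruction is sketched below. Equation (1) is the standard Preston relation named above; the forms given here for (2) and (3) are only generic placeholders (relative velocity from the two rotations and its time average), since the paper's exact expressions are not recoverable.

```latex
% Preston relation (1) and generic placeholders for the relative and cumulated velocity (2), (3).
\begin{align}
  \frac{\mathrm{d}z}{\mathrm{d}t} &= K \, p \, \lvert \vec{v} \rvert, \tag{1} \\
  \vec{v}(t) &= \vec{v}_{\mathrm{pad}}(\Omega, t) - \vec{v}_{\mathrm{workpiece}}(\omega, t), \tag{2} \\
  \bar{v} &= \frac{1}{T} \int_{0}^{T} \lvert \vec{v}(t) \rvert \, \mathrm{d}t. \tag{3}
\end{align}
```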
Precision Glass Moulding of Fused Silica The following description explains the glass moulding process step by step (also see Figure 1a).
• After surface finishing and cleaning, the moulding tools are integrated into moulding dies and fixed in the machine (upper and lower mould); a dedicated glass preform is placed on the lower mould.
• The moulding process starts with an evacuation of the moulding chamber to prevent oxidation of the moulding tools. The evacuation is followed by a nitrogen purge.
• After this evacuation, the glass preform and the moulding tools are heated up to 1360 °C by using infrared lamps.
• Subsequent to reaching a temperature of about 1360 °C, a four-minute soaking phase starts; the soaking effectuates a homogeneous temperature propagation.
• In the following, a four-minute moulding phase starts, during which a 2 kN moulding force presses the Fused Silica into the desired shape.
• When the lens is shaped, a two-step cooling phase begins. The first step cools the temperature gradually down to 600 °C by using a nitrogen flow. The nitrogen effects a thermal convection. At 600 °C the flow increases, resulting in a faster cooling rate.
The experiments are carried out on a Toshiba GMP 207HV (Toshiba Machine Co. Ltd., Numazu (Shizuoka), Japan). The moulding cycles are repeated with a constant set of parameters until a development of wear is observable. In this case, 30 moulding cycles were carried out.
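For reference, the cycle described above can be condensed into a small parameter structure. This is only an illustrative summary of the published values; the field names are assumptions for this sketch, not machine settings.

```python
# Illustrative summary of the Fused Silica moulding cycle described above
# (field names are assumed for this sketch; values are taken from the text).
moulding_cycle = {
    "chamber_prep": ["evacuate", "nitrogen purge"],      # prevents tool oxidation
    "heating": {"target_C": 1360, "source": "infrared lamps"},
    "soaking_min": 4,                                    # homogenises the temperature
    "pressing": {"duration_min": 4, "force_kN": 2},
    "cooling": [
        {"down_to_C": 600, "method": "moderate nitrogen flow"},
        {"below_C": 600, "method": "increased nitrogen flow"},
    ],
    "cycles_until_wear_check": 30,
}
print(moulding_cycle["pressing"])
```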
Fine Grinding of Glassy Carbon Moulding Tools-Analysis of Subsurface Damage In order to obtain information on possible influences of fine grinding on the final machining of Glassy Carbon forming tools, a forming tool was ground using an Aka Piatto 1200+ (Akasel A/S, Roskilde, Denmark) diamond grinding pad, and the subsurface region was subsequently examined by TEM microscopy. During grinding, 20 µm of the original surface (delivery status) were removed. The roughness Ra dropped from 240 nm to roughly 40 nm. Since damage to the edge zone of the Glassy Carbon tool surface was suspected, the aim of this investigation was to obtain information about the atomic structure of the Glassy Carbon forming tools at the surface edge and subsurface zones. The investigation took place prior to the moulding experiments. Since the grinding process brings the most energy into the material and thus has the greatest potential for damage, it was not necessary to repeat this analysis after polishing. Besides alterations in the subsurface region, an analysis of the elemental composition of the Glassy Carbon material was also possible by means of TEM. The evaluation of the elements provides information about possible impurities before the press tests. After the preparation of sandwich-glued sample surfaces by wedge grinding (bonding of the interesting surface against itself, that is, Glassy Carbon glued to Glassy Carbon in order to minimise interfering artefacts), the presumed damage zone was mapped by TEM. Figure 6 shows an overview of the more closely examined points of the Glassy Carbon sample. Three positions were examined in more detail. Position 1 and Position 2 are each very close above and below the preparation-induced adhesive gap; Position 3 has been taken somewhat away from the adhesive gap in the very thin, yet near-surface volume of the sample. Already in this illustration (Figure 6), it can be seen that possible damage to the edge zone of the Glassy Carbon surface caused by the pre-grinding manufacturing process has no major effect on the structure of the material.
In order to make the atomic structure of the Glassy Carbon even more visible, a section of Position 1 (Figure 6b and Cut-out 1) has been further enlarged (Figure 6c). The dashed line shows the boundary layer between the adhesive surface and the Glassy Carbon surface. Since the atomic radius of a carbon atom is about 0.1 nm, a further section has been enlarged (Figure 6d, Cut-out 2). The atomic structure of the Glassy Carbon is thus clearly visible, and no damage to the edge zones could be detected during pre-grinding. The other sections examined (Position 2 and Position 3) confirmed this result. Further investigation methods included the STEM mode (Scanning TEM: contrast shaping is based on inelastic scattering, that is, density differences are represented in contrast) and the EDX mode (element analysis). The STEM studies confirm the findings of the previous subsurface analysis. The result does not show any density deviations from the sample edge zone to the interior (Figure 7a,b). A homogeneous contrast formation is provided over the whole extent of the examination section. In the EDX spectrum of the sample area (Figure 7c), only O (oxygen), Cl (chlorine) and S (sulphur) were present in addition to C (carbon). O and Cl were found in the area of the preparation adhesive. These elements are major components of the adhesive used for bonding (Uhu® Plus Endfest 300; 2-component epoxy adhesive). S was weak on the entire sample (due to its homogeneous distribution, it had probably reached the TEM sample surface from the atmosphere). Within the scope of the EDX detection sensitivity (element dependent, typically approx. 0.1-1%), the sample surface does not differ from the sample volume. None of the found elements find their origin in the Fused Silica, since no moulding experiments were performed up to this point. Summarising, the process step that induces the highest mechanical load during surface finishing, that is, grinding, does not cause subsurface damage or other structural changes in the microstructure of the Glassy Carbon samples.
Preliminary Polishing of Glassy Carbon Moulding Tools - Analysis of Material Removal

In order to manufacture optical surfaces, polishing provides the following features: low surface roughness, retention of shape accuracy and reduction of subsurface damage [25]. Since the grinding step did not lead to a damaged subsurface region and shape accuracy is not in the focus of this study, no minimum height reduction is mandatory. Nevertheless, it is very important to observe the material removal performance of different abrasives on Glassy Carbon. Polishing is known to be a very sensitive process with a comparatively low level of process understanding; complex chemical interactions prevent analytical predictions of the machining result. Under constant process variables (see Figure 5; rotation speed Ω = ω = 150 min⁻¹, contrary rotation; contact pressure p = 75 kPa), different polishing abrasives were tested. The results of the height reduction dz over a time increment of dt = 1 (polishing) min are displayed in Figure 8a. Based on the Preston hypothesis (1) and considerations of the relative velocity (2) and (3), the empirical constant K can be derived (Figure 8b). Clearly discernible differences in the material removal behaviour can be seen. With the exception of the 6 µm diamond suspension (6 µm D.), the removal rate is directly proportional to the grain size of the utilised abrasive. A possible explanation for the low removal of 6 µm D. could be "rolling" of the grain over the Glassy Carbon surface, combined with micro ploughing instead of micro chipping. The incremental removal of 0.05 µm OPS was expected, since it has been widely used in the microelectronics industry for the last finishing step.
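Equations (1)-(3) themselves are not reproduced in this extract; as a sketch of the standard Preston form they refer to (with the locally varying relative velocity replaced by its mean over the contact zone, an assumption of this sketch rather than necessarily the authors' exact formulation):

```latex
% Preston hypothesis: removal rate proportional to contact pressure and relative velocity
\frac{\mathrm{d}z}{\mathrm{d}t} = K \, p \, v_{\mathrm{rel}}
\quad\Rightarrow\quad
K \approx \frac{\Delta z}{\Delta t \; p \; \bar{v}_{\mathrm{rel}}}
% with p = 75 kPa and \Delta t = 1 min per increment, as stated above
```

With the measured height reduction Δz per 1 min increment at p = 75 kPa, this is presumably how the K values in Figure 8b were obtained.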
Polishing of Glassy Carbon Moulding Tools - Analysis of Achievable Surface Roughness

For the manufacturing of optical surfaces, an adequately low surface roughness is crucial; arithmetic mean values Ra of below 5 nm are mandatory. Based on the findings in Section 3.2, the different polishing abrasives are evaluated regarding their ability to achieve high surface quality and integrity. The process parameters were kept constant, and the roughness measurement was carried out by White Light Interferometry (WLI) at five points of the Glassy Carbon surface. Prior to polishing (t = 0), fine grinding down to an Ra value of 8 to 9 nm was applied. The results are displayed in Figure 9 and in the following enumeration:
• 1 µm cubic boron nitride grain suspension (1 µm cBN): In the beginning, the polishing led to a slight improvement of the surface roughness with a comparatively high spread of the measurement data. The spread increased further after five minutes of machining. At the same time, "orange peel", a typical polishing damage, was visible [26]. In order to avoid this effect, the contact pressure p was reduced from 75 kPa to 60 kPa. In the following, the orange peel effect was reduced and low Ra values were achieved. Nevertheless, this abrasive does not lead to a surface that would be accepted in the optical industry.
• 0.05 µm diamond suspension (0.05 µm D.): A decrease in diamond grain size was suspected to lead to even lower roughnesses. The outcome could not prove this assumption. Instead, a significant "over-polishing" effect was observed, that is, beginning from minute 7, the spread of the measurement data rose. Even neglecting this circumstance, 0.05 µm D. did not achieve better results than 0.25 µm D.
• 0.05 µm colloidal silicon dioxide suspension (oxide polishing suspension, 0.05 µm OPS): OPS performed very well, leading to both low roughness and low spread. Thus, this polishing agent is able to fulfil the demands of the optics industry.
With these results, it is possible to lay out process chains for the manufacturing of Glassy Carbon moulding tools for the replication of Fused Silica.
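For reference, the arithmetic mean roughness Ra quoted above follows the standard profile definition (a standard formula, not specific to this study), with L the evaluation length and z(x) the height deviation from the mean line:

```latex
R_{a} = \frac{1}{L}\int_{0}^{L} \left| z(x) \right| \,\mathrm{d}x
\;\approx\; \frac{1}{n}\sum_{i=1}^{n} \left| z_{i} \right|
% discrete form for sampled WLI data
```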
Topology Generation on Glassy Carbon Moulding Tools - Analysis of Topology

In order to study the influence of the Glassy Carbon surface finishing on its wear behaviour during Fused Silica moulding, the corresponding surfaces need to be produced. For these investigations, three different surface finishings that led to different surface topologies were the centre of interest:
• Best possible automatically producible Glassy Carbon surface (Case study A). For this case, a surface finish with 0.25 µm D. was chosen (see results in Section 3.3 and Figure 9).
• Glassy Carbon surface with polishing damages (Case study B). The orange peel left by the 1 µm cBN machining will not lead to error-free optics, but it is assumed that this topology will lead to alterations in wear behaviour.
• Glassy Carbon surface from a specialist in mould manufacturing (Reference, Case study 0). Aixtooling GmbH (Aachen, Germany), a specialist for the industrial fabrication of moulds for PGM, took over the finishing step of the Glassy Carbon tools as a reference sample. The exact machining procedure is confidential.
The process chains of mould manufacturing for the case studies are displayed in Table 2. Process step 1 is a grinding procedure (water as cooling liquid), while steps 2 and 3 are carried out by polishing with their dedicated polishing agents. After completing the process chains, the average of five arithmetic mean roughness Ra values of Case A reached 1.7 nm. The roughness value of the samples for Case B was considerably higher (Ra = 4.5 nm). In contrast to the other two process chains, Case 0 contains no final polishing step performed in-house (Table 2): the samples for Case 0 were pre-polished for two minutes with 1 µm diamond suspension and subsequently submitted to the company Aixtooling for final manual polishing. Prior to this, the average Ra after pre-polishing was 3 nm; after the final polishing by Aixtooling, the roughness dropped to Ra = 2 nm.
Since special emphasis was placed on the desired diversity of the characteristic global topology during the manufacture of the Glassy Carbon mould pairs, this was checked by means of large-area WLI stitchings, performed on a Bruker Contour GT (Bruker Corp., Billerica, MA, USA). The field examined by the stitching method was circular and had a diameter of 10 mm, corresponding to the contact surface of the Fused Silica glass preforms. This macroscopic view of the three forming tool pairs showed that the surface topology of Case B, which was finally polished with the 1 µm cBN suspension, differed strongly from the other two sample pairs. The forming tool pairs produced with the 1 µm cBN suspension showed crater-like defects on the surface. Investigations of the craters by white light images showed that the craters had large differences in diameter, but the crater depth, with few exceptions, was in the range of about 20-60 nm. Looking at these defects over the entire surface of the sample, an overall picture similar to orange peel is obtained [26]. The crater-like defects can be seen with the naked eye on very close inspection.
Of the two remaining sample pairs, that finished with the 0.25 µm diamond suspension and that processed by the company Aixtooling GmbH both show a globally homogeneous surface topology that is not disturbed by defects in the stitching images. A comparison of these surfaces is shown in Figure 10 (upper row), which compares the stitchings of the sample pairs of Case A, B and 0. The measurements serve exclusively to illustrate differences in the global characteristic surface topology. Since the sample pairs of Case A and Case 0 do not show any characteristic differences in the stitching images, microstructural differences resulting from the different preparation procedures have to be assumed. However, these cannot be resolved by the coarse measurement at 2.5-fold WLI magnification. The examination of the two sample pairs at higher magnification (objective with 10-fold WLI magnification) confirmed the assumption (Figure 10, lower row). The Case A samples, machine-polished with the 0.25 µm diamond suspension, show a highly directional microsection structure characterised by a superimposition of finest scratches. In contrast, the tools of Case 0, finished manually by Aixtooling, have a rather granular, isotropic surface. Hence, all three sample pairs showed differences in their topology and were suitable as input material for the investigation of the influence of the finishing of Glassy Carbon forming tools on wear behaviour during Fused Silica moulding.

Moulding of Fused Silica - Analysis of Wear

This section deals with the main findings of this publication. The Fused Silica moulding was carried out on a Toshiba GMP 207HV, while the geometry of the moulds (Glassy Carbon, Φ 34 mm) and of the glass preform (Fused Silica, Φ 10 mm, 5 mm height) as well as the process parameters were equal to those of the investigations of Dukwen et al. [27] (i.e., moulding temperature 1360 °C, 2 kN moulding force, 2 min hold time).
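For orientation, the nominal contact pressure implied by these parameters can be estimated (a rough estimate assuming the 2 kN force initially acts on the full Φ 10 mm preform cross-section; the effective pressure decreases as the preform spreads during moulding):

```latex
p_{0} \approx \frac{F}{A}
= \frac{2000\,\mathrm{N}}{\pi\,(5\times10^{-3}\,\mathrm{m})^{2}}
\approx \frac{2000\,\mathrm{N}}{7.85\times10^{-5}\,\mathrm{m}^{2}}
\approx 25\,\mathrm{MPa}
```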
The evolution of the wear phenomena and their dependencies on the surface finishing were observed with several measurement technologies: Light Microscopy, Atomic Force Microscopy (AFM) and Scanning Electron Microscopy (SEM). The following structure follows this order.

Light Microscopy

An overview evaluation of the light microscopic images of the Glassy Carbon surfaces showed that the forming tool pairs of all cases exhibit different degrees of wear. This does not only refer to the differently machined tools; differences within the cases, that is, between identically machined tools, are also recognisable. The comparison of a mould tool set (Case A) shows differences between the upper and the lower mould: in comparison with the upper mould, the lower mould shows considerably stronger wear phenomena, for example, load traces. Figure 11 shows a comparison of the microscopic images of the moulds after 20-fold moulding of Fused Silica. The load traces occur mainly within the glass contact area in the form of scratches, grooves or streaks. These formations can spread up to a few millimetres; they can also be seen without a microscope. In most cases these surface defects are not an isolated phenomenon; rather, an accumulation of defects in an agglomeration can be observed. Figure 11b clearly shows such a formation. One of the described formations is located on the left edge of the glass contact area, which can be easily recognised by a slight circle or wreath drawn on the sample that also shows minor discoloration effects. If a single scratch is considered, it is noticeable that the defect often has an oscillating shape (vibration lines), possibly related to the servo that provides the moulding force. Straight-line formations are less common. All cases show more or less the same behaviour, especially in terms of the differences between the upper and lower tool. It is remarkable that the lower tool of Case B (cBN treatment) exhibits a much wider and easily visible wreath at the edge of the glass contact area.

Atomic Force Microscopy

At a more detailed level, the wear phenomena mentioned above can be substantiated by means of AFM measurements. The measurements were carried out for the initial state of the tool surface and after 10, 20 and 30 moulding cycles, respectively. In that way, the evolution of wear was observable. One of the first findings was the relatively rapid increase in defects after ca. 10 moulding cycles. While up to that point most of the surface remained unaltered, the AFM plots of 20-fold moulded surfaces showed breakouts, build-ups (adhesions) and further scratches for all cases examined. To illustrate these results, AFM images of Case 0 are displayed in Figure 12. Both images show a similar measuring range close to the centre point of the moulding tool, that is, they were placed in the glass contact area. First of all, it should be mentioned that Figure 12a (mould after 10 moulding cycles) does not show any significant differences to the unpressed state. However, Figure 12b shows a massive accumulation of small breakouts, only a few 100 nm in size, from the Glassy Carbon surface. The depth of some of the grooves caused by the manufacturing process has also increased. The cases show different wear behaviour, but the fact that the lower tool degrades more significantly than the upper tool validates the findings of the Light Microscopy. Figure 13 shows an overview of the evolution of the wear phenomena close to the tool centre point for the three forming tools in comparison to the initial surface state. The lower Case 0 Glassy Carbon mould, which already showed signs of surface wear after 20 moulding cycles (Figure 12), now shows a further form of defect (Figure 13c, bottom row): in addition to the already existing breakouts, there were also sporadic adhesions. The adhesions have a height of up to 200 nm. Defects on the surface of the Case B sample are now also recognisable (Figure 13b, bottom row). These consist of numerous, homogeneously distributed, elongated breakouts from the Glassy Carbon surface; there are no adhesions as on the lower tool sample of Case 0. The imperfections are up to several micrometres long and have a depth of 100-200 nm. The last measuring field, which is located on the lower sample of Case A, shows the strongest signs of wear (Figure 13a, bottom row). Here, the largest adhesions (up to 0.6 µm) are recognisable.
Breakouts of this magnitude are also present. The adhesions reach heights of up to 200 nm, and the imperfections show depths of the same order. Figure 14 shows a measurement field of the lower Case A mould in a three-dimensional view; the aspect ratio of the axes in this view is adapted to reality. The detail section shows an adhered particle. In this illustration, the connection of the particle with the Glassy Carbon surface is clearly visible. This suggests an adhesive bond of Fused Silica, although the atomic composition cannot be assessed by means of AFM. A dimensional measurement with the software "Gwyddion" showed a maximum height of 200 nm and a maximum diagonal of 600 nm (elliptical but nearly circular geometry of the adhesion). The position of several adhesions cannot be directly assigned to previous breakouts or scratches from the finishing procedure. As the Light Microscopy examinations already suggested, the AFM images showed differences in the wear phenomena of the upper and lower mould tools after moulding of Fused Silica. While the upper moulds still showed only minor signs of wear, the state of wear on the lower moulds was significantly higher. This effect was visible for all surface topologies, that is, for all case studies. The comparison of the AFM measurements of the upper forming tool of Case A with the surface development of the dedicated lower tool illustrates the different wear behaviour (Figure 15). The upper mould remains largely free of signs of wear; no breakouts or alterations of the surface topology induced by the surface finishing can be observed. The lower mould exhibits significant wear as mentioned before (Figure 13, bottom row).
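The dimensional evaluation above was performed in Gwyddion. Purely as an illustration of the kind of measurement involved (not the authors' workflow; the array layout, units and the 50 nm threshold are assumptions of this sketch), comparable height and lateral-extent figures could be extracted from an exported AFM height map as follows:

```python
import numpy as np

def particle_metrics(height_nm: np.ndarray, pixel_size_nm: float, threshold_nm: float = 50.0):
    """Estimate the maximum height and maximum diagonal extent of raised
    features (e.g. an adhered particle) in an AFM height map.

    height_nm     : 2D array of heights in nm, background levelled to ~0
    pixel_size_nm : lateral sampling step in nm
    threshold_nm  : heights above this value are treated as part of a feature
    """
    mask = height_nm > threshold_nm                # pixels belonging to raised features
    if not mask.any():
        return 0.0, 0.0
    max_height = float(height_nm[mask].max())      # e.g. ~200 nm for the adhesion in Figure 14
    rows, cols = np.nonzero(mask)
    # crude lateral extent: diagonal of the bounding box around all masked pixels
    dy = (rows.max() - rows.min() + 1) * pixel_size_nm
    dx = (cols.max() - cols.min() + 1) * pixel_size_nm
    max_diagonal = float(np.hypot(dx, dy))         # e.g. ~600 nm reported above
    return max_height, max_diagonal
```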
Scanning Electron Microscope

In addition, the Glassy Carbon forming tools (with the exception of the Case 0 sample pair) were examined by SEM (scanning electron microscopy) after every 20 pressing cycles. Since the SEM investigations conducted by Dukwen et al. [19] showed damage of the Glassy Carbon surface of up to 0.5 µm after 20 pressings, the same kind of images was acquired for reasons of direct comparability. In addition, EDX analyses were performed to determine the material composition of defects or anomalies that were not examinable by means of Light Microscopy or AFM. The SEM images confirmed the AFM measurement results: after 20 moulding cycles, none of the Glassy Carbon forming tools showed the suspected, numerous punctiform adhesions [19]. This could be an indication of the influence of the surface finish on the wear behaviour, since the samples of Dukwen et al. reached a minimum roughness of Ra ≈ 5 nm, while the tools used for this publication partially reached Ra values below 2 nm.

Figure 16 shows a SEM image of the glass contact area of the lower Case B moulding tool after 20-fold moulding of Fused Silica glass. The grooves on the surface caused by the manufacturing process are clearly visible. Furthermore, slight contamination of the sample surface is visible, but this does not correspond to the suspected adhesions (Figure 16, bottom left corner). An EDX element analysis was performed in order to find the origin of the contaminations. The measurement revealed that the light-coloured contaminants present on the Glassy Carbon consist of pure carbon (C); the spectrum of elements showed virtually no traces of glass elements (e.g., silicon (Si)). Figure 17 shows the evaluation of such a particle with the associated EDX plot. A possible explanation for these imperfections could be graphite contamination from the graphite mould dies, which embrace the Glassy Carbon moulds. Thermal expansion effects during heating and cooling of the mould system could have led to material migration into the gap between the graphite and Glassy Carbon components. Other particles investigated showed slight material impurities. One of these was the mineral wollastonite (CaSiO3). Even very small amounts of silicon carbide were found; however, these shares were insignificantly small. Furthermore, the size of the particles varied greatly: there were particles that were only fractions of a micrometre in size, but also particles up to 30 µm and occasionally larger, even outside of the glass contact area.

Considerations of the transition area, that is, the area that comes into glass contact only as the cylindrical glass preform is compressed, showed no special features. The image of the surface was similar to that of the glass contact area. Figure 18 shows the transition area; the ring- or wreath-shaped circle is clearly visible. As already mentioned above, anomalies (specific individual defects) were documented. These defects were found on samples prepared by cBN polishing (Case B). In most cases, these were breakouts from the Glassy Carbon surface in the annular transition zone (Figure 18). Traces of silicon (Si) were found in these defects as well; these Fused Silica agglomerations settled in or close to breakouts. Figure 19 shows such an isolated defect.
Further SEM investigations after 30-fold Fused Silica moulding confirmed the wear patterns seen in the AFM images for this moulding interval (Figure 13). Figure 20 shows SEM images of Case A and B at a position near the centre. The signs of wear visible in the AFM measurements can clearly be retrieved in the SEM images; the distribution, size and appearance are similar to those of Figure 13. On closer inspection of the image, it can be seen that the adhesions are preferentially, but not exclusively, located in breakouts or on peaks of the Glassy Carbon surface. An EDX element analysis showed that the punctiform wear phenomena, shown brightly in Figure 20a, are composed of Fused Silica. As stated before, no glass adhesion could be found on the cBN-treated moulds (Case B).

Discussion

The publications of Dukwen et al. [19,27] form the foundation of the approach to put Fused Silica moulding into an industrial context. This research dealt with preliminary investigations of the wear behaviour in Fused Silica forming by Precision Glass Moulding. In particular, the wear behaviour of the Glassy Carbon forming tools in relation to the process parameters used in the forming process and the tribological conditions was considered. Differences in wear behaviour between the upper and lower moulds had already become apparent at this point: the investigation pointed to higher wear of the lower moulds. The investigations carried out in this publication make use of these previous findings and extend them with a more in-depth material qualification and an extended measurement effort in order to gain more data on wear evolution. Dukwen et al. explain the increased wear phenomena occurring on the lower forming tools by so-called static adhesion [27]: the Fused Silica glass samples are already in contact with the lower Glassy Carbon mould during the heating and soaking phase, leading to temporary bonds between the two materials. There is no contact between the upper mould and the Fused Silica preform during the heating phase; hence, no static adhesion is formed on the upper mould. In the moulding phase, these static bonds between the glass and the Glassy Carbon partly remain and lead to cohesion fractures within the glass volume. Figure 21 illustrates this process schematically and extends the model of static and dynamic adhesion.
The cohesion fracture that leads to glass adhesion is assumed to take place due to the change in shape of the glass in the horizontal direction and the resulting shear loads, combined with higher stresses in the area of imperfections of the opposing Glassy Carbon surface, implying a significant influence of its surface finishing on the wear behaviour. The stronger bonds between the Fused Silica and the Glassy Carbon cause the glass to break out of the sample and adhere to the Glassy Carbon as adhesive particles (Figure 21b). This study revealed that the glass adhesions are not only found on asperities or breakouts of the Glassy Carbon surface (Figure 21c), that is, a high mechanical load is not the sole cause of adhesive wear. Furthermore, the fracture resistance of Fused Silica is much higher than that of other glass types (especially at the low strain rates applied in this study) [28]. This implies that there might be several reasons for Fused Silica adhesions. Since the differences between upper and lower tool wear strongly depend on the glass contact time, chemical interactions must also be taken into consideration. Besides that, the observations showed that existing imperfections (e.g., breakouts, broken-out particles, glass adhesions and glass splinters without significant bonding to a surface) reinforce the degradation during the following moulding cycles (Figure 21c). Since all of these imperfections are expected to have edges, they act like abrasive particles that scratch the surface of the Fused Silica sample when the glass expands, and vice versa. From this point of view, the state of the surface finish can be seen as the initial state of a chaotic system, since the formation of breakouts and particles affects all following states. In order to put the findings into functional relationships, the main influence factors on the wear phenomena (W: wear; W_O: wear in the form of breakouts; W_A: wear in the form of adhesions) can be expressed as follows: generally, the wear can be expressed as the sum of breakout and adhesion effects (Equation (4)). The main influence factors of both phenomena are the contact pressure σ, the moulding temperature T, the hold time t (including the heating and soaking sequence) and the surface topology Γ. According to theoretical considerations, specific adjustments of these process parameters would lead to a decrease in wear (Equation (6)).
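Equations (4)-(6) themselves are not shown in this extract; a plausible reading consistent with the definitions above, given only as a hedged sketch and not as the authors' exact notation, is:

```latex
W = W_{O} + W_{A}                                                         % (4): total wear as sum of breakouts and adhesions
W_{O} = f_{O}(\sigma, T, t, \Gamma), \quad W_{A} = f_{A}(\sigma, T, t, \Gamma)   % (5): assumed dependence on the stated influence factors
\lim_{\sigma,\,T,\,t \,\to\, \mathrm{min}} W \to \mathrm{min}             % (6): idealised limit for reduced process loads
```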
Unfortunately, these trivial relations cannot be realised in a real production process, for the following reasons [28,29]:
• Decreasing the moulding force F or the contact pressure σ would lead to less mechanical load, but the desired glass flow comes to a halt.
• Decreasing the moulding temperature would lead to less activation of wear processes but also to a higher viscosity of the glass, which would result in a higher mechanical load and the induction of stresses and fractures (the moulding temperature of 1360 °C corresponds to a viscosity of η = 10^10 dPa·s (Figure 3, right), which already marks a process limit).
• Decreasing the contact time (especially during the heating and soaking phase) would overcome the issue of temporary bonding but is not realisable in the existing machine set-up.
In summary, the parameters mentioned above cannot be seen as independent variables, since they are interconnected through the viscoelastic behaviour of the glass. By a variation of the process parameters, only incremental progress in terms of wear reduction can be expected. Nevertheless, this study put a strong focus on the influence of the surface finishing of the Glassy Carbon tools, leading to different surface topologies (Γ). It was assumed that the Glassy Carbon moulding tools of Case A, which had been finished with the 0.25 µm diamond suspension and had the best quantitative surface characteristics in terms of Ra value, would show the fewest wear phenomena after the moulding tests. This assumption was based on the general consideration that the relatively best surface topology would have the least pronounced micro-contact sites, which should result in less shear stress on both contact partners. Likewise, the very smooth surface texture should result in fewer breakouts from the Glassy Carbon surface due to the manufacturing process, which was also expected to reduce wear. In contrast to the very homogeneous surface of the Case A samples, which was characterised by a superimposition of very fine scratches, a slightly worse wear behaviour was expected from the sample surface prepared by the company Aixtooling, a specialist in tool manufacturing for PGM (Case 0).
Since these samples showed a rather coarse-grained, isotropic microstructure on the WLI images, which nevertheless exhibited good roughness values, a slightly higher wear was assumed. Furthermore, it was assumed that the coarser microstructure would provide more surface area for the viscoelastic Fused Silica flow and thus lead to greater shear stress in the contact zone. The samples of Case B, which had the worst roughness values and were unsuitable for the production of optics due to the craters on the surface, showed, contrary to expectations, the least signs of wear after the forming tests: only fine, elongated grooves were visible. It was suspected that the introduction of the craters into the surface would lead to varying contact stresses on the material, which could have had an effect on the wear behaviour. It is possible that the wear behaviour, which was reflected by the many small elongated grooves, was related to such variations of the contact stresses; however, this connection is only speculative. As already mentioned above, the best wear behaviour was expected from the Glassy Carbon samples processed with the 0.25 µm diamond suspension. However, the observation was different: the AFM and SEM images showed build-ups and adhesions of up to 1 µm on the Glassy Carbon surface, and the largest breakouts of all three measured surfaces were also documented on the lower tool of Case A. The pair of samples that had been finished by Aixtooling (Case 0) unexpectedly showed signs of wear already after 20 moulding cycles of Fused Silica, in contrast to the other pairs of moulds. However, these phenomena were limited to small breakouts from the surface that were only a few 100 nm in size. After 30-fold forming of Fused Silica glass, the breakouts increased and were similar to those of sample C6. In addition, there were slight adhesions, which were also only a few 100 nm in size.

In summary, the hypothesised influence of the surface finishing of Glassy Carbon moulding tools on the wear behaviour during Fused Silica moulding was confirmed. Nevertheless, a clear correlation with the mechanisms of wear evolution remains unresolved, since the expectations for the different surface topologies regarding their wear behaviour were proven incorrect. Therefore, the target of this publication, the derivation of measures to optimise the tools' service lifetime, was not achieved due to a lack of deductive explanation approaches. Before negating the general assumption of adhesion evolution induced by shear stresses and cohesion fracture, a possible drawback of the experimental approach should be taken into account: the surface assessment by means of simple profilometric values. Since profilometric values cannot distinguish between imperfections on different dimensional scales (i.e., a surface with many fine scratches can lead to the same Ra value as an otherwise flawless surface with a single crater), it could turn out that a clear differentiation between "good" and "bad" surfaces (e.g., Case A and Case B, respectively) needs to be rethought. A possible solution in terms of a more sophisticated surface qualification could be an analysis by means of the power spectral density (PSD), based on a Fourier transformation of the surface profile [30] (Figure 22a). A clear distinction between scratches, pits, breakouts, adhesions, agglomerations and so forth could then become possible. A PSD reprocessing of the collected raw data will be conducted in the near future in order to find more reliable correlations between surface finish and degradation of the tools.
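As a sketch of the proposed PSD evaluation (assumed here to be a one-dimensional periodogram of a WLI or AFM line profile; the function and variable names are illustrative and not the authors' implementation):

```python
import numpy as np
from scipy.signal import periodogram

def surface_psd(profile_nm: np.ndarray, dx_um: float):
    """One-dimensional power spectral density of a surface height profile.

    profile_nm : surface heights along a line scan (nm)
    dx_um      : lateral sampling step (um)
    Returns spatial frequencies (1/um) and the PSD (nm^2 * um).
    """
    detrended = profile_nm - profile_nm.mean()            # remove mean height (ideally also tilt)
    freqs, psd = periodogram(detrended, fs=1.0 / dx_um, window="hann")
    return freqs, psd
```

Different defect classes (fine scratches versus isolated craters) would then separate by the spatial frequency bands in which they contribute power, which a single Ra value cannot express.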
Furthermore, an FEM simulation could provide further hints on the contact situation [31]. In terms of convergence towards the real problem, extracts of the real surface topologies should make up the tool interface in the simulation model (Figure 22b).

Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Iron-dependent epigenetic modulation promotes pathogenic T cell differentiation in lupus

The trace element iron affects immune responses and vaccination, but knowledge of its role in autoimmune diseases is limited. Expansion of pathogenic T cells, especially T follicular helper (Tfh) cells, has great significance to systemic lupus erythematosus (SLE) pathogenesis. Here, we show an important role of iron in the regulation of pathogenic T cell differentiation in SLE. We found that iron overload promoted Tfh cell expansion, proinflammatory cytokine secretion, and autoantibody production in lupus-prone mice. Mice treated with a high-iron diet exhibited an increased proportion of Tfh cells and antigen-specific GC responses. Iron supplementation contributed to Tfh cell differentiation; in contrast, iron chelation inhibited Tfh cell differentiation. We demonstrated that the miR-21/BDH2 axis drove iron accumulation during Tfh cell differentiation and further promoted Fe2+-dependent TET enzyme activity and BCL6 gene demethylation. Thus, maintaining iron homeostasis might be critical for eliminating pathogenic Th cells and might help improve the management of patients with SLE.

Introduction

Systemic lupus erythematosus (SLE) is a heterogeneous autoimmune disease characterized by aberrant differentiation of pathogenic T cells and overproduction of autoantibodies (1). T follicular helper (Tfh) cells are a specific subset of CD4+ T helper cells that are mainly localized in the germinal centers (GCs) to help B cell maturation and antibody production (2). The aberrant expansion of Tfh cells is closely related to the progression of autoimmune diseases (3). Indeed, circulating Tfh cells are increased in the peripheral blood of patients with SLE and correlate closely with disease activity (4). Aberrant Tfh cell differentiation triggers lupus-like autoimmunity in mice (5,6), while inhibiting Tfh cell accumulation reduces autoimmune responses and relieves the disease progression of murine lupus (7). These observations indicate a close link between Tfh cells and SLE pathogenesis, yet the precise mechanism causing Tfh cell expansion in SLE remains unclear. Iron is an essential trace element that is widely involved in biological processes (8). Iron homeostasis is tightly regulated, and both iron deficiency and iron overload can cause various pathological conditions (9,10). Recently, substantial advances have been achieved in understanding the role of iron in modulating immune cell functions and related human diseases (11). Intracellular iron drives pathogenic Th cell differentiation by promoting the production of the proinflammatory cytokine GM-CSF in neuroinflammatory diseases (12). Iron deficiency impairs B cell proliferation and antibody responses, suggesting a role of iron in regulating humoral immunity and vaccination (13). Knowledge about the role of iron in SLE development remains limited, and studies have reported variable findings. Some studies reported that a sufficient iron status plays a protective role against SLE (14,15). On the contrary, a case report showed that iron dextran supplementation induced SLE-like symptoms in a childbearing-age woman with iron deficiency anemia (16). In lupus mice, renal iron accumulation occurs in lupus nephritis, and iron chelation treatment delays the onset of albuminuria (17,18).
Furthermore, a recent study reported that hepcidin therapy reduces iron accumulation in the kidney and alleviates the disease progression of lupus nephritis in MRL/lpr lupus-prone mice (19), suggesting that modulating iron homeostasis may be a promising therapeutic strategy for lupus nephritis. TET enzymes oxidize 5-methylcytosine to 5-hydroxymethylcytosine in nucleic acids in the presence of Fe2+ and 2-oxoglutarate, driving DNA demethylation/hydroxymethylation and gene transcription (20). Modulating intracellular ferrous iron enhances the activity of TET enzymes and alters DNA methylation to regulate gene expression (21, 22). DNA demethylation/hydroxymethylation plays an important role in modulating CD4+ T cell differentiation (23, 24). We have reported that intracellular iron accumulation drives DNA hydroxymethylation and demethylation (25). Mice fed a normal iron diet (ND, 50 mg/kg) served as controls. HID reduced the body weight of MRL/lpr mice in the last 4 weeks (Supplemental Figure 3A). Serum iron levels were increased in 23-week-old HID-treated MRL/lpr mice compared with the ND group (Supplemental Figure 3B). After 20 weeks of HID treatment, the proportion and number of CD4+ T cells were increased significantly in spleens but not in dLNs, compared with ND-fed mice (Supplemental Figure 3, C and D). On the contrary, the proportion and number of CD8+ T cells were significantly reduced in HID-fed mice (Supplemental Figure 3D). These results are consistent with previous studies showing that the percentage of CD8+ T cells is inversely correlated with iron storage (31). In addition, HID increased the proportion and number of F4/80+CD11b+ macrophages (Supplemental Figure 3E) but did not affect CD3+CD4−CD8− double-negative T cells (Supplemental Figure 3D), DCs (Supplemental Figure 3F), or B220+ B cells (Supplemental Figure 3G). CD4+ T cells in HID-treated mice showed increased proliferation (Figure 2A) as well as an obvious expansion of CD44+CD62L− effector memory (EM) T cells (Figure 2B), compared with ND-treated controls. We sought to determine which effector CD4+ T cell subsets were affected by HID. The results showed that HID significantly increased the percentages and numbers of Tfh cells and GC B cells in MRL/lpr mice (Figure 2, C and D).
The size and number of GCs were also significantly increased in MRL/lpr mice fed with HID (Supplemental Figure 3H). Moreover, HID reduced the Tfr/Tfh cell ratio in the spleens of MRL/lpr mice without a significant difference in Tfr cell number ( Figure 2E). The percentage and number of Tregs were not affected by HID ( Figure 2F). Effector CD4 + T cells exert control on immune responses by cytokine secretion (32). We detected significant increases in the proportions and numbers of IFN-ฮณ + CD4 + T cells and IL-17A + CD4 + T cells in dLNs and spleens of HID-fed mice (Figure 2, G and H). However, we did not observe significant change in IL-4 expression in CD4 + T cells of HID-fed mice compared with the ND group ( Figure 2I and Supplemental Figure 4, A and C). Although the proportion of IL-21 + CD4 + T cells was slightly elevated in the HID group, without significant difference compared with the ND group, the number of IL-21 + CD4 + T cells in dLNs and the mRNA expression of IL21 in splenic CD4 + T cells were increased significantly in HID-fed mice ( Figure 2J and Supplemental Figure 4, B and C). HID did not affect the percentage of B220 -CD138 + plasma cells, but the number of plasma cells was elevated in the spleens of HID-fed mice (Supplemental Figure 3I). Furthermore, HID elevated the serum levels of anti-dsDNA IgG in MRL/lpr mice ( Figure 2K). In the last 3 weeks of treatment, HID significantly increased the urine protein levels in MRL/lpr mice ( Figure 2L). The histological analysis also exhibited more serious injury and more T cell infiltration in the kidneys of HID-treated mice compared with the ND group ( Figure 2M and Supplemental Figure 3J). These results suggest that HID promotes the differentiation of pathogenic Th cells, as well as CD4 + T cell proliferation and effector CD4 + T cell expansion, contributing to autoimmune responses and disease progression in lupus mice. HID promotes exogenous antigen-induced GC response. To address the role of iron in T cell biology, we fed 3-week-old female C57BL/6 (B6) mice with HID for 7 weeks. HID did not alter the proportions and numbers of total CD4 + and CD8 + T cells in B6 ylation, promoting gene transcription and CD4 + T cell overactivation in lupus (25). So far, the role of TET proteins in Tfh cell differentiation and function remains unclear. Here, we show that a high-iron diet (HID) favored pathogenic T cell differentiation and autoantibody production, accelerating disease progression in MRL/lpr lupus-prone mice. We identified iron accumulation in lupus CD4 + T cells, which was related to the increase in Tfh cells in SLE. Mechanistically, miR-21 overexpression repressed 3-hydroxybutyrate dehydrogenase-2 (BDH2) to promote iron accumulation and enhanced the activity of Fe 2+ -dependent TET enzymes in Tfh cells, leading to BCL6 gene hydroxymethylation and Tfh cell differentiation. Overall, our data show that iron overload favors pathogenic T cell differentiation and autoantibody production, providing strong evidence for the important role of iron homeostasis in lupus pathogenesis. Results Increased intracellular iron in lupus CD4 + T cells. First, we sought to determine the iron levels in lupus CD4 + T cells. Ferrous iron represents the soluble and bioavailable form of iron involved in cell metabolism (8). Therefore, we used a cell-permeable probe, Ferro-Orange (26,27), to examine the level of free Fe 2+ in patients with SLE. 
We observed that the levels of Fe 2+ were highly increased in SLE CD4 + T cells ( Figure 1A), especially in CD4 + T cells of active patients with SLE ( Figure 1B). Furthermore, the mRNA levels of FTH and FTL, encoding the heavy chain and light chain of human ferritin, respectively (28), and the protein levels of ferritin were markedly increased in lupus CD4 + T cells (Figure 1, C-E). To confirm whether the increase of iron is a feature of T cell activation in SLE, we compared the levels of iron in naive CD4 + T cells and effector T cell subsets between healthy donors and patients with SLE by flow cytometry. We found that not only CD4 + effector T cell subsets, including Th1, Th2, Th17, and Tfh cells, but also naive and memory CD4 + T cells, have significantly increased levels of Fe 2+ in patients with SLE compared with healthy donors (Supplemental Figure 1, A and B; supplemental material available online with this article; https://doi.org/10.1172/JCI152345DS1). Although the level of Fe 2+ in Tfh cells was not higher than that in other CD4 + effector T cells in patients with SLE (Supplemental Figure 1C), the percentage of Tfh cells in patients with SLE was positively correlated with the Fe 2+ level in CD4 + T cells ( Figure 1F). In addition, we also observed that Tfh cells in draining lymph nodes (dLNs) and spleens of lupusprone mice have higher levels of Fe 2+ compared with the activated non-Tfh cells (Supplemental Figure 2, A and B). Interestingly, we also detected the Fe 2+ level in peripheral helper T (Tph) cells, which share many B helper-associated functions with Tfh cells and induce B cell differentiation toward antibody-producing cells (29,30). Our results showed that both the percentage of Tph cells and Fe 2+ levels were increased in patients with SLE compared with healthy donors, but there was no significant correlation between the Fe 2+ level in Tph cells and the percentage of Tph cells (Supplemental Figure 1, D-F). HID contributes to pathogenic T cell differentiation in lupus mice. To investigate whether iron overload affects pathogenic T cell differentiation and the progression of lupus, we fed 3-week-old female MRL/lpr lupus-prone mice with a HID (500 mg/kg) for 20 weeks; age-matched female MRL/lpr mice fed with a normal after 7 weeks of HID treatment ( Figure 3D). However, there was no significant difference in expression of Tfrc, encoding transferrin responsible for iron uptake, between the HID and ND group (Figure 3D). To clarify how HID affected intracellular iron in CD4 + T cells, we detected the expression of Fth, Tfrc, and Slc40a1 (encoding the ferroportin responsible for iron export) during the process of HID treatment. The results showed that the expression of Tfrc, Fth and Slc40a1 genes was significantly increased after 2 weeks of HID feeding (Supplemental Figure 6). Then, the expression levels of Tfrc and Slc40a1 were gradually reduced, and Fth expression still remained at a high level (Supplemental Figure 6). These results suggest that T cells maintain intracellular iron homeostasis in the iron-sufficient environment by promoting iron storage and dynamically regulating the expression of iron transport-related genes. After 14 days of SRBC immunization, HID-treated mice showed a significant increase in percentage and numbers of Tfh cells ( Figure 3E). No significant differences were observed in the Tfr/Tfh ratio ( Figure 3E). HID also elevated the frequency and number of B220 + GL-7 + Fas + GC B cells in the spleen and dLNs of HID-treated mice ( Figure 3F). 
However, the proportion and number of B220 -CD138 + plasma cells were not affected by HID ( Figure 3G). Furthermore, the proportions and numbers of IFN-ฮณ + , IL-17A + , IL-21 + CD4 + T cells were significantly increased in the spleens of HID-treated mice immunized with SBRCs, while the frequency and number of IL-4-secreting CD4 + T cells was not affected by HID (Figure 3, H-K). These results indicate that iron reshapes mice (Supplemental Figure 5A). After 7 weeks of HID treatment, though the proportion and number of CD62L + CD44naive CD4 + T cells exhibited a variable response to HID (Supplemental Figure 5B), the percentage and number of EM CD4 + T cells were markedly increased in the spleen and dLNs of HID-treated mice (Supplemental Figure 5B), suggesting that HID promotes EM CD4 + T cell differentiation. Increased iron involved in ROS production can be harmful to cell viability (33). Therefore, we examined the levels of intracellular ROS in T cells. The results showed that HID increased the level of ROS in naive CD4 + T cells of dLNs (Supplemental Figure 5, C and D) but had no influence in EM CD4 + T cells (Supplemental Figure 5E). To access the requirement of iron for T cell-dependent (TD) humoral response, we immunized the 2 groups of mice with sheep red blood cells (SRBCs) by i.p. injection after 5 weeks of HID treatment and continuously fed them with HID for 2 weeks ( Figure 3A). Seven weeks of HID treatment did not affect the body weight (Figure 3B), but the level of serum iron was significantly increased (Figure 3C). We next determined the expression of iron-related genes to evaluate the conditions of intracellular iron in CD4 + T cells. The transcription of Fth, encoding the heavy chain of ferritin required for intracellular iron storage (34), can be quickly regulated by intracellular iron through the iron-responsive elements/iron-regulatory protein system (9). The expression of Fth was markedly elevated in splenic CD4 + T cells of mice with 7 weeks of HID, suggesting that the level of intracellular iron was increased in CD4 + T cells cell differentiation in vitro. We observed that intracellular iron levels were progressively elevated alongside the differentiation of Tfh cells ( Figure 4A). Furthermore, iron dextran supplementation significantly enhanced Tfh cell differentiation ( Figure 4B). In addition, we found the same changes, in terms of differentiation of Tfh cells, in Th1 and Th17 cells (Supplemental Figure 9, A and B and Supplemental Figure 10A). However, iron dextran supplementation did not affect the effector functions of T cell subsets under neutral conditions (only with anti-CD3/CD28 antibodies; Supplemental Figure 10B). The mammalian siderophore 2,5-dihydroxybenzoic acid (2,5-DHBA) is a high-affinity iron-binding molecule that traffics iron from the cytoplasm to mitochondria (39). Cells lacking 2,5-DHBA accumulate high levels of cytoplasmic iron (39). We detected intracellular iron level changes in CD4 + T cells treated with 2,5-DHBA and an intracellular iron chelator, CPX (40). The results showed an approximately 20% decrease in intracellular iron after treatment (Supplemental Figure 11, A and B). Compared with the PBS control group, 2,5-DHBA significantly inhibited the differentiation of Tfh cells by reducing intracellular iron accumulation in CD4 + T cells ( Figure 4C). CPX treatment for 4 hours also led to an approximately 15% reduction in Tfh cell percentage without affecting the cell viability ( Figure 4, D and E). 
Collectively, these results indicate that intracellular iron overload enhances Tfh cell differentiation in vitro. miR-21 favors iron accumulation in Tfh cells. We sought to investigate the mechanism causing iron accumulation in lupus Tfh cells. BDH2 serves as the enzyme responsible for 2,5-DHBA synthesis in mammals (39). Both in vivo and in vitro deletion of BDH2 cause intracellular iron accumulation (41,42). Our previous work demonstrated that miR-21 targets BDH2 to promote iron accumulation in lupus CD4 + T cells (25). Therefore, we asked whether the same mechanism also operates in Tfh cell differentiation. We transfected naive CD4 + T cells with Agomir-21 to overexpress miR-21 and then cultured them in Tfh cell-polarized conditions in vitro. After 3 days of Tfh polarization, Agomir-21 increased intracellular iron levels in induced Tfh cells (Supplemental Figure 12A). Conversely, cells transfected with Antagomir-21, a specific inhibitor of miR-21, showed a reduced level of iron in induced Tfh cells (Supplemental Figure 12B). To test the role of miR-21 target gene BDH2 in intracellular iron accumulation of Tfh cells, we transfected Tfh cells with the constructed recombinant plasmid pCMV6-BDH2 to overexpress BDH2. The result showed that pCMV6-BDH2 promoted intracellular iron accumulation in induced Tfh cells (Supplemental Figure 12C). On the contrary, inhibition of BDH2 by siRNA-BDH2 reduced the levels of intracellular iron in induced Tfh cells (Supplemental Figure 12D). These results indicate that miR-21 and BDH2 are involved in iron accumulation during Tfh cell differentiation. miR-21 promotes the differentiation and function of Tfh cells. We asked whether the expression kinetics of miR-21 overlaps with the progress of Tfh cell differentiation. Therefore, we determined the expression of miR-21 and the frequency of Tfh cells during Tfh cell differentiation progress. Results showed that the increase in Tfh cell percentage was parallel with the gradually elevated expression of miR-21 ( Figure 5, A and B). A similar trend was also observed in the ex vivo differentiation process of Tfh cells of B6 mice (Supplemental Figure 13, A-C). the cytokine milieu, by favoring proinflammatory cytokine production, in TD humoral response. We examined the changes of serum antibodies at day 0, day 7, and day 14 after immunization with SRBCs to evaluate the effect of iron on antigen-specific antibody production. HID significantly increased the production of anti-SRBC IgG2a but reduced the levels of anti-SRBC IgM in B6 mice ( Figure 3L and Supplemental Figure 7). Increased production of IgG isotypes and reduction of antigen-specific IgM are related to the maturation of GCs (35,36). Besides, the cytokine milieu also plays a role in the outcome of which IgG isotype gains predominance. IFN-ฮณ promotes a IgG2a-predominant antibody response, whereas IL-4 favors a IgG1-predominant antibody response in mice (37,38). Therefore, elevated secretion of IFN-ฮณ might promote the production of antigen-specific IgG2a in HID-treated mice. Consistently, histological analysis also showed a stronger GC response in HID-treated mice ( Figure 3M). To confirm whether the increased humoral response from SRBC-immunized mice after HID treatment is T cell dependent, we isolated CD4 + T cells from ND-and HID-fed mice and mixed them well with CD19 + B cells isolated from ND-fed mice. T/B cell suspensions were injected into the tail veins of Rag2 -/mice. After 7 days of T/B cell transfer, Rag2 -/mice were immunized with SRBCs by i.p. 
injection. Mice were sacrificed for analysis after 7 days of SRBC immunization (Supplemental Figure 8A). The results showed that Rag2 -/mice receiving HID CD4 + T cells had higher percentages of Tfh cells and GC B cells compared with the mice receiving ND CD4 + T cells (Supplemental Figure 8, B and C). Furthermore, ELISA showed that the titers of anti-SRBC IgG1, IgG2a, IgG2b, and IgM were increased significantly in the Rag2 -/mice transferred with HID CD4 + T cells compared with that in the mice transferred with ND CD4 + T cells, suggesting that the increased humoral response from SRBC-immunized mice with HID is T cell dependent (Supplemental Figure 8D). Together, these results suggest that HID promotes the expansions of Tfh and GC B cells, and the production of proinflammatory cytokines in CD4 + T cells, as well as the secretions of antigen-specific IgG isotypes in TD humoral response. Intracellular iron promotes human Tfh cell differentiation in vitro. To investigate the role of iron in Tfh cell differentiation, we examined the changes of intracellular iron in the process of Tfh test for B, D, and E). results suggest that iron supplementation rescues the defect of Tfh cell-mediated humoral response in miR-21 cKO mice. miR-21 promotes Tfh cell differentiation in patients with SLE and the lupus mouse model. We examined whether miR-21 is involved in Tfh cell differentiation in lupus. We isolated CD4 + T cells from the peripheral blood from healthy donors and patients with SLE to evaluate the correlation between miR-21 and Tfh cell-related genes. The expression of miR-21 was higher in CD4 + T cells from patients with SLE than in those from healthy controls ( Figure 7A). The mRNA levels of Tfh signature genes, including CXCR5, PDCD1, BCL6, and IL21, were also increased in CD4 + T cells from patients with SLE (Figure 7, B-E). Furthermore, the expression of miR-21 was positively correlated with the SLE Disease Activity Index (SLEDAI) score and the mRNA level of CXCR5 in CD4 + T cells from patients with SLE (Figure 7, F and G). Consistent with the mRNA expression, the frequency of CD4 + CXCR5 + PD-1 + Tfh cells was also highly elevated in peripheral blood of patients with SLE ( Figure 7H). To downregulate the expression of miR-21 in lupus CD4 + T cells and evaluate the effect on Tfh cell differentiation, we isolated CD4 + T cells from peripheral blood of patients with SLE and transfected them with Antagomir-21 and then stimulated them with anti-CD3/CD28 antibodies for two days. After two days of stimulation, Antagomir-21 reduced the expression of miR-21 ( Figure 7I), the percentage of Tfh cells ( Figure 7J), and mRNA levels of Tfh signature genes in lupus CD4 + T cells ( Figure 7K). In addition, our previous study demonstrated that miR-21 promotes iron accumulation in lupus CD4 + T cells (25). Together, these results suggest that miR-21 favors aberrant Tfh cell expansion via increasing iron accumulation in SLE. To further confirm the role of miR-21 in lupus, we treated the 12-week-old WT and miR-21 cKO mice with pristane by i.p. injection, which can induce a series of lupus-like symptoms in mice (43). After 12 weeks of pristane stimulation, the proportions of Tfh cells were decreased in the dLNs and spleens of miR-21 cKO mice, and the number of Tfh cells was decreased in the spleens of cKO mice (Supplemental Figure 17A). However, no significant changes were observed in Tfr/Tfh ratio (Supplemental Figure 17B). 
Consistent with the Tfh cells, the proportions of GC B cells were decreased in the dLNs and spleens of miR-21 cKO mice, and the cell number of GC B cells in dLNs of miR-21 cKO mice were also decreased (Supplemental Figure 17C). The serum levels of anti-dsDNA IgG and ANA total Ig were decreased in miR-21 cKO mice compared with the WT controls (Supplemental Figure 17D). Furthermore, miR-21 cKO mice exhibited alleviated urine protein in the last 4 weeks of treatment (Supplemental Figure 17E). Although we did not observe typic pathological changes of lupus nephritis due to the limitation of the observation period, morphological examination by H&E and PAS staining showed less lymphocyte infiltration and cell proliferation in the kidney of miR-21 cKO mice (Supplemental Figure 17F). Consistently, histological analysis showed that renal C3 and IgG immune complex depositions were also decreased in miR-21 cKO mice (Supplemental Figure 17G). Collectively, these data indicate that miR-21 contributes to SLE progression in human and mice. BDH2 is the target gene of miR-21 in regulating Tfh cells. We further investigated the role of BDH2 in miR-21-mediated Tfh cell differentiation. First, we examined the expression levels of BDH2 in the process of Tfh cell differentiation. We observed a reduced ditions in vitro (Supplemental Figure 14A). However, overexpressing miR-21 did not affect the effector functions of T cell subsets under neutral conditions (Supplemental Figure 14B). Next, we compared gene expression between WT Tfh cells and miR-21 cKO Tfh cells induced in vitro by RNA-Seq. Among the approximately 1400 differently expressed genes, 686 were downregulated (Supplemental Table 4) and 727 were upregulated (Supplemental Table 5) in miR-21 cKO Tfh cells relative to their WT control cells (Supplemental Figure 13H). Several Tfh signature genes, such as Tiam1, Cd200, Pdcd1, Bcl6, Cd28, Blta, Slamf6, and Pou2af1, were significantly downregulated, while Prdm1, Fasl, and Tbx21 were upregulated in miR-21 cKO Tfh cells (Supplemental Figure 13I). These results suggest that miR-21 promotes the differentiation of Tfh cells both in humans and mice in vitro. To investigate whether iron depletion prevents miR-21-mediated Tfh cell differentiation, we treated the induced Tfh cells with Agomir-21 or Agomir-21 plus 2,5-DHBA. We found that 2,5-DHBA prevented the increase of Tfh cell differentiation induced by Agomir-21 to levels equivalent to those of the Agomir-NC controls ( Figure 5I). Similarly, CPX treatment also counteracted the increase of Tfh cell differentiation induced by miR-21 overexpression (Supplemental Figure 15). On the contrary, iron dextran supplementation recovered the differentiation of Tfh cells inhibited by Antagomir-21 to levels equivalent to those of the Antagomir-NC controls ( Figure 5J). We next asked whether miR-21 affects Tfh cell-mediated humoral response in vivo. We immunized age-matched WT and miR-21 cKO mice with SRBCs to induce TD humoral response. After 7 days of immunization, mice were sacrificed for analysis. KO of miR-21 did not affect the percentage and number of total CD4 + T cells ( Figure 6A). The percentages and numbers of Tfh cells and GC B cells were significantly decreased in the spleens of miR-21 cKO mice compared with WT controls (Figure 6, B and C). Furthermore, the serum levels of anti-SRBC IgG1, IgG2a, IgG2b, and IgG3 were markedly reduced in miR-21 cKO mice ( Figure 6D). 
Histological analysis also confirmed an attenuated GC response in miR-21 cKO mice, as shown by the reduced size and quantities of PNA + GC areas in the spleens of miR-21 cKO mice ( Figure 6E). To investigate whether HID can rescue the deficient differentiation of Tfh cells in miR-21 cKO mice, we treated 3-week-old WT mice with ND and age-matched miR-21 cKO mice with ND and HID for 5 weeks and then immunized them with SRBCs by i.p. injection. After 2 weeks of SRBC immunization, mice were sacrificed for analysis. The percentages and numbers of Tfh cells and GC B cells were markedly reduced in miR-21 cKO mice compared with those in the WT group, while miR-21 cKO mice fed with HID showed elevated percentages of Tfh cells and GC B cells compared with the miR-21 cKO mice fed with ND (Supplemental Figure 16, A and B). Furthermore, the size and number of GCs were significantly reduced in miR-21 cKO mice compared with WT mice, which were rescued by HID (Supplemental Figure 16, C and D). We collected the sera of mice at day 0, day 7, and day 14 of SRBC immunization. ELISA showed that the serum levels of anti-SRBC IgG1 and IgG2b, but not IgG3, at day 14 were significantly reduced in miR-21 cKO mice compared with the WT group, but the levels of anti-SRBC IgG1 and IgG2b were significantly enhanced in miR-21 cKO mice fed with HID (Supplemental Figure 16, E-H). These were significantly increased in cells transfected with siRNA-BDH2 (Figure 8, B and C). On the contrary, in cells transfected with pCMV6-BDH2, the expression of BDH2 was highly increased compared with controls ( Figure 8D), and the frequency of Tfh cells and mRNA levels of CXCR5, PDCD1, IL21, and BCL6 were significantly decreased (Figure 8, E and F). Furthermore, rescuing the expression of BDH2 by transfecting pCMV6-BDH2 in Agomir-21-treated cells prevented Tfh cell differentiation to levels lower than in Agomir-NC controls ( Figure 8G). These results suggest that BDH2 is the target gene of miR-21 in regulating Tfh cell differentiation. Because BDH2 plays an important role in intracellular homeostasis (39, 41, 42), we asked whether changing the iron bioavail-expression of BDH2 during the process of Tfh cell differentiation, together with increased BCL6 and ferritin (Supplemental Figure 18, A and B). Furthermore, BDH2 was downregulated in induced Tfh cells transfected with Agomir-21 (Supplemental Figure 18, C and D), while it was increased in cells transfected with Antagomir-21 (Supplemental Figure 18, E and F). These results indicate that BDH2 might be involved in Tfh cell differentiation. Next, we examined whether changing BDH2 expression affects Tfh cell differentiation. We used siRNA-BDH2 to inhibit BDH2 expression in naive CD4 + T cells ( Figure 8A) and then induced them to differentiate into Tfh cells. The results showed that the percentage of Tfh cells and mRNA expression of CXCR5, PDCD1, IL21, and BCL6 nificant changes in TET2/TET3 expression (Supplemental Figure 19, C-F). We then performed methylated DNA immunoprecipitation (MeDIP) and hydroxymethylated DNA immunoprecipitation-qPCR (hMeDIP-qPCR) to determine the effect of miR-21/ BDH2 on DNA methylation/hydroxymethylation of Tfh signature genes. In cells transfected with Agomir-21 or siRNA-BDH2, we observed increased DNA hydroxymethylation and decreased DNA methylation in the BCL6 gene promoter, but no significant differences were detected in promoter regions of CXCR5, PDCD1, and IL21 compared with their corresponding controls (Figure 9, B, C, E, and F). 
On the contrary, inhibiting miR-21 by Antagomir-21 or overexpressing BDH2 by pCMV6-BDH2 reduced TET enzyme activity (Figure 9, G and J), but the expression of TET2 and TET3 was not affected compared with the control groups (Supplemental Figure 19, G-J). We observed decreased DNA hydroxymethylation and increased DNA methylation in the BCL6 gene promoter of cells inhibiting miR-21 or overexpressing BDH2 (Figure 9, H, I, K, and L). Besides, DNA methylation in the PDCD1 gene promoter was also increased in cells inhibiting miR-21 ( Figure 9I). However, there were no significant differences in DNA methylation/ hydroxymethylation of CXCR5 and IL21 gene promoters (Figure 9, H, I, K, and L). The changes in the genomic distribution of 5-methylcytosine and 5-hydroxymethylcytosine in the BCL6 promoter were also verified using MeDIP-Seq and hMeDIP-Seq in Tfh cells transfected with Agomir-21 or siRNA-BDH2 (Supplemental Figure 20). We detected another Fe 2+ -dependent epigenetic enzyme, JMJD3, in Tfh cells; it is responsible for the demethylation of H3K27me3 during T cell differentiation (45). However, JMJD3 enzyme activity was not affected by miR-21/BDH2 in Tfh cell differentiation, (Supplemental Figure 21). These results suggest that the miR-21/BDH2/intracellular iron axis promotes Tfh cell differentiation via inducing TET enzyme-mediated DNA hydroxymethylation of the BCL6 promoter. Discussion Iron overload has been reported in murine and human lupus, but the pathogenic mechanism is poorly understood (16,17,46). We showed here that intracellular iron was increased in lupus CD4 + T cells (Figure 1). Iron overload promoted pathogenic T cell differentiation, accelerating the disease progression of lupus mice. Notably, iron overload favored the expansion of spontaneous Tfh cells and GC B cells, as well as the proinflammatory cytokine-producing CD4 + T cells, aggravating autoantibody production and disease progression of MRL/lpr lupus-prone mice (Figure 2). In addition to driving pathogenic T cell differentiation in lupus, iron ability affects the role of BDH2 on Tfh cell differentiation. We used 2,5-DHBA to deplete intracellular iron in induced Tfh cells transfected with siRNA-BDH2 and found that iron depletion inhibited the differentiation of Tfh cells in cells downregulating BDH2 to levels comparable with the controls ( Figure 8H). Conversely, iron dextran supplementation recovered the frequency of Tfh cells in cells overexpressing BDH2 to levels equivalent to the pCMV6-NC controls ( Figure 8I). These results suggest that BDH2 modulates intracellular iron to affect Tfh cell differentiation. Inhibition of BDH2 promotes DNA hydroxymethylation of the BCL6 promoter by increasing intracellular iron. Fe 2+ can serve as a cofactor of several epigenetic enzymes, such as TET enzymes, to regulate immune cell biology (13,20,44). Iron-dependent TET enzymes catalyze 5-methylcytosine to 5-hydroxymethylcytosine, which leads to DNA hydroxymethylation and demethylation, activating gene transcription (13,44). We asked whether the miR-21/BDH2 axis affects TET enzyme activity and then alters DNA methylation/hydroxymethylation of genes that control Tfh cell differentiation. In naive CD4 + T cells with TET2 or TET3 deficiency (Tet2 cKO and Tet3 cKO), overexpression of miR-21 did not affect the differentiation of Tfh cells (Supplemental Figure 19, A and B), suggesting that TET enzymes are required for miR-21 to regulate Tfh cell differentiation. 
We next examined the TET enzyme activity in cells transfected with Agomir-21 or siRNA-BDH2. Indeed, we observed increased TET enzyme activity in induced Tfh cells overexpressing miR-21 or downregulating BDH2 (Figure 9, A and D), but we saw no sig- Figure 10. Schematic illustration of the contribution of iron overload to the pathogenic T cell differentiation and pathogenesis of SLE. In lupus CD4 + T cells, iron accumulation promotes Tfh cell differentiation and Tfh cell-mediated autoimmune responses, autoantibody production, as well as inflammatory cytokine secretion, driving disease progression of lupus. Mechanically, miR-21 represses BDH2 to induce iron accumulation in lupus CD4 + T cells by limiting the synthesis of siderophore 2,5-DHBA, which enhances Fe 2+ -dependent TET enzyme activity and promotes BCL6 promoter hydroxymethylation and transcription activation, leading to excessive Tfh cell differentiation in SLE. Together, iron overload is an important inducer of the autoimmune response in lupus, and maintaining iron homeostasis will provide a good way for therapy and management of SLE. J Clin Invest. 2022;132(9):e152345 https://doi.org/10.1172/JCI152345 13). In SRBC-induced TD humoral response, miR-21 promoted GC formation and antigen-specific antibody production ( Figure 6). miR-21 was also involved in aberrant Tfh cell expansion in patients with SLE ( Figure 7). Furthermore, the deletion of miR-21 in CD4 + T cells alleviated the disease progression of pristaneinduced lupus in mice (Supplemental Figure 17). On the contrary, BDH2 inhibited Tfh cell differentiation (Figure 8). The miR-21/ BDH2 axis modulated iron homeostasis in Tfh cells (Supplemental Figure 12). Modulating intracellular iron recovered effects of the miR-21/BDH2 axis on Tfh cell differentiation ( Figure 5, I and J; Supplemental Figure 15; and Figure 8, H and I). These observations indicate the potentially novel function of miR-21 in regulating Tfh cell differentiation, which further confirms that miR-21 is the key target for SLE therapy. DNA methylation is involved in regulating the plasticity of CD4 + T cell differentiation (23). Several lineage-determining transcription factors, such as FOXP3 and RORC, can be affected by DNA methylation during Th cell differentiation (55,56). TET enzymes catalyze DNA hydroxymethylation to promote gene expression, and this process requires Fe 2+ to serve as a cofactor (20). A body of evidence has demonstrated that DNA demethylation contributes to T cell overactivation in SLE (57)(58)(59). In addition, our previous studies have also shown that iron overload enhances DNA hydroxymethylation and demethylation, promoting gene transcription and CD4 + T cell overactivation in SLE (25). Here, we demonstrated that miR-21-mediated Tfh cell differentiation is dependent on TET enzymes. miR-21/BDH2/intracellular iron axis affects TET enzyme activity in Tfh cells (Figure 9). Previous studies have reported that BCL6 gene is sensitive to DNA methylation (60,61). Using MeDIP and hMeDIP techniques, we found that the promoter region of BCL6 gene was sensitive to miR-21/BDH2 axis-mediated DNA methylation/hydromethylation ( Figure 9 and Supplemental Figure 20). Our results demonstrated the iron can control cell differentiation by inducing TET enzyme-mediated DNA demethylation of key genes, and iron overload is an important manipulator for aberrant epigenetic modifications occurring in SLE CD4 + T cells. 
In summary, our study shows that iron overload is an important inducer of the autoimmune response in lupus and indicates that regulating iron homeostasis may be a good target for SLE therapy. Our study also provides the experimental basis for the importance of controlling dietary iron in clinical management for patients with SLE. We demonstrate a potentially novel mechanism, in which miR-21 overexpression enhances Fe 2+ -dependent TET enzyme activity through repression of BDH2 and promotes BCL6 promoter hydroxymethylation and transcription activation, leading to excessive Tfh cell differentiation in SLE; this mechanism provides potentially novel strategies for SLE therapy ( Figure 10). Further study of iron-based treatment for SLE in mouse models and clinic experiments will be needed. Further information can be found in Supplemental Methods. Patients. 193 patients with SLE were recruited from outpatient dermatology clinics and in-patient wards, and all of them met at least 4 of the American College of Rheumatology Revised Criteria (62). The SLEDAI was used to assess lupus disease activity. Activity categories overload also augmented the expansion of Tfh cells and GC B cells, and the secretion of proinflammatory cytokines, as well as the production of antigen-specific IgG isotypes in TD humoral response ( Figure 3). This study extends our knowledge about the role of iron in autoimmune diseases and humoral immunity and highlights that maintaining iron homeostasis might be critical for eliminating pathogenic Th cells and balancing the protective humoral response with autoimmune response. Tfh cells have emerged as a central player in SLE pathogenesis (47,48). Some evidence has suggested that the proportions of Tfh or Tfh-like cells were increased in patients with SLE (49,50). Therefore, there is a great need for understanding the mechanism causing aberrant Tfh cell differentiation in lupus. Here, we showed that iron was required for Tfh cell differentiation. HID increased the proportion of Tfh cells and enhanced GC response in mice (Figures 2 and 3). 2,5-DHBA, which is catalyzed by BDH2, serves as a siderophore to reduce cytoplasmic iron (39,42). Treatment with 2,5-DHBA or iron chelator CPX significantly inhibited the differentiation of Tfh cells by decreasing intracellular iron ( Figure 4, C-E, and Supplemental Figure 11). Furthermore, the iron levels in CD4 + T cells from patients with SLE were positively correlated with the percentage of Tfh cells ( Figure 1F). This finding indicates that, in patients with SLE, iron overload plays a positive role in Tfh cell differentiation. Further investigation of iron metabolism in Tfh cells will develop the understanding of the mechanism that causes aberrant Tfh cell differentiation and function in SLE. The trace element iron is capable of affecting effector T cell activation and function. A previous study showed that iron promotes proinflammatory cytokine GM-CSF and IL-2 expression in T cells by regulating the stability of an RNA-binding protein, PCBP1 (12). Iron uptake blockade in autoreactive T cells inhibits their capability to induce experimental autoimmune encephalomyelitis. In addition, tetrahydrobiopterin (BH4) production in activated T cells is related to changes in iron metabolism and mitochondrial bioenergetics, and blockading BH4 synthesis improved T cell-mediated autoimmunity and allergic inflammation (51). These studies indicate that iron metabolism plays an important role in T cell functions and autoimmune diseases. 
However, the role and mechanism of iron in regulating T cell activation, differentiation, and autoantibody production in autoimmune diseases, such as SLE, remain unclear. Here, our data showed that iron overload enhanced Tfh cell differentiation and GC response as well as increased Th1 and Th17 cell differentiation, which promoted antibody production and autoimmune response in SLE. These data confirm the important role of iron in the pathogenesis of SLE. Previous studies have demonstrated that miR-21 promotes T cell activation and apoptosis (52,53). Furthermore, CD4 + T cells with miR-21 overexpression acquire increased capacities to support B cell maturation and Ig production, which is similar to the function of Tfh cells, indicating that miR-21 might be related to Tfh cell differentiation (54). Our finding reveals a potentially novel role of miR-21 in regulating Tfh cell differentiation and GC response via modulating intracellular iron homeostasis. We have reported that miR-21 targets BDH2 to catalyze intracellular iron accumulation in human and murine CD4 + T cells (25). We showed here that, in both humans and mice, miR-21 was capable of promoting Tfh cell differentiation ( Figure 5 and Supplemental Figure staining, rat anti-mouse CD45R antibody (Abcam) and HRP-conjugated goat anti-rat IgG(H+L) antibody (Proteintech) were used. After incubation of primary antibody and HRP-conjugated antibody, the opal 7-Color Manual IHC Kit (Perkin Elmer) was used for fluorescence labeling. Images were captured by Perkin Elmer, and the images were analyzed by the Mantra system. Information on antibodies is provided in Supplemental Table 2. Transfection of Agomir, Antagomir, siRNA, and plasmid. For Agomir and Antagomir transfection, CD4 + T cells or naive CD4 + T cells were cultured under Opti-MEM (Gibco) with Agomir-21/Antagomir-21 (400 nM) or their corresponding controls for 6 hours. Then, RPMI 1640 complete medium (Gibco) was added at 200 nM to the final concentrations of Agomir-21/Antagomir-21 or their controls. For siRNA and plasmid transfection, naive CD4 + T cells were transfected with siRNA or plasmid using the Human T Cell Nucleofector Kit and Amaxa Nucleofector system (Lonza) according to the manufacturer's protocols. Briefly, naive CD4 + T cells were collected and resuspended in 100 ฮผL transfection reagents, and 10 ฮผL siRNA (20 ฮผM) or 5 ฮผg plasmid was added and transfected into the cells by electroporation using the nucleofector program V-024 in the Amaxa Nucleofector apparatus (Lonza). After being cultured under RPMI 1640 complete medium (Gibco) for 6 hours, the cells were transferred to fresh complete medium and incubated for 48 to 72 hours and then harvested for subsequent experiments. In vitro iron depletion and iron supplementation. For iron dextran supplementation, naive CD4 + T cells were isolated from peripheral blood from healthy donors and treated with iron dextran (20 ฮผM) (MilliporeSigma) and cultured under Tfh cell-polarized conditions for 3 days. For 2,5-DHBA treatment, naive CD4 + T cells were cultured under Tfh cell-polarized conditions for 3 days and treated with 2,5-DHBA (20 ฮผM) (Selleck) for the last two days. For CPX treatment, naive CD4 + T cells were cultured under Tfh cell-polarized conditions for 3 days and treated with CPX (20 ฮผM) (Selleck) for the last 4 hours. Flow cytometry. 
The expression of cytokines, surface markers, and transcriptional factors was determined by flow cytometry using FACS Canto II (BD Biosciences) or CYTEK NL-3000 (CYTEK Biosciences), and the data were analyzed by the Flowjo software (Tree Star). In brief, for surface markers, cells were incubated with fluorochrome-labeled antibodies against surface markers at 4ยฐC for 30 minutes protected from light. For cytokines, cells were stimulated with PMA and ionomycin with the addition of GolgiPlug (BD Biosciences) at 37ยฐC and 5% CO 2 were evaluated based on the SLEDAI score: inactive (SLEDAI โ‰ค 4), active (SLEDAI > 4). Age-, sex-, and ethnicity-matched healthy donors were enrolled by medical staff at the Second Xiangya Hospital. Patient information is listed in Supplemental Table 1. Mice and HID treatment. B6 and MRL/lpr mice were purchased from Slack Company. For the HID lupus mouse model, 3-week-old female MRL/lpr mice were fed with a HID (500 mg/kg) for 20 weeks. Age-matched MRL/lpr mice receiving a ND (50 mg/kg) for 20 weeks served as controls. The ingredients of the chow used in the HID group were the same as those in chow used in the ND group, except for iron content. Serum was collected monthly from 8 weeks of age onward to detect the anti-dsDNA IgGs in MRL/lpr mice. Urine protein was assessed using a colorimetric assay strip (URIT). For miR-21 cKO mice generation, the loxP-miR-21-loxP mice were constructed by Shanghai Biomodel Organism Science & Technology Development Co. Ltd. Cd4cre mice (stock no. 022071) were purchased from Shanghai Biomodel Organism Science & Technology Development Co. Ltd. The miR-21floxed allele (WT) mice were bred with Cd4-cre transgenic mice to generate miR-21 fl/fl Cd4-cre (miR-21 cKO) mice. Tet2 fl/fl and Tet3 fl/fl mice were provided by Akihiko Yoshimura at the Department of Microbiology and Immunology, Keio University School of Medicine, Tokyo, Japan (63). We generated Tet2 fl/fl ;Cd4-cre (Tet2 cKO) and Tet3 fl/fl ;Cd4-cre (Tet3 cKO) mice by crossing Tet2 fl/fl and Tet3 fl/fl mice with Cd4-cre mice. All mice were raised in specific pathogen-free conditions. SRBC immunization and ELISA. For HID feeding and SRBC immunization, 3-week-old B6 mice were pretreated with HID for 5 weeks and then were i.p. immunized with 5% SRBCs in Alsever's solution. Histology. Mouse kidney and spleen tissues were fixed in formalin and embedded in paraffin. H&E staining was used to assess the histological features of the kidney. The renal pathology was scored according to the criteria of previous studies (64). To assess the immune complex deposition in the kidney, we stained paraffin-embedded renal sections with rabbit anti-C3 antibody (Abcam) and HRP-conjugated anti-rabbit antibody (Abcam) for mouse C3 and HRP-conjugated antimouse IgG antibody (Abcam) for mouse IgG. PNA (GC zone), CD3 (T cell zone), and B220 (B cell zone) were used to determine the GC area. For PNA staining, spleen tissues were stained with biotinylated anti-peanut agglutinin (Vector Laboratories ), and then incubated with biotinylated anti-peanut agglutinin antibody (Vector Laboratories). For CD3 staining, rabbit anti-CD3 antibody (Abcam) and HRP-conjugated anti-rabbit antibody (Abcam) were used. For B220 (CD45R) or displayed unequal variances between 2 groups, the 2-tailed Mann-Whitney U test was used for statistical analysis. One-way ANOVA with relevant post hoc tests was used for multiple comparisons. 
When the sample data were not normally distributed or displayed unequal variances between multiple groups, the Kruskal-Wallis test and Dunn's multiple-comparisons test were used for multiple comparisons. Pearson's correlation was used for the correlation analysis. When the sample data were not normally distributed, Spearman's correlation was used for the correlation analysis. P < 0.05 was considered significant. Study approval. All human studies were approved by the Ethics Committee of the Second Xiangya Hospital of Central South University. All participants provided written informed consent. All animal procedures were approved by the Animal Care and Use Committee of the Laboratory Animal Research Center at the Second Xiangya Medical School, Central South University.
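To make the test-selection logic described in the statistics paragraph above concrete, here is a minimal sketch using scipy.stats. The group sizes, values, and significance threshold are illustrative placeholders rather than data from the study, and Dunn's post hoc test is only mentioned in a comment because it is provided by scikit-posthocs rather than SciPy.

```python
# Minimal sketch of the test-selection logic described above, using scipy.stats.
# All data below are synthetic placeholders, not values from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
nd = rng.normal(10.0, 2.0, size=12)     # e.g., a read-out in ND-fed mice
hid = rng.normal(13.0, 2.0, size=12)    # e.g., the same read-out in HID-fed mice


def compare_two_groups(a, b, alpha=0.05):
    """Parametric t test if both groups look normal with similar variance,
    otherwise the two-tailed Mann-Whitney U test."""
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    equal_var = stats.levene(a, b).pvalue > alpha
    if normal and equal_var:
        return "t test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b, alternative="two-sided").pvalue


print(compare_two_groups(nd, hid))

# Multiple groups: one-way ANOVA if the assumptions hold, otherwise
# Kruskal-Wallis followed by Dunn's multiple-comparisons test
# (Dunn's test lives in scikit-posthocs, not in SciPy itself).
g1, g2, g3 = rng.normal(0, 1, 10), rng.normal(0.5, 1, 10), rng.normal(1.5, 1, 10)
print("ANOVA p =", stats.f_oneway(g1, g2, g3).pvalue)
print("Kruskal-Wallis p =", stats.kruskal(g1, g2, g3).pvalue)

# Correlation: Pearson for normally distributed data, Spearman otherwise.
print("Pearson:", stats.pearsonr(nd, hid))
print("Spearman:", stats.spearmanr(nd, hid))
```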
Measuring Shapes of Galaxy Images II: Morphology of 2MASS Galaxies We study a sample of 112 galaxies of various Hubble types imaged in the Two Micron All Sky Survey (2MASS) in the Near-Infra Red (NIR; 1-2 $\mu$m) $J$, $H$, and $K_s$ bands. The sample contains (optically classified) 32 elliptical, 16 lenticulars, and 64 spirals acquired from the 2MASS Extended Source Catalogue. We use a set of non-parametric shape measures constructed from the Minkowski Functionals (MFs) for galaxy shape analysis. We use ellipticity ($\epsilon$) and orientation angle ($\Phi$) as shape diagnostics. With these parameters as functions of area within the isophotal contour, we note that the NIR elliptical galaxies with $\epsilon>0.2$ show a trend of being centrally spherical and increasingly flattened towards the edge, a trend similar to images in optical wavelengths. The highly flattened elliptical galaxies show strong change in ellipticity between the center and the edge. The lenticular galaxies show morphological properties resembling either ellipticals or disk galaxies. Our analysis shows that almost half of the spiral galaxies appear to have bar like features while the rest are likely to be non-barred. Our results also indicate that almost one-third of spiral galaxies have optically hidden bars. The isophotal twist noted in the orientations of elliptical galaxies decreases with the flattening of these galaxies indicating that twist and flattening are also anti-correlated in the NIR, as found in optical wavelengths. The orientations of NIR lenticular and spiral galaxies show a wide range of twists. INTRODUCTION Galaxy morphology in different wave-bands provides useful information on the nature of galaxy evolution as well as the overall distribution of galaxy constituents such as old red giants, young luminous stars, gas, dust etc. For example, the younger Population I stars associated with massive gas-rich star formation regions light up the disk galaxies in optical wavelengths. The distribution of older Population II stars, the dominant matter component near the central regions of galaxies, remains hidden. The presence of interstellar dust hides the old stellar population especially in late-type disk galaxies. The NIR light, on the other hand, is much less affected by the interstellar dust and more sensitive to the older populations. Thus it provides a penetrating view of the core regions in disk galaxies. Therefore careful analysis of morphological differences between the optical and infrared images would not only provide valuable insight into the role of population classes in morphology but also reveal whether the discrepancies are due to singular or combined effects of extinction and population differences (Jarrett et al. 2003). In the morphological studies of cosmological objects the most widely used technique is the ellipse-fitting method, (Carter 1978;Williams & Schwarzschild 1979;Leach 1981;Lauer 1985;Jedrzejewski 1987;Fasano & Bonoli 1989;Franx, Illingworth & Heckman 1989;Peletier et al. 1990). In this study we use a set of measures known as the Minkowski functionals (hereafter MFs, Minkowski 1903) to analyze the morphology of NIR galaxies. Contrary to the conventional method, the MFs provide a non-parametric description of the images implying that no prior assumptions are made about the shapes of the images. The analyses based on the MFs appear to be robust and numerically efficient when applied to various cosmological studies, e. g., galaxies, galaxyclusters, CMB maps etc. 
(Mecke, Buchert & Wagner 1994;Schmalzing & Buchert 1997;Kerscher et al. 1997;Schmalzing & Gorsky 1998;Hobson, Jones & Lasenby 1999;Novikov, Feldman & Shandarin 1999;Schmalzing et al. 1999;Beisbart 2000;Kerscher et al. 2000a;Novikov, Schmalzing, & Mukhanov 2000;Beisbart, Valdarnini & Buchert 2001b;Kerscher et al. 2001b;Shandarin 2002;Shandarin et al. 2002;Sheth et al. 2003;Rahman & Shandarin 2003, hereafter paper 1) This is the second in a series of papers aimed to study the morphology of galaxy images using a set of measures derived from the MFs. In this paper we analyze a larger sample of 2MASS galaxies imaged at J, H, and Ks band in NIR (Jarrett 2000;Jarrett et al. 2000;Jarrett et al. 2003). We have described and tested the set of Minkowski parameters derived from the two-dimensional scalar, vector and several tensor MFs to quantify galaxy shapes for a small sample of 2MASS images in paper I. The analyses in paper I used contour smoothing to reduce the effect of background noise. We have used the same technique in the present sample which contains NIR galaxies over the entire range of Hubble types including ellipticals, lenticulars and spirals. The present investigation is aimed at obtaining structural information on 2MASS galaxies by measuring their shapes quantified by ellipticity and orientation. As dusty regions of galaxies become transparent in the NIR, the imaging in this part of the spectrum should provide a clear view of the central core/bulge regions of these objects. A systematic study of NIR images should provide valuable information regarding the central structures of galaxies (e. g., optically hidden bar) which would otherwise remain absent when viewed in optical wavelengths. If only the old red giants illuminate galaxies at NIR wavelengths and are decoupled from Population I star lights, then the NIR galaxies should show weak isophotal twist in their orientations compared to those in the visual wavelengths. Therefore it would be interesting to check whether or not isophotal twist is a wavelength dependent effect. The organization of the paper is as follows: the 2MASS sample and selection criteria are described in ยง2, a brief discussion of the parameters is given in ยง3. We discuss the robustness of the measures to identify and discern galaxy isophotes of various shapes and present our results ยง4. We summarize our conclusions in ยง5. In the appendix ( ยง6) we demonstrate the sensitivity of several Minkowski measures to image contamination by foreground stars. 2MASS DATA The 2MASS catalogue contains near-infrared images of nearby galaxies within redshift range from cz โˆผ 10, 000 km s โˆ’1 to 30, 000 km s โˆ’1 . The survey utilizes the NIR band windows of J(1.11 โˆ’ 1.36 ยตm), H(1.50 โˆ’ 1.80 ยตm) and Ks(2.00 โˆ’ 2.32 ยตm). The 2MASS images have 1 โ€ฒโ€ฒ pixel resolution and 2 โ€ฒโ€ฒ beam resolution. The seeing FWHM values for these images are typically between 2.5 โ€ฒโ€ฒ and 3 โ€ฒโ€ฒ in all three bands. For details of the 2MASS observations, data reduction and analysis, readers are referred to Jarrett (2000) and Jarrett et al. (2000Jarrett et al. ( , 2003. Our sample contains 112 galaxies imaged in NIR J, H, and Ks bands. It includes 32 elliptical, 16 lenticulars, and 64 spirals acquired from the 2MASS Extended Source Catalogue (XSC; Jarrett et al. 2003). The spiral sample contains 19 normal (SA), 21 transitional (SAB), and 24 barred (SB) galaxies. The galaxy types are taken from the RC3 catalogue (de Vaucouleurs et al. 1992). 
The sensitivity and resolution (โˆผ 2 โ€ฒโ€ฒ ) of the NIR data obtained in the 2MASS is not adequate to derive independent galaxy sub-classification (Jarrett 2000), therefore, we rely on the morphological clas-sification based upon optical data derived in combination with both imaging and spectroscopy. We construct the sample by hand with a moderately large number of galaxies of each type to make statistical inferences. The primary motivation behind constructing the sample is to make a comparative analysis with previous results and to investigate the galaxies with new tools to gain further insight into overall galaxy morphology in infrared wave-bands. We consider only bright galaxies in three different bands; the Ks band total magnitude for the galaxies is 7 โ‰ค Ks โ‰ค 12. Spiral galaxies with inclination up to i โˆผ 60 o are included in the sample. No deprojection has been made to any of these galaxies prior to the analysis since the projection effect does not pose a serious threat to the reliability of the analysis when using a parameter such as ellipticity in the structural analysis of low inclination spiral galaxies (Martin 1995;Abraham et al. 1999). All galaxies in our sample are flat fielded and background subtracted. Except for three ellipticals, foreground stars have been removed from the rest of the sample. Those galaxies where a foreground star is left embedded in images are included purposely to illustrate the sensitivity of the morphological measures, as explained in the appendix. MINKOWSKI FUNCTIONALS AS SHAPE DESCRIPTORS For an object with arbitrary shape a complete morphological description requires both topological and geometrical characteristics. The MFs consist of a set of measures carrying both geometric (e. g., area, perimeter) and topological (the Euler Characteristic, EC) information about an object. The functionals obey a set of properties such as motion invariance, additivity and continuity (see Schmalzing 1999;Beisbart 2000). For this study we derive morphological parameters using a selection of two-dimensional scalar, vector, and tensor MFs as described in paper I. We treat every image as a set of contour lines corresponding to a set of surface brightness levels. A contour is constructed by linear interpolation at a given level. For every contour, the first step of the functional analysis provides three scalars: AS, PS, and EC (also represented by the symbol, ฯ‡); three vectors or centroids: Ai, Pi, and ฯ‡i; and a total of nine components of three symmetric tensors Aij, Pij , and ฯ‡ij. Here i, j = 1, 2 (for details see paper 1). In the next step, the eigenvalues (ฮป1 and ฮป2; ฮป1 > ฮป2) of the tensors are found taking centroids as the origins of corresponding tensors. After calculating the eigenvalues, we proceed to construct the axes and orientations of the "auxiliary ellipse" (hereafter AE). To construct the area tensor AE, for example, we take the eigenvalues of Aij and ask what possible ellipse may have exactly the same tensor. When we find that particular ellipse, we label it as the area tensor AE. The orientation of the semi-major axis of the AE with respect to the positive x-axis is taken as its orientation. The AEs corresponding to the perimeter and EC tensors are constructed in a similar manner. To discern morphologically different objects, therefore, we use ellipticities (วซi) and orientations (ฮฆi) of the AEs rather than the eigenvalues of the tensors. 
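Before the ellipticity of the AEs is defined below, here is a minimal, deliberately simplified sketch of the three scalar functionals introduced earlier in this section (area AS, perimeter PS, and Euler characteristic EC) for the region above a surface-brightness threshold. It works on a raw pixel mask of a synthetic image, whereas the paper uses interpolated and smoothed isophotal contours, so it is only meant to illustrate what the scalar measures count.

```python
# Minimal sketch: scalar Minkowski functionals (area AS, perimeter PS,
# Euler characteristic EC) of the region above a brightness threshold.
# Pixel-based approximation only; the paper works with smoothed contours.
import numpy as np
from scipy import ndimage


def scalar_mfs(image, level):
    mask = image >= level

    # Area: number of pixels above the threshold.
    area = int(mask.sum())

    # Perimeter: count of edges between "in" and "out" pixels
    # (a crude, 4-connected estimate of the boundary length in pixel units).
    perim = int(np.sum(mask[:, 1:] != mask[:, :-1]) +
                np.sum(mask[1:, :] != mask[:-1, :]))

    # Euler characteristic: connected components minus holes.
    # Foreground uses 8-connectivity, background 4-connectivity.
    n_comp = ndimage.label(mask, structure=np.ones((3, 3)))[1]
    bg_labels, n_bg = ndimage.label(~mask)
    border = np.zeros_like(mask)
    border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True
    outside = np.setdiff1d(np.unique(bg_labels[border & ~mask]), [0])
    euler = n_comp - (n_bg - outside.size)

    return area, perim, euler


# Synthetic test image: a smooth elongated blob with a dimmer companion.
y, x = np.mgrid[:128, :128]
img = (np.exp(-(((x - 64) / 20.0) ** 2 + ((y - 64) / 12.0) ** 2))
       + 0.3 * np.exp(-(((x - 100) / 6.0) ** 2 + ((y - 30) / 6.0) ** 2)))

for level in (0.5, 0.2, 0.05):
    print(level, scalar_mfs(img, level))
```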
We define the ellipticity of the AEs as $\epsilon_i = 1 - b/a$, where i corresponds to one of the three tensors, and a and b are the semi-major and semi-minor axes of the AEs. The use of AEs effectively relates a contour to an ellipse: the similarity of the three AEs is strong evidence that the shape of the contour is elliptical. For example, in the case of a perfectly elliptic contour, the three AEs will be identical. In particular, the areas of all three AEs will be equal to the area of the contour, i.e., AA = AP = Aχ = AS, and the perimeters of the ellipses will be equal to the perimeter of the contour, i.e., PA = PP = Pχ = PS. In addition, the orientations of all three ellipses will coincide with the orientation of the contour. Therefore, if plotted, all three AEs will lie on top of each other, overlapping with the contour. For such a contour, all three vector centroids will also coincide with each other and with the center of the contour. Note that the latter alone does not guarantee that the contour itself is elliptical, since for any centrally symmetric contour the centroids would coincide. However, for a non-elliptical contour all three AEs will differ in size and orientation (see Fig. 1, paper I). Note that the sets of eigenvalues from the three tensors can be used to construct three "anisotropy" parameters, Ai, instead of the three AEs (see also Beisbart 2000), defined from the eigenvalues λ1 and λ2 of the corresponding tensor, where i has the same meaning as before. To better understand the behaviour of εi and Ai, we show these parameters in Fig. 1 as functions of contour area (AS) for four elliptic profiles of different flattening. Panel 1 of the figure shows the ellipticity for all four profiles. Panels 2, 3, and 4 show, respectively, AA, AP, and Aχ, where each of these panels also has four profiles. For each profile, we find that the εi of the AEs are identical and coincide with each other (panel 1). The Ai, on the other hand, do not coincide even for perfectly elliptic contours (panels 2, 3, and 4). We also find that the relative separations between the different Ai change with the flattening of the contours. From the behaviour of the parameters, the εi of the AEs can be regarded as scaled versions of the Ai. For contours of arbitrary flattening, the Ai from the area and EC tensors need to be scaled down to match the εi of the respective AEs. The A from the perimeter tensor, however, needs to be scaled up for spherical and moderately elongated contours, while for highly elongated contours all three As need to be scaled down. The illustration of the "anisotropy" parameters serves two purposes. First, it gives a feeling for the AEs in comparison with a conventional parameter that works directly with the eigenvalues. Second, it demonstrates that various shape measures can be derived from the set of MFs. Apart from this parameter, one can also derive the shapefinder statistic suggested by Sahni, Sathyaprakash & Shandarin (1998). However, we restrict ourselves to ellipticity and orientation, since these are the two most widely used measures in astronomy. We will explore the sensitivity and robustness of Ai in our future work on optical galaxies. The non-parametric approach to shape analysis, such as the moments technique, has been known to the astronomical community for some time (Carter 1978; Carter & Metcalf 1980). It should be mentioned here that morphological analyses based only on the moments of inertia provide incomplete and sometimes misleading results.
As an example, let us assume that one has a galaxy image which has been kept in a black box and analyzed simply using the inertia tensor, without any a priori knowledge of the shape of the image. The analysis based only on the moments of inertia will provide a resultant ellipticity of the object regardless of its actual shape. Using this result one can always infer an elliptical shape for the unseen object. If one raises the question of the likeliness of the elliptical shape of the object, the analysis based on the inertia tensor alone will not be able to give a satisfactory answer. One needs to invoke additional measure(s) in order to justify the result. It is at this point that the measures derived from the set of MFs appear to be effective. Subsequent analyses of the image using the moments of the perimeter and EC tensors enable one to pin down the type of the galaxy and thus ensure the objectivity of the analysis. The ellipticities obtained from the different AEs provide information (regarding shapes) similar to the conventional shape measure based on the inertia tensor. The main difference is that the conventional method finds the eigenvalues of the inertia tensor for an annular region enclosing mass density or surface brightness. The method based on MFs, however, finds the eigenvalues of contour(s), where the region enclosed by the contour(s) is assumed to be homogeneous and to have constant surface density. In order to reduce the effect of noise present in the image we use a simple smoothing technique. Instead of smoothing the whole image, we smooth the contours at each brightness level using the procedure known as the unequally weighted moving average method. The goal of this smoothing is to restore the initial unperturbed contour as much as possible and measure its morphological properties. The implementation of the smoothing is described in detail in paper I. Contour smoothing considerably improves the estimates of ellipticity derived from the tensor functionals (see Figs. 8 and 9, paper I); therefore, in this study, we focus on the results obtained only from the smoothed contours. However, we note two effects that arise as a result of smoothing. First, in the outer regions of galaxies a smoothed outer contour occasionally crosses the inner one; however, the area within the smoothed outer contour remains greater than that of the inner one and the measurement is not affected. Second, in the case of very large contours, we lose information. The smoothing technique is an iterative process depending on the number of points along a contour (see paper I for details). A highly irregular contour that consists of a large number of points eventually shrinks to a point due to excessive smoothing. As a result we do not get any contribution from it. However, this does not threaten our ability to obtain relevant information about the shape of image contours, since we can always compare with the unsmoothed profiles to see how much information is lost.

RESULTS

In this paper we use a different notation to express ellipticity than in paper I, where we used E = 10 * (1−b/a) in the range 0 to 10. The symbol E for the ellipticity parameter is often used for the definition of the Hubble types for ellipticals, and this symbol and range can be misleading when used as a parameter for other types of galaxies (e.g., spirals). We, therefore, use ǫi as the characteristic of shape, with the range 0 to 1. Galaxies with different morphologies appear to show characteristic signatures in the ellipticity profile.
For example, the profile of elliptical galaxies is generally monotonic. A barred galaxy, however, shows a distinct peak with a spherical central region, signaling the presence of a central bulge and a bar. On the other hand, for a multi-barred system several peaks appear in the profile (Wozniak et al. 1995; Erwin & Sparke 1999). Elliptical galaxies usually show twists (i.e., a change in the orientation of the AE) in the orientation profiles, with varied strength depending on the flattening of the contours. Barred galaxies, on the other hand, not only have large twists in their orientations but also have characteristic features such as a sharp peak, or two different but approximately constant orientations with an abrupt change in between (Wozniak et al. 1995; Erwin & Sparke 1999). In contemporary studies, therefore, a barred galaxy is identified as a system whose ellipticity and orientation profiles simultaneously show the distinctive signatures mentioned above. In this study we follow the same criterion to analyze disk galaxies, i.e., we identify bars by visual inspection with the condition that both the ellipticity and orientation profiles show the characteristic features simultaneously. However, identifying disk galaxies as barred systems with the above criterion (as used in previous studies, mostly at optical wavelengths) should be done with caution. Our experience shows that optically classified barred galaxies appear with distinct peaks in their ellipticity profiles but show continuous orientation over the region where the peak in the shape profiles persists. Therefore we feel that a more elaborate treatment is needed for identifying barred systems rather than simply relying on the behaviors of ellipticity and orientation. In this study, therefore, when we encounter these types of systems, we refrain from drawing any conclusions about these galaxies. We will explore the detailed shape properties of these galaxies in our future study, including other structural measures such as the Fourier decomposition technique (Quillen, Frogel & Gonzalez 1994; Buta & Block 2001). We measure ellipticity and orientation for a set of contours obtained at different surface brightness levels. Generally these two parameters vary with the area of the contour, i.e., ǫi = ǫi(AS) and Φi = Φi(AS). We analyze each galaxy at 30 different brightness levels, where the levels correspond to equally spaced areas on a logarithmic scale covering almost the entire region of each galaxy. At every brightness level, contours are found and subsequently smoothed. All three AEs are constructed from the smoothed contours. In order to reduce the information content we present our final results using only the area tensor AE and hence drop the subscript i. Below we briefly discuss the justification of this choice. For each galaxy, therefore, we show ǫ and Φ of this particular AE as a function of contour area (AS). For each galaxy, a thick dashed line is used to show the mean values of ǫ and Φ calculated from the J, H, and Ks bands. The mean values are used to estimate the overall change (∆ǫ and ∆Φ), which is defined as the difference between the highest and lowest value of the corresponding mean profile. It is a single number and independent of AS. Two thin solid lines are used to show the maximum and minimum of the measures obtained from the different bands. The difference between these two thin solid lines is used to quantify the scatter (δǫ and δΦ) in the parameters. The δ varies in different regions of a galaxy and so depends on AS.
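The bookkeeping just described (the band-averaged profile, the overall change ∆, the band-to-band scatter δ as a function of AS, and the comparison with a single reference value such as the 2MASS estimate introduced below) is summarised in the following minimal sketch; the function and argument names are illustrative and do not correspond to the actual pipeline of paper I.

```python
import numpy as np

def profile_statistics(logA, eps_J, eps_H, eps_K,
                       logA_ref=None, eps_ref=None, logA_min=1.5):
    """eps_J, eps_H, eps_K: ellipticity profiles of one galaxy sampled on the
    same grid of log10(A_S) values (one value per brightness level).
    Returns the band-mean profile (the "thick dashed line"), the overall
    change Delta (max minus min of the mean for log10 A_S >= logA_min), the
    scatter delta(A_S) (max minus min across the three bands) and, if given,
    the relative difference from a reference value at the grid point closest
    to logA_ref (e.g. the area of the 3*sigma_n isophote)."""
    bands = np.vstack([eps_J, eps_H, eps_K])
    mean = bands.mean(axis=0)
    scatter = bands.max(axis=0) - bands.min(axis=0)
    use = logA >= logA_min                      # exclude grid-discreteness region
    Delta = mean[use].max() - mean[use].min()
    rel_diff = None
    if logA_ref is not None and eps_ref is not None:
        k = int(np.argmin(np.abs(logA - logA_ref)))
        rel_diff = abs(mean[k] - eps_ref) / eps_ref
    return mean, Delta, scatter, rel_diff
```

The same bookkeeping applies to the orientation Φ, with the extra care that position angles wrap at 180°.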
We use it as an indicator to measure the dependence of galaxy shapes on different colors. For each sample of galaxies, the results are presented in increasing order of the 2MASS ellipticity obtained from the 2MASS catalogue. The 2MASS ellipticity and orientation are measured for the 3 × σn isophote in the Ks band (σn is the rms amplitude of the background noise provided by the catalogue). For each galaxy, the 2MASS shape parameter is, therefore, a single number. Our analysis, on the other hand, provides a range of values obtained in different regions (recall that ǫ or Φ is a function of area) from the galactic center. We use the 2MASS estimate of ellipticity and position angle as the reference. It is shown by the horizontal dashed line. In all plots the vertical dashed line represents the contour area (AS) corresponding to the Ks band 3 × σn isophote. We rescale the orientation profiles of a few galaxies to fit the desired ranges that have been chosen to show the Φ profiles. Note that the results presented below correspond to the area log10 AS ≥ 1.5. The range is chosen in order to exclude discreteness effects due to the grid. The deviation (∆) as well as the scatter (δ) in the parameters, therefore, will be considered in this range.

Shape of Isophotal Contour and The Role of Tensor Ellipses

From §3 we know that for a perfect elliptic contour the areas enclosed by the AEs will be the same, whereas for non-elliptic contours they will be different. Therefore, by comparing the areas enclosed by the different ellipses one can probe to what extent the shape of a contour can be approximated by an ellipse. Galaxies appear more regular in NIR wave bands than at visual wavelengths. For example, a galaxy may appear grand in the visual bands with its giant spiral arms. However, it will lose much of its grandeur in infrared wavebands. Since spiral arms consist mostly of gas and young bright stars, they will be absent in the long wavelength parts of the spectrum. To see to what extent galaxy contours of different Hubble types retain their characteristic signatures in the NIR, and how tensor ellipses help us to understand their shapes, we draw the readers' attention to Figs. 2 and 3. In these figures we show the relative difference in the areas enclosed by the three different AEs as a function of contour area (AS) for a selection of elliptical and spiral galaxies. The figures highlight only the interesting parts along the vertical axis. Each panel contains a total of nine curves: three curves corresponding to the three AEs from each band. The dark, medium and light colors represent, respectively, the J, H, and Ks bands. We show the area, perimeter, and EC ellipses, respectively, by the solid, dotted, and dashed-dotted lines. Apart from three galaxies that are marked by "S", the isophotal contours of most of the galaxies in Fig. 2 show an elliptic nature in the NIR. A few of these, e.g., NGC 3158 and NGC 2778 (galaxy numbers 8 and 15, respectively), appear slightly non-elliptic in one or two bands. The galaxies marked by "S" (numbers 1, 4, and 7) are galaxies which have foreground stars embedded in their images. For these galaxies the notable feature is the sharp increase in area around the region of the images where the star is embedded. One can see that the same ellipse overlaps across the different bands. This is most notable for the EC ellipse (dashed-dotted line), since the EC tensor is the most sensitive to any disturbance along the contour. The other two ellipses also behave in a similar manner but with less sensitivity.
This particular behavior shows that the AEs detect unusual features attached to an otherwise perfectly elliptical image body. For example, when we compare NGC 3158 or NGC 2778 with those three galaxies, we notice that Aχ not only differs between bands but also spreads out differently around the edge. It tells us that the contours of these galaxies are simply non-elliptic around the edge, without any abnormal feature. When we compare galaxies of different Hubble types, we find that almost all spiral galaxies (Fig. 3) show a more non-elliptic nature than the elliptical galaxies. The non-elliptic nature of spiral galaxy isophotes is reflected strongly in the behavior of the EC ellipse in all bands. For these galaxies, the EC ellipse is not only different from the other two ellipses but is also spread out arbitrarily in different bands. From Figs. 2 and 3, it is clear that all three AEs are useful for a better characterization of a galaxy image. The information provided by the AEs definitely helps to obtain a finer distinction between galaxies. Since our current interest is to focus on the gross morphological features rather than the finer details of galaxy isophotes, we will only use the area tensor AE to present our final results, in order to reduce the information content.

Ellipticals

The sample of elliptical galaxies has been divided into two groups. Group 1 contains galaxies with small ellipticities (ǫ ≤ 0.2) while Group 2 has galaxies with ǫ > 0.2. To easily locate a galaxy within a group, we label the galaxies with an integer number. For example, NGC 4374(2) means that NGC 4374 is the second galaxy in its relevant group. In each group galaxy number 1 has the lowest 2MASS ellipticity whereas number 16 has the highest value. We follow the same format for all types of galaxies. Note that below we highlight only those galaxies that have interesting/unusual morphological features. The general trends of galaxies will be investigated in §5.

Group 1

Ellipticity: The ellipticity profiles of galaxies in this group are shown in Fig. 4. NGC 4374(2) and NGC 4261(9) have uncommon profiles compared to the other galaxies. These galaxies are more elongated in the region 1.8 < log10 AS < 3.0 than either around the center or near the edge, which makes their profiles look "centrally arched" (marked by "A"). This particular type of behavior is shown only by sphericals and is absent in elongated galaxies. The following three galaxies are shown with an "S" mark in Fig. 4: NGC 4278(1), NGC 3193(4), and NGC 3379(7). These galaxies have sharp kinks in their ǫ profiles. These sharp changes are caused by a foreground star and do not correspond to any structural feature (see appendix). The rest of the galaxies show profiles typical of ellipticals. Galaxies in this group, in general, have small scatter.

Orientation: The orientation profiles of the galaxies are presented in Fig. 5. NGC 4458(3) is stable from the center up to log10 AS ≈ 2.8 and shows a large deviation near the edge. NGC 3193(4) shows a notable peak in its orientation. Its profile shows a dip around the region where the foreground star is embedded. NGC 3379(7) does not show any unusual feature in its profile. Its position angle, as well as the scatter, gradually increases towards the galactic center. NGC 3379(7) has a larger scatter at all distances from the galactic center than NGC 4278(1) and NGC 3193(4). Note that the presence of a star on the galaxy contours may or may not be detected by the Φ profile.
The ǫ profile, on the other hand, is sensitive to any kind of disturbance on the contour. If the unusual feature on the contour introduced by the star happens to be along the major axis, which is the case for NGC 3379(7), it remains unnoticed and no peak appears in the Φ profile. This is the reason why the presence of a star on the contours may be missed as a signature in the Φ profile. On the other hand, if the feature is slightly off the major axis, it changes the orientation of the contour significantly with respect to the inner and outer contours. This is the case for NGC 4278(1) and NGC 3193(4). In spite of being a spherical galaxy (E1+), NGC 4494(6) has a remarkably low twist (∆Φ ∼ 3.5°) and scatter in its orientation. This is very unusual because a spherical contour is highly susceptible to background noise. A slight perturbation caused by the noise distorts the contour's shape and the direction of the perturbation becomes its orientation. VCC 1737(10), NGC 3159(12), and NGC 3226(13) have orientation profiles quite similar to disk galaxies, as shown later in this section. The Φ profiles of VCC 1737(10) and NGC 3226(13) appear "arch like", with the latter galaxy having a longer arch than the former. These three galaxies also have large scatter in their orientations.

Group 2

Ellipticity: The ellipticity profiles of galaxies in this group are shown in Fig. 6. Galaxies with ǫ > 0.2 show a "centrally spherical" trend in their profiles. The variation in the profile becomes stronger with the increasing flattening of the galaxies. The overall change in ǫ observed from the center to the edge varies from galaxy to galaxy. It changes from as low as ∼0.12 (NGC 315(2)) to as high as ∼0.33 (NGC 5791(15)). The measurements for most of the galaxies in all three bands nicely coincide. As a result the galaxies in this group show the lowest amount of scatter.

Orientation: The orientation profiles of galaxies in this group are shown in Fig. 7. The flattened ellipticals have quite stable orientations, with relatively low twist and scatter compared to those in group 1. For all galaxies the ∆Φ is within 10°. The relatively small twist observed in the elliptical galaxies with ǫ > 0.2 suggests that flattening and isophotal twists are most likely anti-correlated in the NIR wave-bands (see §6 and Fig. 18 for more). This result is in agreement with the (optical) correlation found by Galletta (1980), who noted that the maximum apparent flattening and the highest observed twist are inversely related.

Lenticulars

The sample of lenticulars contains 16 galaxies. The ellipticity and orientation of these galaxies are shown in Figs. 8 and 9, respectively.

Orientation: The orientations of lenticular galaxies generally have larger scatter than those of the elliptical galaxies. NGC 4659(4) shows the largest twist in this sample (∆Φ ∼ 68°); its orientation experiences a large change within 1.5 ≤ log10 AS ≤ 2.3. Lenticular galaxies with (2MASS) ellipticity ǫ > 0.30 (from galaxy number 8 and above in Fig. 9) are observed to have less scatter than their spherical counterparts. This property is quite similar to that of flattened ellipticals. The scatter is large for lenticulars that are spherical in shape and it varies at different regions from the galaxy centers.

• Both the shape and orientation profiles of NGC 4620(9) and NGC 2544(14) exhibit a bar like feature. These galaxies are labeled "IB", where "IB" stands for "infrared bar".
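In this work bars are identified by visual inspection of the ǫ and Φ profiles, as described above; purely for illustration, a crude automated stand-in for that visual test might look like the sketch below, where every threshold is an arbitrary assumption rather than a calibrated criterion.

```python
import numpy as np

def looks_barred(logA, eps, phi, eps_contrast=0.1, phi_jump=20.0, window=0.3):
    """Crude stand-in for the visual bar test: find the strongest interior
    peak of the ellipticity profile and require that the position angle
    changes by more than phi_jump degrees within +/- window dex of log10(A_S)
    around it.  All thresholds are illustrative, not calibrated."""
    logA, eps, phi = (np.asarray(a, dtype=float) for a in (logA, eps, phi))
    peaks = np.where((eps[1:-1] > eps[:-2]) & (eps[1:-1] > eps[2:]))[0] + 1
    if peaks.size == 0:
        return False
    k = peaks[np.argmax(eps[peaks])]
    prominence = eps[k] - min(eps[:k].min(), eps[k + 1:].min())
    if prominence < eps_contrast:
        return False
    near = np.abs(logA - logA[k]) <= window
    dphi = np.ptp(phi[near])
    dphi = min(dphi, 180.0 - dphi)        # position angles wrap at 180 degrees
    return dphi >= phi_jump
```

Galaxies flagged in this way would still require the inspection of the unsmoothed and smoothed profiles described in the text before being labelled "IB" or "OIB".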
Spirals

The sample of spirals has been divided into four groups based on the degree of scatter (δǫ) observed in the ǫ profiles. δǫ is measured for all AS in the range log10 AS ≥ 1.5. Group 1 contains spirals that have the least scatter while group 4 has galaxies with the largest scatter. Galaxies with δǫ ≤ 0.05 are included in group 1, for group 2 the range is 0.05 < δǫ ≤ 0.1, those with 0.1 < δǫ ≤ 0.2 are in group 3, and finally galaxies with δǫ > 0.2 define group 4. The grouping of the spiral sample has been done by visual examination of the ellipticity profiles and is therefore quite crude; our intention is simply to highlight the interesting features apparent in the shape profiles of spiral galaxies viewed in the NIR. In these groups, galaxies are organized in increasing order of (2MASS) ellipticity.

Group 1

Ellipticity: The ellipticity profiles of galaxies in this group are shown in Fig. 10. Only four galaxies are optically classified (RC3) as barred systems. These are NGC 4262(2), NGC 4024(3), NGC 3384(10), and NGC 4394(13). Except for the last one, the shape profiles of the other three galaxies possess a distinctive feature, i.e., a clear appearance of peak(s) in the profiles, indicating (qualitatively) similar morphology in both the optical and the NIR. These three galaxies are labeled "OIB", where "OIB" stands for "optical and infrared bar". We label NGC 4394(13) as "OB", where "OB" stands for "optical bar" only. We give our explanation for this labeling below. NGC 4151(6) is a highly spherical galaxy compared to the other members of this group. It has ǫ < 0.05 from the galactic center up to log10 AS ≈ 3.2; beyond this limit the ellipticity increases a little. This galaxy, an intermediate ringed transitional spiral, is most likely dominated by a very large spherical bulge. The profiles of IC 3831(8), NGC 4143(9), NGC 3418(11), and NGC 5326(12) are quite similar to those of elliptical galaxies. NGC 3912(15) and UGC 5739(16) are two highly flattened galaxies in this group. Their profiles appear "centrally arched", an unusual feature shown by several NIR elliptical galaxies.

Orientation: The orientation profiles of galaxies in this group are presented in Fig. 11. The overall twists (∆Φ) in these galaxies range between ∼5° and 55°. NGC 4151(6), a galaxy of spherical shape, shows the largest scatter in its orientation. Since highly spherical contours can be perturbed easily by noise, it is not surprising that the galaxy would have large variations in its orientation in different wave-bands. The profile of NGC 4143(9) is quite similar to that of the barred galaxy NGC 3384(10), although its orientation changes smoothly compared to that of NGC 3384(10).

• The profiles of NGC 3177(4) and NGC 3684(5) show a bar like feature although these galaxies are optically classified as non-barred systems. We label these galaxies as "IB" after careful visual inspection of their unsmoothed and smoothed profiles.

The profiles of NGC 4651(7) and NGC 4394(13) deserve attention because of their complex morphologies. The ellipticity profile of NGC 4651(7) has a peak that is confined within 2.5 < log10 AS < 3.0. In this region its orientation is different from that of the bulge and the disk. Since it is an early type spiral of rs subclass, we think the feature is due to the ring. According to the RC3 classification NGC 4394(13) is a barred galaxy. But from the NIR profile it is not clear whether the galaxy really possesses a bar like structure.
The galaxy is devoid of any distinct peak in its shape profile even though it has a considerable amount of twist (∆Φ ∼ 32°). It shows higher flattening near the edge, which is probably the elongation of the disk. We are uncertain about the actual shape and therefore label the galaxy as "OB" (optical bar). We, therefore, label them as "OIB". The profiles of NGC 3277(2) and NGC 4714(8) are quite similar to those of elliptical galaxies.

Orientation: The orientation profiles of the galaxies in this group are shown in Fig. 13. The overall twists range between ∼5° and 71°. The profiles of NGC 3974(1) and IC 357(7) have similar characteristics: a constant direction along the central bars, while the orientation changes in the central bulge and the outer disk.

• The following three galaxies are optically classified non-barred spirals: NGC 4146(3), IC 863(6), and UGC 5903(9). However, these galaxies show bar like features in their NIR profiles and therefore we label them as "IB".

Note that the orientation profile of NGC 4152(4) has two distinct peaks. Its ellipticity profile, however, does not possess any feature that would qualify it as a barred galaxy. Therefore we are not certain whether or not it can be considered a barred galaxy. The orientation profile of NGC 5478(12) shows two different directions. Its shape changes considerably around the region where the position angle changes. The variation in ellipticity is extended over a wide region. From visual inspection a question can be raised about whether or not the galaxy is a barred system. A careful inspection of both the unsmoothed and smoothed profiles indicates that it is not a barred galaxy.

Orientation: The orientations of galaxies are shown in Fig. 15. The overall twists range between ∼10° and 71°. The twist as well as the scatter in the orientations of these galaxies are high compared to those in the previous two groups.

• The following three galaxies are optically classified as non-barred spirals: UGC 2303(1), UGC 3053(2), and NGC 3618(8). The NIR profiles of these galaxies, however, show features similar to barred systems. We label these galaxies as "IB" after visual inspection of their unsmoothed and smoothed profiles.

Group 4

Ellipticity: The ellipticity profiles of galaxies in this group are shown in Fig. 16. According to the RC3 catalogue, the following four galaxies are barred systems: UGC 2705(4), UGC 3171(9), NGC 5510(13), and UGC 1478(16). It is interesting to note that none of the NIR profiles of these galaxies has characteristic features resembling bar like systems. Therefore we label these galaxies as "OB".

Orientation: The orientations of galaxies in spiral group 4 are shown in Fig. 17. The overall twists range between ∼15° and 70°. Galaxies in this group have the highest scatter in their orientations compared to those in the other three groups. Among the "OB" galaxies, UGC 1478(16) has a unique profile. The galaxy has a distinct, highly flattened, central bar. Its orientation is constant along the central bar and changes sharply to the orientation of the disk.

Comparison with the 2MASS Estimate

We use the shape parameters estimated by the 2MASS as the reference. Our analysis should provide estimates similar to the 2MASS for contours enclosing larger areas. Since the 3 × σn isophote corresponds to a low surface brightness level, i.e., a region near the edge of a galaxy image, we would expect agreement only in this region.
For the purpose of comparison with the 2MASS, we use only the relative error between the shape of the contour closest to the Ks band 3 × σn isophote and the 2MASS estimate. In most cases we find excellent agreement (within ∼10% of the 2MASS). However, in a few cases the agreement is poor (beyond ∼10% of the 2MASS) and in this section we report only these cases. We want to stress that the result presented here is obtained after careful comparison of both the unsmoothed and smoothed profiles of the parameters. We do not include galaxies whose unsmoothed profiles agree with the 2MASS (i.e., within ∼10% of the 2MASS). For example, we do not include NGC 4394 (13, spiral group 2; Fig. 12) in the list of galaxies given below since its unsmoothed profile shows excellent agreement with the 2MASS. The apparent mismatch between the 2MASS and our estimate for the smoothed profile is due to the effect of smoothing (as mentioned in §3). If the unsmoothed profile of a galaxy shows disagreement (i.e., beyond ∼10% of the 2MASS) in the first place, it is also reflected in its smoothed profile, and only then do we include the galaxy in the list. For these galaxies, therefore, the estimates of the parameters from both the unsmoothed and smoothed profiles disagree with the 2MASS. We note that an agreement in the ǫ profile of a galaxy does not necessarily mean that it will also have an agreement in its Φ profile. Note particularly that for several elliptical galaxies we do not have any contour near the 3 × σn level. We do not attempt to make any comparison for these galaxies.

Ellipticity: The difference in the estimate of ellipticity is seen only for lenticular and spiral galaxies. The galaxies showing the difference comprise ∼32% of the entire sample. We want to point out that while for some galaxies (e.g., 3 and 8 in Fig. 4) the estimates of ǫ are in agreement with the 2MASS, the estimates of Φ for these galaxies (3 and 8 in Fig. 5) differ. In some cases our analyses do not reach the 3 × σn isophote level (e.g., 3 and 8 in Fig. 4; 8 and 9 in Fig. 8; 6 in Fig. 10; 7 in Fig. 12), which is a likely reason for disagreement. We draw the reader's attention to the galaxies numbered 11 and 14 in Fig. 4 and listed above. The ǫ and Φ profiles of these galaxies may give the reader the impression that they do not reach the 3 × σn level (recall that the figures showing these profiles are for smoothed contours). In fact they do reach the level for the unsmoothed profiles and disagree with the 2MASS. This particular appearance of the profiles arises due to excessive smoothing at the outer part of these galaxies (see §3). We provide analyses for smoothed isophotes. Contour smoothing can be thought of as a method to minimize the effect of noise in order to restore the original shape. Since the 2MASS results have not been estimated from smoothed isophotes, we want to emphasize that there is no guarantee that the shape estimates provided by the survey represent the actual shape of galaxy contours. After careful comparison of both the unsmoothed and smoothed profiles of the galaxies, we feel reasonably confident that the disagreement is not entirely due to the methodological difference. It is likely that our approach is better capable of revealing the actual shapes of galaxy isophotes, especially around the edge.

CONCLUSIONS

We have analyzed a sample of galaxies of various Hubble types obtained from the 2MASS catalogue. The sample contains 112 galaxies imaged in the NIR J, H, and Ks bands.
We have used the ellipticity (ǫ) and orientation angle (Φ), as functions of the area within the isophotal contour, as the diagnostics of galaxy shape. These measures have been constructed from a set of non-parametric shape descriptors known as the Minkowski Functionals (MFs). The ellipticity and orientation for each galaxy have been derived at 30 different surface brightness levels in each of these bands. The ellipticity and position angle (for the Ks band only) provided by the 2MASS are used as the reference in our analysis. Our results show that the elliptical galaxies with ǫ ≥ 0.2 appear to be centrally spherical. These galaxies show smooth variations in ellipticity with size, quantified by the area within the contour, and are more flattened near the edge than in the central region. A variation as strong as ∆ǫ ≥ 0.25, from the center towards the edge, is noticeable in the more flattened systems, e.g., NGC 4125, NGC 3377, NGC 4008, and NGC 5791 (9, 10, 12, and 15 in group 2 of elliptical galaxies; Fig. 6). This behavior is similar to that found in previous studies of ellipticals in visual bands (Jedrzejewski 1987; Fasano & Bonoli 1989; Franx et al. 1989). The similarity of the apparent shapes at optical and NIR wavelengths indicates that the low ellipticity noticed in the central regions of these galaxies is intrinsic rather than an artifact contributed by the seeing effect. Additionally, the ǫ profiles of these galaxies appear very similar in the different NIR bands, with very low scatter. This suggests that the morphological differences that are likely to appear in different bands are weak in NIR elliptical galaxies. The twist (∆Φ), characterized by the overall change in the orientation with radius, observed in the orientations of elliptical galaxies decreases with increasing flattening. The relatively small twist shown by ellipticals with ǫ > 0.2 suggests that elongation and isophotal twist are likely to be anti-correlated in the NIR wave-bands. We perform a Spearman correlation test between the twist and the deviation in ellipticity (∆ǫ) for the entire sample of elliptical galaxies. The test result (correlation coefficient −0.35 and probability 0.05) indicates a significant anti-correlation between ∆Φ and ∆ǫ (see Fig. 18). Note that in the test, a small value of the probability with a negative coefficient suggests a significant anti-correlation. To check whether this result is due to a methodological artifact or indicates a physical effect, we rerun the test two more times with different numbers of galaxies in the elliptical sample. For the first run, we construct the sample after removing only galaxy 10 from group 1. This galaxy shows the largest twist (∆Φ ∼ 68.5°) in the entire sample and, therefore, we discard it as an outlier. We find that the removal of this galaxy slightly weakens the result (correlation coefficient −0.34 and probability 0.06) but the correlation still remains significant. In the next run, we remove nine galaxies from the entire sample: the first eight galaxies from group 1, because of their spherical shapes (ǫ ∼ 0.1), and galaxy 10. This time the test gives a correlation coefficient of −0.37 and a probability of 0.09. Although a greater probability indicates less confidence for this subsample, the result does not differ substantially from that for the original sample. It suggests that the anti-correlation is a physical effect rather than a fluke. The NIR correlation between ∆Φ and ∆ǫ for elliptical galaxies is similar to that at optical wavelengths.
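The Spearman test used above is straightforward to reproduce; a minimal sketch with scipy is shown below, where the input arrays are synthetic stand-ins for the measured per-galaxy ∆ǫ and ∆Φ of the elliptical sample, not the actual values.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# synthetic stand-ins for the per-galaxy overall ellipticity change and overall
# twist (degrees) of the elliptical sample, not the measured values
delta_eps = rng.uniform(0.0, 0.35, size=32)
delta_phi = 40.0 * (0.35 - delta_eps) + rng.normal(0.0, 8.0, size=32)

rho, p_value = spearmanr(delta_eps, delta_phi)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.2f}")
# a negative coefficient with a small probability indicates a significant
# anti-correlation, which is the criterion applied in the text
```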
For elliptical galaxies in visual bands, it has been known for a while that the maximum apparent flattening and the maximum observed twist are inversely related (Galletta 1980). An interesting aspect of this correlation would be the possible coupling between NIR and optical light (Jarrett et al. 2003). Elliptical galaxies with ǫ < 0.2 usually show large variations in their orientations. These galaxies also show considerable differences in both ellipticity and orientation in different bands. The large twist observed in these galaxies should be taken with caution, since spherical contours obtained from nearly spherical galaxies are highly prone to spurious effects such as background noise. The NIR lenticular galaxies have properties similar to those of ellipticals or disk galaxies. The profiles of a few of these galaxies show trends similar to ellipticals. A few of these galaxies show characteristic properties resembling those of galaxies with bar like structures. The lenticular galaxies in our sample, in general, have larger scatter in both ǫ and Φ than the ellipticals. In the entire sample of lenticulars, at least 2 galaxies have bar like signatures in their profiles. The observed twists in the lenticulars' orientations are comparable to the trend noticed in spherical galaxies (ǫ < 0.2). The properties of these galaxies stress the fact that morphological classification strongly depends on the wavelength studied. It may be possible that S0 galaxies do not exist in the long wavelength part of the spectrum. This is simply our speculation and more elaborate studies are needed to make this a definite conclusion. The sample of spirals includes 40 galaxies that are optically classified as non-barred galaxies. At least 11 of these galaxies show bar like features in their NIR profiles, increasing the number of bar like systems to ∼45%. Several previous studies have also reported a higher frequency of barred systems in disk galaxies (Seiger & James 1998; Eskridge et al. 2000). Our result is in agreement with all of these studies except Seiger & James (1998), who reported a ∼90% frequency of barred galaxies from a sample of 45 spirals imaged in the NIR J and K bands. Our estimate indicates that a significant fraction of disk galaxies are intrinsically non-barred systems. The absence of a bar like feature in the profiles of 29 spiral galaxies favors this argument. This is also consistent with Eskridge et al. (2000). Among 19 normal spiral galaxies, 17 galaxies are SA-type. The other two, UGC 5739 in group 1 and IC 2947 in group 3, are irregular/peculiar type galaxies. Of these 17 SA-type spirals, 5 galaxies have distinct bar like features in their shape profiles, indicating that ∼30% of SA-type galaxies have optically hidden bars. Our result is in agreement with Eskridge et al. (2000), who reported that ∼40% of galaxies have optically hidden bars from a sample of 186 NIR H-band spirals containing 51 non-barred SA-type galaxies. Note that several disk galaxies show multiple bar features in their profiles. However, in this study we only report their morphologies and do not attempt to draw any definite conclusion regarding their projected structure, since identifying galaxies with multiple bars is a complicated issue (see Wozniak et al. 1995; Erwin & Sparke 1999). It is important to note that our estimate of the frequency of barred galaxies is done by visual inspection. Since the sample of galaxies used in this study is not based on rigorous selection criteria, the results should be taken as an approximate estimate.
The overall isophotal twists observed in the orientations of spiral galaxies range between ∼3° and 71°. Many NIR barred spirals appear to have smaller, but still notable, twists in their orientations than at optical wavelengths (see, e.g., Wozniak et al. 1995). Two conclusions can be drawn after analyzing the orientation profiles of the NIR images. First, the NIR light in the J, H, and Ks bands is not fully decoupled from Population I light. They are most likely weakly coupled. Second, the different regions (central bulge, bar, and outer disk) of disk galaxies are dynamically linked. This is manifested in the continuous change in the orientations of the majority of the disk galaxies. Exceptions are noticed in a few cases, e.g., NGC 3384 (10, group 1; Fig. 11), NGC 4152 (4, group 2; Fig. 13), UGC 2303 (1, group 3; Fig. 15), where the change in the orientation is very sharp. But it is important to note that in the region where the sharp changes are observed, the galaxy contours appear to be very spherical (ǫ ≤ 0.15). A drastic change in the orientations of spherical contours does not reliably indicate the actual nature of the internal structure, since these types of contours are prone to spurious effects.

Acknowledgments

We thank the referee for his constructive criticisms and many helpful suggestions. We are indebted to Bruce Twarog for a thorough and critical reading of the manuscript. His suggestions improved the paper significantly. NR thanks Tom Jarrett, Hume Feldman, and Barbara Anthony-Twarog for useful discussions. This research has made use of the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.

APPENDIX

Here we briefly describe the role of contour smoothing in our analyses. We also discuss the sensitivity of the parameters to any disturbance present on the contour. In order to emphasize the role of contour smoothing we show in Fig. 19 the ellipticity measured from all three AEs, both for unsmoothed and smoothed elliptical galaxies in group 1. In this figure the dark, medium, and light colors are used to represent the J, H, and Ks bands, respectively. The solid, dashed, and dashed-dotted lines show the area, perimeter, and EC ellipses for each band. As we can see from the unsmoothed contours in the left panel, the ellipses differ from one another in each band mostly near the edge. This deviation is caused by the background noise, since the galaxy surface brightness distribution is steeper near the center and shallower outward. We should also note that in a region far away from the center of a galaxy the absolute value of the distribution is very small. In either case any kind of background noise will have a strong effect on the outer part of the distribution. Therefore when one constructs contours along the edge of a galaxy, they appear highly deviated no matter what their true shapes are. Even if the contours were truly elliptical, they could appear to have an arbitrary shape depending on the amount of distortion. Since the contours are not elliptic anymore, the AEs diverge from one another. This divergence reflects the apparent shape of the contours. One can see from the right panel that contour smoothing reduces the role of noise significantly. It helps restore the original shape of the contours. At the same time one can also notice that the differences among the ellipses reduce substantially.
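To make the role of contour smoothing concrete, the following sketch applies a simple weighted moving average to the vertices of a closed contour; the window, weights and number of passes are illustrative and do not reproduce the exact unequally weighted scheme of paper I.

```python
import numpy as np

def smooth_contour(contour, weights=(1.0, 2.0, 4.0, 2.0, 1.0), passes=3):
    """Weighted moving average of the vertices of a closed (N, 2) contour.
    Each vertex is replaced by a weighted mean of itself and its neighbours;
    the window wraps around because the contour is closed.  Repeated passes
    progressively suppress the small-scale, noise-driven wiggles."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    half = len(w) // 2
    pts = np.asarray(contour, dtype=float)
    for _ in range(passes):
        acc = np.zeros_like(pts)
        for k, wk in enumerate(w):
            acc += wk * np.roll(pts, k - half, axis=0)
        pts = acc
    return pts
```

As noted in the text, too many passes applied to a contour made of only a few points can shrink it toward its centroid, which is exactly the information loss discussed for very large, highly irregular contours.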
To demonstrate further, we collect the first four galaxies from each group of spiral galaxies and show the ellipticity for both unsmoothed and smoothed contours in Figs. 20 and 21, respectively. The motive, once again, is to reemphasize the role of contour smoothing. When we compare Figs. 19 and 21, it appears that the color difference is, in general, stronger for spiral galaxies and, within spirals, it is stronger for certain Hubble types. We see that the galaxies in spiral group 4, which are mostly late-type, have significant differences in their shapes (at least as revealed by the ǫ parameter) when studied in the three different NIR bands. We now proceed to check the sensitivity of the Minkowski shape measures in detecting image contamination by a foreground star. The J band contours of NGC 4278, NGC 3193, and NGC 3379 (1, 4, and 7 in elliptical group 1) are shown in Fig. 22. This figure also includes the contours of NGC 5077 (5 in elliptical group 2) from the same band. Note that the presence of a star is apparent in all three bands. It is most prominent in the J band and this is why we show the contour plot in this band. The left panel of Fig. 23 shows the ellipticity profiles of these galaxies and the right panel shows the relative difference in the areas enclosed by the three tensor AEs. In the left panel we use solid, dashed and dashed-dotted lines to represent the measures from the area, perimeter and EC AEs. The dotted line is the measure from the scalar functional AE that has not been used in this analysis. It is included only for the purpose of illustration. Note that the construction of the scalar ellipse is different from that of the tensor ellipses. Since the scalar functionals are simply the area (AS) and perimeter (PS) of a given contour, the scalar AE is the ellipse that has the same area and perimeter as the contour itself. For a perfectly elliptical contour, the scalar AE will converge with its tensor counterparts. The scalar functionals do not carry enough information to attribute an orientation to their ellipse. The right panel does not show any measurement from the scalar functional. From Figs. 22 and 23, we see that a foreground star embedded in an image causes galaxy isophotes to deviate from their original shapes. It usually adds small, lobe-like features to the otherwise smooth and spherical contours. The functionals easily pick up this type of signal present on the contour and translate it into the shape parameters. This demonstration highlights the fact that MF based measures can be used for automatic detection of features attached to the image body. Our analysis is further supported by NGC 5077 (number 5 in elliptical group 2). This galaxy is interesting for our purpose since it has been archived in the 2MASS catalogue as an image free from foreground stars.
The contour plot of the galaxy, however, reveals that this is not the case, since the isophotal contours are perturbed by an uncommon feature. Apart from the contour plot, the behavior of the ellipticity profile or of the plot showing the ratios of the sizes of the AEs also gives a strong indication of the presence of something unusual in the image body. From the extent of the feature along the radial direction one can infer the possible nature of the object that distorts the image contours. A close inspection of the contours and a comparison with the other galaxies mentioned above lead us to conclude that a foreground star is still left embedded in the galaxy image. With these examples at hand, we feel confident in suggesting that both the ellipticity and the ratios of the sizes of the AEs can be used simultaneously as filtering tools in image processing. These measures may prove useful in reducing the chance of contamination by foreground stars while constructing large galaxy catalogues from surveys such as the Sloan Digital Sky Survey (SDSS).

Figure 2. The relative difference in the areas enclosed by the three different AEs as a function of contour area (AS) for a selection of elliptical galaxies. Each panel contains a total of nine curves: three curves corresponding to the three AEs in each band. The dark, medium and light colors represent the J, H, and Ks bands, respectively. The area, perimeter, and EC ellipses are shown, respectively, by the solid, dotted, and dashed-dotted lines. The vertical dashed line represents the area within the contour that corresponds to the Ks band 3 × σn isophotal region. For some galaxies this line goes beyond the horizontal range shown. The numbers 1 to 16 at the top-right of each panel are used just to label the galaxies.

Figure 4. Ellipticity as a function of contour area (AS) for elliptical galaxies in group 1. The thick dashed line shows the mean of the J, H, and Ks band measurements; two thin solid lines show the maximum and minimum ellipticity measured in these three bands. The horizontal and vertical dashed lines represent, respectively, the 2MASS estimate of the Ks band 3 × σn isophote ellipticity and the area within the contour corresponding to that isophote. Note that for some galaxies the vertical line goes beyond the horizontal range of the figure. The NED galaxy name, its RC3 classification, and the overall deviation (∆ǫ) in ellipticity are shown at the top of each panel. The difference between the highest and lowest value of the thick dashed line, in the range log10 AS ≥ 1.5, is used to measure ∆ǫ. A similar style is followed for the rest of the presentation. For the meaning of the symbols "A" and "S" see the text.

Figure 10. Ellipticity as a function of contour area (AS) for spiral galaxies in group 1. The labels "OIB" and "OB" represent, respectively, "optical and infrared bar" and "optical bar". Note that the spiral galaxies are divided into four groups. The groups are organized using the scatter (δǫ) observed in the ellipticity profiles. The galaxies in spiral group 1 have the least scatter (δǫ ≤ 0.05).

Figure 11. Orientations of galaxies in spiral group 1. Galaxies numbered 2, 4, and 6 are scaled by a factor of 2 to fit the range along the vertical axis. No scaling is applied to the other galaxies.

Figure 13. Orientations of galaxies in spiral group 2. Galaxies numbered 3, 6, and 7 are scaled by a factor of 2 to fit the range along the vertical axis. No scaling is applied to the other galaxies.

Figure 14. Ellipticity as a function of contour area (AS) for spiral galaxies in group 3. The galaxies have scatter in the range 0.1 < δǫ ≤ 0.2 in the J, H, and Ks band measurements. Note that NGC 4375(6), an optically barred system ("OB"), lacks a characteristic feature in its NIR profile.

Figure 15. Orientations of galaxies in spiral group 3. Galaxies numbered 1, 4, 5, 14, and 16 are scaled by a factor of 2 to fit the range along the vertical axis. No scaling is applied to the other galaxies.

Figure 18. Twist (∆Φ, in degrees) versus deviation in ellipticity (∆ǫ) for the entire sample of elliptical galaxies. The Spearman correlation test, with a correlation coefficient of −0.35 and a probability of 0.05, indicates a significant anti-correlation between these two parameters.

Figure 20. The line styles and colors are the same as before.
The convergence of the AEs is a result of smoothing and illustrates the fact that the contours of the spiral galaxies are elliptic in nature, although the ellipticity varies with image size. In comparison to Fig. 20, one can see that contour smoothing reduces noise while keeping the main features intact. One can also notice that the color difference is stronger in late type spirals (lower four panels on the right, marked with the number 4).

Figure 22. The unsmoothed contour plots of NGC 4278 (E1+), NGC 3193 (E2), NGC 3379 (E1), and NGC 5077 (E3+) in the J band. In each of these galaxy images a foreground star is left embedded. The presence of a lobe-like feature on the contours is the signature of an embedded foreground star.

Figure 23. Ellipticity (left panel) and relative differences in the areas enclosed by the different AEs (right panel) for elliptical galaxies where a foreground star is embedded in the galaxy images. The figure shows information from all three bands. Dark, medium, and gray colors represent, respectively, the J, H, and Ks bands. The dotted line represents the ellipticity from the scalar functional (see the appendix for more); the solid, dashed, and dashed-dotted lines represent the ellipticities of the area, perimeter, and EC AEs, respectively. A sharp kink in the ellipticity profiles is the signature of the embedded foreground star. A similar feature can also be seen in the plot showing the ratios of the sizes of the AEs.
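For completeness, the scalar-functional AE discussed in the appendix (the ellipse having the same area and perimeter as the contour) can be recovered from AS and PS alone; the sketch below does so using Ramanujan's approximation for the ellipse perimeter and a bisection on the axis ratio, which is an illustrative recipe rather than necessarily the one used in paper I.

```python
import numpy as np

def ellipse_perimeter(a, b):
    """Ramanujan's approximation for the perimeter of an ellipse."""
    h = ((a - b) / (a + b)) ** 2
    return np.pi * (a + b) * (1.0 + 3.0 * h / (10.0 + np.sqrt(4.0 - 3.0 * h)))

def scalar_ae_axes(area, perimeter, tol=1e-10):
    """Semi-axes (a, b) of the ellipse with the given area and perimeter,
    found by bisection on the axis ratio q = b/a in (0, 1]."""
    lo, hi = 1e-6, 1.0
    for _ in range(200):
        q = 0.5 * (lo + hi)
        a = np.sqrt(area / (np.pi * q))      # pi * a * b = area, with b = q * a
        # at fixed area the perimeter grows as the ellipse gets more elongated,
        # so a computed perimeter larger than the target means q is too small
        if ellipse_perimeter(a, q * a) > perimeter:
            lo = q
        else:
            hi = q
        if hi - lo < tol:
            break
    q = 0.5 * (lo + hi)
    a = np.sqrt(area / (np.pi * q))
    return a, q * a

# check: an ellipse with a = 3, b = 1 has area 3*pi
print(scalar_ae_axes(3.0 * np.pi, ellipse_perimeter(3.0, 1.0)))   # ~ (3.0, 1.0)
```

As stated in the appendix, this scalar AE carries no orientation information, which is why no orientation is attributed to it in the figures.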
Modelling Heat Pumps with Variable EER and COP in EnergyPlus: A Case Study Applied to Ground Source and Heat Recovery Heat Pump Systems

Dynamic energy modelling of buildings is a key factor for developing new strategies for energy management and consumption reduction. For this reason, the EnergyPlus software was used to model a near-zero energy building (Smart Energy Building, SEB) located in Savona, Italy. In particular, the focus of the present paper concerns the modelling of the ground source water-to-water heat pump (WHP) and the air-to-air heat pump (AHP) installed in the SEB building. To model the WHP in EnergyPlus, the Curve Fit Method was selected. Starting from manufacturer data, this model allows the COP of the HP to be estimated for different temperature working conditions. The procedure was extended to the AHP. This unit is part of the air-handling unit and works as a heat recovery system. The results obtained show that the HP performance in EnergyPlus can closely follow the manufacturer data if proper input recasting is performed for the EnergyPlus simulations. The present paper clarifies a series of points that are missing from the EnergyPlus reference sources and allows the large community of EnergyPlus users to properly and consciously run simulations, especially when unconventional heat pumps are present.

Introduction

Energy savings and emissions reduction are two major keywords in the worldwide research scenario. One of the main responsible sectors is the building sector, with approximately 40% of energy consumption and 36% of CO2 emissions in the EU [1]. In this framework, it is easy to understand why dynamic energy simulations of buildings are of great importance and can be used either to develop innovative solutions for new buildings or to evaluate different retrofit interventions to enhance the energy performance of existing ones [2][3][4]. To decrease energy consumption and pollutant emissions, the first mandatory step is the reduction of building loads (i.e., building energy needs) by means of actions on the building envelope and related to the building operating conditions [5]. These include techniques to increase the external insulation, combined with actions for the exploitation of solar gains to reduce heating loads during winter [6]. After the activities devoted to minimizing the energy needs of the building, the second step is to select and correctly size innovative plants for heating and cooling that, if possible, include the use of renewable energies. For example, solar energy can be exploited in thermal solar collectors to produce Domestic Hot Water (DHW) and in photovoltaic (PV) fields for powering the air conditioning system of buildings [7]. In particular, during the summer season, periods with higher solar radiation coincide with higher electrical energy demand for the cooling air conditioning systems, and the use of PV modules helps to reduce the stress on the national electrical grid. In recent years, one of the most frequently selected plant solutions for air conditioning in buildings has been reversible heat pumps (HPs), which allow building requests to be satisfied both in heating and in cooling. Among them, Ground Coupled Heat Pumps (GCHP) are a very effective configuration, exploiting the nearly constant ground temperature during the year to increase the performance coefficients (EER in cooling mode and COP in heating mode) [8][9][10].
Performance of the ground-coupled heat pump greatly depends on several factors, including the fluid temperatures, the ground thermophysical properties and the configuration of Borehole Heat Exchangers (BHEs) used in the installation [11]. Another interesting plant solution is air-to-air heat pumps devoted to heat recovery on ventilation. This type of systems can be simple (i.e., direct expansion, DX) for the air conditioning of a single zone of a building, or more complex and sophisticated (i.e., included in air-handling unit, AHU), used for ventilation, humidification, filtration and air-conditioning with energy recovery (both active and passive techniques) of entire buildings. It is apparent that one of the main problems in the correct sizing and in the modelling process of a heat pump is to take properly into account the variation of its performance in different operating conditions. In fact, the COP of a heat pump, even at full load, changes at different condensations and evaporation temperatures, which depend on the source and load side temperatures. For technological innovative solutions, the modelling process to include the heat pump in energy dynamic simulations can be difficult. Lee et al. [12] recently developed a simplified heat exchanger model using artificial neural networks. Using a genetic algorithm, they managed to optimize the operating and design parameters of the heat exchanger in order to maximize the seasonal EER and the seasonal COP with respect to the outdoor temperature. Torregrosa-Jaime et al. [13] modelled the performances of a Variable Refrigerant Flow (VRF) equipment. They analysed the model proposed in EnergyPlus and they developed a new one using a BIM approach. Finally, they compared the results obtained with the manufacturer data. Models for heat pumps pertain to two main groups, with two different approaches to the problem [14]. On one hand, there are the "equation fit models", which consider the heat pump as a black box, whose behavior is simulated by means of correlations with coefficients derived from manufacturer data. On the other hand, there are "deterministic models" that consider each component of the system applying energy and mass conservation equations. The main differences between the two approaches are the amount of data requested and the application aim. The equation fit models are easier because they need only the knowledge of the performance at the operating conditions usually given by the manufacturer [15,16]. On the contrary, deterministic models also need data for specific HP components: these parameters often derive from dedicated measurement campaigns and are not provided by the manufacturer. This approach is useful for the study and design of specific components of the heat pump. In dynamic simulations over long periods (e.g., yearly simulations for building response to environmental conditions and internal energy transfers), the working conditions of a heat pump change continuously, and it is mandatory to include, inside the model, at least the COP variation with temperature. The starting point are the data provided by the manufacturer in terms of the performance coefficients of the heat pump in heating and cooling at reference working conditions. This paper deals with HP modelling in EnergyPlus environment. 
The "equation fit model" approach is applied for modelling a water-to-water heat pump (Curve Fit Method [17]) and an air-to-air heat pump [18], the latter being used for heat recovery purposes on the air ventilation circuit. To this aim, a case study was taken into account, referring to an n-ZEB building located at the Savona Campus of the University of Genova, Italy. In this case, a water-to-water heat pump is coupled with the ground and fed by water circulating in a field of borehole heat exchangers (BHEs). If the ground heat transfer is correctly evaluated, the returning fluid temperature (from the boreholes to the heat pump) is known with good approximation. The load side water temperature is imposed, based on the building request and on the operating conditions of the distribution system (composed of fan coils and radiators for the analyzed case). The investigated air-to-air heat pump is included in an air-handling unit and serves as an energy recovery system on the exhaust air from ventilation. Thus, for that special air-to-air heat pump, the load side temperature is the external one, whereas the source side temperature is the return temperature from the building, which is nearly stable in both cooling and heating conditions. By means of EnergyPlus simulations using "equation fit models", the variable EER and COP of both heat pumps were evaluated for selected pairs of source side and load side temperatures. For the water-to-water heat pump, the effect of the water volumetric flow rates (source and load side) was also taken into account by employing the manufacturer data related to the partial load factor (PLF) effect. For the air-to-air heat pump, the original contribution of the present study is the analysis of the suitable input datasets in the case of unconventional heat pumps like the one with energy recovery considered here. The good agreement between expected results and simulations validates the analysis. In addition, the present paper clarifies a series of points that are missing from the EnergyPlus reference sources, in order to allow the large community of EnergyPlus users to efficiently, properly and consciously run simulations when considering temperature varying COPs.

Water-to-Water Heat Pump Model

This paragraph presents the literature models selected in the present study in order to properly address the input to the EnergyPlus program to simulate water-to-water and air-to-air heat pumps (Figure 1) with temperature varying COP. The detailed description (and the related validations) provided here is an original contribution of the present study, since the EnergyPlus references do not fully specify how the code can properly manage the running mode when the performance of inverse machines has to be customized in terms of manufacturer information. In EnergyPlus, two different options are available to model water-to-water heat pumps, i.e., the "Curve Fit Method" and the "Parameter estimation-based model" [15]. For the case study reported in this paper, the selected model is the "Curve Fit Method", which allows quicker simulation of the water-to-water heat pump, avoiding the drawbacks associated with the more computationally expensive "Parameter estimation-based model". The variables that influence the water-to-water heat pump performance are mainly the inlet water temperatures (source and load side) and the water volumetric flow rates (source and load side).
The governing equations of the "Curve Fit Method" for the cooling and heating modes are the following [16]: Cooling Mode: Heating Mode: where the parameters are defined as: The subscript "ref" indicates values at reference conditions that must be correctly specified. The reference temperature is always equal to 10 °C (283.15 K); even when the manufacturer data are provided at different values, the performance must be recast to this temperature. In cooling mode, the reference conditions are those at which the heat pump operates at the highest (nominal) cooling capacity indicated in the manufacturer's technical references. This condition does not match the real heat pump/chiller behavior, since its performance can be even better than that at the nominal capacity, provided that the working temperatures are "better" than those of the performance test. Similarly, in heating mode, the reference conditions are those at which the heat pump operates at the highest (nominal) heating capacity. In EnergyPlus, when the "Curve Fit Method" is selected to model water-to-water heat pumps, one must specify the parameters at the reference conditions and provide the equation fit coefficients. Once the type of water-to-water heat pump is selected, the generalized least squares method is used to evaluate the coefficients Ai, Bi, Di, Ei from the data available in the manufacturer's catalogue. Air-to-Air Heat Pump Model The air-to-air heat pump is again modelled with an "equation fit model" [18]. Assuming a constant supply air volumetric flow rate as operating condition, the cooling and heating capacities, the EER and the COP (and EIR = 1/EER) depend only on temperatures, and the equations selected to model the air-to-air heat pump are biquadratic. In particular, the performance depends on the "load air wet-bulb temperature" TL,in wb and the "source air dry-bulb temperature" TS,in db in cooling mode, and on the "load air dry-bulb temperature" TL,in db and the "source air dry-bulb temperature" TS,in db in heating mode. Cooling Mode: Heating Mode: In the previous equations, the parameters are defined as: TL,in wb Load side inlet (in the HP) air wet bulb temperature, [K]. TL,in db Load side inlet (in the HP) air dry bulb temperature, [K]. TS,in db Source side air inlet (in the HP) dry bulb temperature, [K]. The subscript "ref" indicates values at reference conditions that must be correctly specified. In EnergyPlus, the reference conditions are required both in cooling and in heating mode. For the standard operating condition in cooling mode, the reference load side air wet-bulb temperature TL,in wb ref is equal to 19.4 °C (with a corresponding reference load side air dry-bulb temperature TL,in db ref equal to 26.7 °C), whereas the source side air dry-bulb temperature is fixed at 35 °C. In heating mode, the reference load side air dry-bulb temperature TL,in db ref is equal to 21.1 °C, whereas the source side air dry-bulb temperature is fixed at 8.3 °C. In fact, for conventional reversible heat pumps, the load side conditions correspond to the internal building ones (return air temperature TRA), whereas the source side conditions correspond to the external ones (external air temperature TOA). The Case Study: The Smart Energy Building (SEB) The Smart Energy Building (SEB) was conceived and built by the University of Genoa (Unige) as an innovative, high-performance building meeting goals of zero carbon emissions, energy and water efficiency and building automation.
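As a sketch of the coefficient-identification step mentioned above, the snippet below performs an ordinary least-squares fit of the cooling-capacity coefficients, assuming the linear-in-ratios form commonly used by the EnergyPlus water-to-water equation-fit model. The catalogue points, reference values and variable names are placeholders for illustration, not the Clivet data of this study.

```python
# Sketch of recovering the curve-fit coefficients A1..A5 from catalogue points by
# ordinary least squares, assuming the water-to-water equation-fit form
#   Qc/Qc_ref = A1 + A2*(TL_in/Tref) + A3*(TS_in/Tref) + A4*(VL/VL_ref) + A5*(VS/VS_ref)
# Temperatures in kelvin, Tref = 283.15 K. The catalogue points below are
# placeholders, not the data of the unit discussed in the paper.
import numpy as np

T_REF = 283.15  # K

def fit_cooling_capacity(points, q_ref, vl_ref, vs_ref):
    """points: iterable of (TL_in[K], TS_in[K], VL[m3/s], VS[m3/s], Qc[kW])."""
    rows, y = [], []
    for tl, ts, vl, vs, qc in points:
        rows.append([1.0, tl / T_REF, ts / T_REF, vl / vl_ref, vs / vs_ref])
        y.append(qc / q_ref)
    coeffs, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(y), rcond=None)
    return coeffs  # A1..A5

# Illustrative (made-up) catalogue points: (TL_in, TS_in, VL, VS, Qc)
catalogue = [
    (285.15, 303.15, 2.0e-3, 2.4e-3, 41.0),
    (285.15, 308.15, 2.0e-3, 2.4e-3, 39.0),
    (290.15, 303.15, 2.0e-3, 2.4e-3, 44.5),
    (290.15, 308.15, 2.0e-3, 2.4e-3, 42.3),
    (285.15, 303.15, 1.3e-3, 2.4e-3, 38.8),
    (290.15, 308.15, 1.3e-3, 1.6e-3, 40.1),
]
A = fit_cooling_capacity(catalogue, q_ref=41.0, vl_ref=2.0e-3, vs_ref=2.4e-3)
print("A1..A5 =", np.round(A, 3))
```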
It is located in the Unige Campus of Savona, Italy. The two-storey building, in operation since February 2017, has a total floor area of about 1000 m². In particular, the SEB is characterized by: • high-performance thermal insulation materials for the envelope; • ventilated facades; • a photovoltaic field (21 kWp) on the roof; • very low consumption LED lamps; • a rainwater collection system; • a thermal system composed of: - an air handling unit (AHU), associated with an air-to-air heat pump installed on the roof, which circulates, cleans and cools/heats the air of the building; - a ground coupled heat pump (GCHP), which produces cold/hot water to feed fan coils and radiators for cooling/heating purposes; during winter the hot water is also used for Domestic Hot Water (DHW); - two solar thermal collectors, exclusively for DHW production; - an air source heat pump (ASHP), for DHW production as backup unit of the solar collectors. The innovative nature of this building suggests the opportunity to analyze its performance from a dynamic point of view and to develop an energy model suitable for hourly simulations. EnergyPlus was selected for this purpose. In particular, the present paper focuses on the modeling of the water-to-water ground coupled heat pump (GCHP) and of the air-to-air heat pump associated with the AHU. Modelling the Water-to-Water Ground Coupled Heat Pump (GCHP) For the Smart Energy Building, the geothermal heat pump in operation is a Clivet unit, model WSHN-XEE2 MF 14.2, operating with brine (geothermal side) and water. In particular, the data refer to operation with a mixture of water and propylene glycol at 30% on the source side. The manufacturer catalogue provides the heat pump performance at full load as a function of the source/load fluid temperatures. Table 1 reports the manufacturer data for size 14.2 in cooling mode. The full-load performance in heating mode as a function of the temperatures is provided by the manufacturer in two different tables, depending on the range of the source side water temperature. For our test case, it is interesting to consider a wide range of working conditions for the source side temperature. In fact, for a GCHP with an expected long operating life, the temperature of the ground, starting from the undisturbed value, can change considerably in time [19] and, consequently, the temperature of the fluid circulating in the BHE field changes. The two manufacturer tables for heating mode differ in the selected values of the load side temperatures; thus, a proper interpolation must be applied. This is a typical problem with manufacturer data and cannot be managed in EnergyPlus in a different way. The combined dataset obtained for heating mode is presented in Table 2, where the data obtained by interpolation are shown in grey. It is important to notice that the performances in Tables 1 and 2 are provided as a function of the outlet temperatures TS,out and TL,out, whereas Equations (11)-(16) contain the inlet ones, TS,in and TL,in. However, the manufacturer catalogue provides details about the operating conditions related to the performances of Tables 1 and 2.
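A minimal sketch of the table-combination step described above: two manufacturer heating tables given at different load-side outlet temperatures are brought onto a common grid by linear interpolation, mirroring the grey cells of Table 2. All the numbers below are placeholders, not the Clivet catalogue values.

```python
# Sketch of combining two manufacturer heating tables that are given at different
# load-side outlet temperatures: the missing load temperatures are filled in by
# linear interpolation. All values are illustrative placeholders.
import numpy as np

# COP at a fixed source-side condition, tabulated versus TL,out [degC]
tl_out_table_a = np.array([30.0, 40.0, 50.0])   # load temperatures in table A
cop_table_a    = np.array([5.1, 4.2, 3.4])

tl_out_table_b = np.array([35.0, 45.0, 55.0])   # load temperatures in table B
cop_table_b    = np.array([4.6, 3.8, 3.1])

# Build a common grid of load temperatures and interpolate each table onto it.
tl_out_common = np.unique(np.concatenate([tl_out_table_a, tl_out_table_b]))
cop_a = np.interp(tl_out_common, tl_out_table_a, cop_table_a)
cop_b = np.interp(tl_out_common, tl_out_table_b, cop_table_b)

for tl, ca, cb in zip(tl_out_common, cop_a, cop_b):
    print(f"TL,out = {tl:4.1f} degC  COP(A) = {ca:.2f}  COP(B) = {cb:.2f}")
```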
In detail, the EER and COP data refer to the following imposed temperature differences at the load and source sides: Cooling (Table 1): Heating (Table 2): For a complete analysis, it is also necessary to take into account the effect of the water volumetric flow rates (source and load side) on the heat pump performance. The manufacturer provides only little information about the effect of the partial load factor PLF on the EER and COP of the water-to-water heat pump, in particular the performance at PLF of 67% and 33%. Both the EER and the COP are enhanced at partial load, according to Table 3. Considering that both the source and load sides of the HP work at constant temperature difference according to Equations (11) and (12), the PLF represents not only the ratio between the actual cooling or heating capacity and the maximum value, but also the corresponding ratio between the water volumetric flow rates at the load side. From the values of EER or COP in Table 3 it is possible to deduce the power consumption (cooling and heating mode) and the source side heat transfer rate and, as a consequence, the water volumetric flow rates at the source side. The coefficients Ai, Bi, Di and Ei of Equations (1)-(6) are not available from manufacturer references. The only way to obtain them is to iteratively adjust their values by comparison with the available datasheet values, minimizing an error. In this paper, a simple optimum search process was applied to the cooling or heating capacity and power consumption values provided in the manufacturer catalogue. The final calculated coefficients are presented in Table 4. The corresponding graphs show the cooling/heating capacities and the power consumption for cooling and heating, respectively, as a function of the source side outlet water temperature TS,out, with the load side outlet water temperature TL,out as a parameter and considering the three load conditions PLF = 1, 0.67, 0.33. During the summer season, the cooling capacity Q̇C decreases as the source side outlet water temperature TS,out (fluid temperature entering the BHE field) increases, and increases with the load side outlet water temperature TL,out (fluid temperature to fan coils and radiators). On the contrary, the power consumption P increases with the source side outlet water temperature TS,out, whereas the effect of the load side outlet water temperature TL,out is nearly negligible. As expected, both the cooling capacity and the power consumption decrease as the partial load factor PLF decreases. During winter, on the other hand, the heating capacity Q̇H increases for increasing source side outlet water temperature TS,out and slightly decreases for increasing load side outlet water temperature TL,out. The power consumption P is marginally affected by the source side outlet water temperature TS,out, whereas it increases with the load side outlet water temperature TL,out. The particular trend of the curves given by Equations (3) and (4), with two slight inflection points at TS,out = 3 and 5 °C, is due to the particular operating conditions of the manufacturer catalogue in heating mode. In fact, the manufacturer tables in heating mode are built for different imposed temperature differences at the load and source sides, according to Equation (12). Thus, different source side "outlet" water temperatures TS,out correspond to the same source side "inlet" water temperature TS,in = 8 °C, which is the input of Equations (3) and (4).
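The bookkeeping described above, from the catalogue EER at a given PLF to compressor power, source-side heat rate and source-side flow rate, can be sketched as follows, assuming an imposed source-side temperature difference and plain-water properties; the capacity and EER values are placeholders, not catalogue data.

```python
# Sketch of the deduction described above: starting from the catalogue EER at a
# given partial load factor (PLF), deduce compressor power and source-side heat
# rate from a simple energy balance (Qsource = Qcooling + P in cooling mode), and
# from them the source-side volumetric flow rate for an imposed temperature
# difference. Numerical inputs are illustrative placeholders.

RHO_WATER = 1000.0     # kg/m3 (plain water assumed; the brine properties would differ)
CP_WATER = 4186.0      # J/(kg K)

def source_side_flow_cooling(q_cool_full_kw, plf, eer, delta_t_source_k):
    """Return (P_kW, Q_source_kW, V_source_m3_per_s) at the given PLF."""
    q_cool = plf * q_cool_full_kw          # actual cooling capacity
    power = q_cool / eer                   # compressor power from the EER
    q_source = q_cool + power              # heat rejected to the ground loop
    v_source = (q_source * 1e3) / (RHO_WATER * CP_WATER * delta_t_source_k)
    return power, q_source, v_source

if __name__ == "__main__":
    for plf, eer in ((1.0, 4.3), (0.67, 4.9), (0.33, 5.4)):   # placeholder EER values
        p, qs, vs = source_side_flow_cooling(40.0, plf, eer, delta_t_source_k=5.0)
        print(f"PLF={plf:4.2f}: P={p:5.2f} kW, Qsource={qs:5.2f} kW, "
              f"Vsource={vs*1000:5.3f} l/s")
```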
As in the cooling case, both the heating capacity and the power consumption decrease as the partial load factor PLF decreases. The agreement between the manufacturer dataset and the "equation fit model" approach is good, with an average relative error of less than 7% for both cooling and heating mode at full load, and lower than 15% when PLF = 0.67 and 0.33 are also considered. Modelling the Air-to-Air Heat Pump For the Smart Energy Building, the selected air-to-air heat pump associated with the air handling unit (AHU) is the Clivet Zephir CPAN-XHE3 Size 3, with a standard air flow of 4600 m³/h. This volumetric flow rate fulfils the ventilation required by the Italian standards for the SEB building, given its volume and expected occupancy levels. This air unit is very peculiar, especially if compared with the options conceived and available in EnergyPlus. The system is a primary-air plant with thermodynamic recovery of the energy contained in the return air. The primary air comes entirely from outdoors (fresh air) at temperature TOA, whereas the return air comes from the building inner rooms at temperature TRA. The return air, before being released to the atmosphere, exchanges heat with the condenser in cooling mode and with the evaporator in heating mode. The return air represents a favorable thermal source, stable in time, offering a lower temperature on the condenser side in cooling mode and a higher temperature on the evaporator side in heating mode. As a consequence, the energy required by the compressors is reduced by up to 50% [20]. The manufacturer catalogue provides the reversible heat pump performance as a function of the external air temperature TOA (dry bulb/wet bulb) and of the supply air temperature TSA. Moreover, the manufacturer catalogue reports two different types of performance coefficients: the thermodynamic efficiencies (EERth and COPth) and the overall efficiencies (EER and COP), which also consider the power of the auxiliary systems. In cooling mode, the selected supply humidity ratio is equal to 11 gvap/kgair and the reference return air temperature TRA is 26 °C. In heating mode, the reference return air temperature TRA is 20/12 °C (dry bulb/wet bulb). To model the air-to-air HP in EnergyPlus, the data corresponding to the "MC" operation mode (which implies zero post-heating in cooling mode) were not considered. The distinctive operating conditions of the present heat pump (with energy recovery) allow it to reach high values of the performance coefficients, but create some challenges in modelling the system in EnergyPlus. In fact, the "load side" temperature becomes the external air temperature TOA, whereas the "source side" temperature is the return air temperature TRA, both in cooling and in heating mode. Consequently, the reference conditions suggested by EnergyPlus (Par. 2.2) are no longer valid and new reference conditions are defined for the present heat pump. In particular, in cooling mode, the new reference external air temperature TOA is set to 40/25 °C (dry bulb/wet bulb), whereas the reference return air temperature TRA is set to 26 °C (Table 5). In heating mode, the new reference external air temperature TOA is set to −5 °C (dry bulb), whereas the reference return air temperature TRA is set to 20/12 °C (dry bulb/wet bulb) (Table 6). The EnergyPlus model for the air-to-air heat pump implements Equations (7)-(10), which require the performance as a function of both the load and source side temperatures, whereas the catalogue provides data at a single return air temperature. Thus, it is necessary to create an extended database to obtain, by optimization, the coefficients ai, bi, ci and di of Equations (7)-(10).
The selected return temperatures TRA used to extend the dataset are 20, 22 and 26 °C. By keeping the air volumetric flow rate constant, for the same external and supply conditions (temperature and humidity), the cooling and heating capacities also remain constant. On the contrary, modifying the return temperature changes the "source temperature" and, as a consequence, the performance coefficients (EER and COP) and the compressor power are modified. The values of the thermodynamic performance coefficients (EERth and COPth) for the new values of the return temperatures TRA are obtained by multiplying the corresponding Carnot performance coefficients (EERCarnot and COPCarnot), based on the evaporator and condenser temperatures, by two sets of constants CCi and CHi. The coefficients CCi and CHi are calculated here from the Carnot law and the manufacturer data according to the expressions below. Moreover, they are assumed to depend on the supply air temperature TSA but to be independent of the return temperature TRA. In cooling mode, the evaporator temperature Tevap is assumed to be nearly equal to the supply air temperature TSA, whereas the condenser temperature Tcond is evaluated by means of energy balances on the components of the HP. In heating mode, the condenser temperature Tcond is assumed to be nearly equal to the supply air temperature TSA, whereas the evaporator temperature Tevap is evaluated by means of energy balances on the components of the HP. Finally, the overall efficiencies (EER and COP) are deduced by assuming a constant fan power consumption equal to 1 kW for all the operating conditions considered. The results of this analysis are presented in Tables 5 and 6, in cooling and heating mode respectively (the extended data are in grey, the original manufacturer data in white). Then, by means of an optimum search process comparing the performance values of Tables 5 and 6, the coefficients ai, bi, ci and di of Equations (7)-(10) were obtained; the results are presented in Table 7. As an example, Figures 4 and 5 show the cooling and heating capacities and the HP performances (EIR and COP) as a function of the external condition TOA, with the return temperature TRA as a parameter. In particular, the manufacturer data reported in Tables 5 and 6 are compared with the curves obtained by using Equations (7)-(10) with the least square error coefficients of Table 7. As expected, if the volumetric flow rate and the external TOA and supply conditions (temperature and humidity) are not changed, the cooling and heating capacities do not depend on the return air temperature TRA (Figures 4a and 5a). Figure 4. Cooling capacity (a) and HP performance (b) in cooling mode: comparison between Table 5 and Equations (7) and (8) with Table 7 coefficients. Figure 5. Heating capacity (a) and HP performance (b) in heating mode: comparison between Table 6 and Equations (9) and (10) with Table 7 coefficients. On the contrary, the performance parameters EIR (=1/EER) and COP depend on both the external and the return air temperatures (Figures 4b and 5b). In cooling mode, the EIR increases with the external air temperature TOA (load side temperature) and increases with the return air temperature TRA (source side temperature). In heating mode, the COP decreases as the return air temperature TRA (source side temperature) increases, and it decreases with the external air temperature (load side temperature) for TOA > 0 °C. For TOA < 0 °C, the COP instead increases with the external air temperature because of the energy consumption of the defrost contribution.
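The dataset-extension step described above can be sketched as follows: the thermodynamic EER at return temperatures not present in the catalogue is estimated as a constant, calibrated on a catalogue point, times the Carnot value. For brevity this sketch ties the condenser temperature to the return air through a fixed approach temperature, whereas the paper evaluates it from component energy balances; all numbers and the approach value are illustrative assumptions.

```python
# Sketch of extending the catalogue: EER_th at new return air temperatures TRA is
# estimated as a constant CC (calibrated on one catalogue point) times the Carnot
# EER built from estimated condenser/evaporator temperatures. The fixed condenser
# "approach" is a simplification of the energy-balance evaluation used in the paper.

def eer_carnot(t_evap_k: float, t_cond_k: float) -> float:
    return t_evap_k / (t_cond_k - t_evap_k)

def extend_cooling_point(eer_th_catalogue, t_sa_c, t_cond_catalogue_c, t_ra_new_c,
                         condenser_approach_k=8.0):
    """Estimate EER_th at a new return air temperature TRA (all inputs in degC)."""
    t_evap = t_sa_c + 273.15                       # Tevap ~ supply air temperature TSA
    # Calibrate the multiplying constant CC on the catalogue point.
    cc = eer_th_catalogue / eer_carnot(t_evap, t_cond_catalogue_c + 273.15)
    # Re-apply it at the new return-air (source-side) condition.
    t_cond_new = t_ra_new_c + condenser_approach_k + 273.15
    return cc * eer_carnot(t_evap, t_cond_new)

if __name__ == "__main__":
    for t_ra in (20.0, 22.0, 26.0):
        eer = extend_cooling_point(eer_th_catalogue=6.0, t_sa_c=15.0,
                                   t_cond_catalogue_c=34.0, t_ra_new_c=t_ra)
        print(f"TRA = {t_ra:4.1f} degC -> estimated EER_th = {eer:.2f}")
```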
The agreement between the manufacturer data and the best-fit curves is good, and the coefficients can be implemented in EnergyPlus to represent the behaviour of the present air-to-air heat pump. The average relative error (fit profiles vs. manufacturer data) in cooling mode is about 2.3% for the cooling capacity Q̇C and 3.3% for the EER. In heating mode, the average relative error is 2.4% for the heating capacity Q̇H and 2.6% for the COP. Water-to-Water Heat Pump The proposed "Curve Fit Method" presented in the previous paragraphs was validated with reference benchmark simulations in EnergyPlus. A simplified model was created for this purpose, with a building able to work at nearly constant operating conditions for the whole simulation period, i.e., 1 month. The modelled building is equipped with the GCHP Clivet WSHN-XEE2 MF 14.2 and both cooling and heating modes are simulated. Different working conditions are analysed, imposing different load side outlet TL,out and source side inlet TS,in water temperatures. The load of the building and the thermal response of the ground are properly calibrated to maintain the desired temperature differences at the source and load sides (Equations (11) and (12)). The results for the full load operating conditions are presented in Tables A1 and A2 in the Appendix, where the first two columns represent the imposed operating temperatures. From the EnergyPlus simulations, it is possible to infer the inlet load and outlet source temperatures and verify that the temperature differences at the source and load sides are comparable to the desired values (Equations (11) and (12)). Air-to-Air Heat Pump The equation fit model approach was implemented in EnergyPlus also for the air-to-air heat pump, by means of Equations (7)-(10) with the coefficients of Table 7. Also in this case, a simplified building model was created with nearly constant operating conditions for the whole simulation duration, i.e., 1 month. The modelled building is equipped with the Clivet Zephir CPAN-XHE3 Size 3 and both cooling and heating are simulated. Different operating conditions were simulated, namely the ones presented in Tables A3 and A4 in the Appendix for cooling and heating mode, respectively. In the tables, the results of the EnergyPlus simulations are reported and compared with the data obtained with the implemented equation fit model. The agreement is very good, with an average relative error of about 1.5% for the cooling capacity Q̇C, 1.6% for the EER, 5.1% for the heating capacity Q̇H and 0.26% for the COP. Figures 8 and 9 show the same comparison graphically (Figure 8: cooling mode, comparison between EnergyPlus simulations and Equation (8) with Table 6 coefficients; Figure 9: COP in heating mode, comparison between EnergyPlus simulations and Equation (10) with Table 6 coefficients). Conclusions The aim of this work was to provide a series of insights to EnergyPlus users when simulations must take into account the effects of the operating temperatures on the performance of heat pumps, chillers and even heat-recovery heat pumps in ventilation circuits. The starting point was the equipment installed in a recent near-zero energy building at the Authors' University. In particular, the final goal was to properly model the dependence of the heat pump performance on the temperature, at both load and source side, and possibly on the partial load operating conditions.
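A small sketch of the validation metric quoted above, i.e., the average relative error between the equation-fit predictions and the EnergyPlus results over the simulated operating points; the sample values are placeholders standing in for Tables A1-A4.

```python
# Sketch of the validation metric: average relative error between EnergyPlus
# results and the equation-fit predictions. The arrays are illustrative placeholders.
import numpy as np

def average_relative_error(reference, predicted) -> float:
    reference = np.asarray(reference, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs(predicted - reference) / np.abs(reference)))

# Illustrative EER values: equation-fit model vs. EnergyPlus simulation output.
eer_model = [4.41, 4.05, 3.72, 3.40]
eer_energyplus = [4.35, 4.01, 3.78, 3.36]
err = average_relative_error(eer_model, eer_energyplus)
print(f"average relative error = {100 * err:.2f}%")
```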
The actually installed water-to-water and air-to-air heat pumps have been considered, and the equation fit model has been implemented with a series of modifications to adapt it to the data typically available from the manufacturer. The coefficients needed in the equation fit models have been determined by means of an optimum search and, to validate the approach, a simplified building model equipped with the selected heat pumps and chiller has been created. The simulations confirmed the expected heating and cooling equipment performance at given, nearly constant working temperatures, even if small differences (within 7%) emerged between the simulation trends and the equation fit model input data. The relative error slightly increases (within 15%) if the partial load operating conditions are considered (PLF = 0.67, 0.33). Author Contributions: The research and the writing of the present paper are equally distributed among the three Authors. All authors have read and agreed to the published version of the manuscript. Appendix A
Plasma-wave generation in a dynamic spacetime We propose a new electromagnetic-emission mechanism in magnetized, force-free plasma, which is driven by the evolution of the underlying dynamic spacetime. In particular, the emission power and angular distribution of the emitted fast-magnetosonic and Alfv\'en waves are separately determined. Previous numerical simulations of binary black hole mergers occurring within magnetized plasma have recorded copious amounts of electromagnetic radiation that, in addition to collimated jets, include an unexplained, isotropic component which becomes dominant close to merger. This raises the possibility of multimessenger gravitational-wave and electromagnetic observations on binary black hole systems. The mechanism proposed here provides a candidate analytical characterization of the numerical results, and when combined with previously understood mechanisms such as the Blandford-Znajek process and kinetic-motion-driven radiation, allows us to construct a classification of different electromagnetic radiation components seen in the inspiral stage of compact-binary coalescences. INTRODUCTION With the imminent direct detection of gravitational waves (GWs) by second generation detectors (Dooley et al. (2015)), the pursuit of an understanding of the electromagnetic (EM) counterparts to GWs becomes urgent, as a joint observation in both channels will provide irreplaceable means to diagnose properties of the astrophysical sources (Christensen et al. (2011)). One of the most important types of sources that could radiate both gravitationally and electromagnetically is a coalescing compact binary, involving black holes and/or neutron stars surrounded by magnetized plasma (forming the so-called "magnetospheres"). The magnetic field could originate from the accretion disk of the binary or neutron stars themselves, and the plasma could be generated from vacuum polarization, and/or charged particles coming off of the star surfaces and the accretion disk. Recent numerical simulations (Palenzuela et al. (2010b); Neilsen et al. (2011); Alic et al. (2012)) have shown that EM radiation is indeed given off by such systems in abundance even before merger and for binary black hole systems (while current joint-observation efforts concentrate on the postmerger stage of systems with at least one neutron star (Nissanke et al. (2013))), providing further optimism for the success of multi-messenger astronomy. The next step is then to clarify the various physical processes at work that, together, produces the EM signals seen numerically (in particular, an isotropic radiation that dominates near merger time has not been previously understood analytically). A complete classification and characterization of these processes is a prerequisite for extracting useful information about the binary systems from the observed EM signals. We provide such an analytical characteriza-tion in this work and compare it with previous numerical results (see Fig. 1 below). Within magnetospheres, the energy density of the magnetic field often dominates over that of the plasma particles, creating what's referred to as a force-free plasma. Thanks to the seminal works by Goldreich & Julian (1969) and Blandford & Znajek (1977), it is widely accepted that force-free plasma can act as a medium for powering outgoing EM radiation (or jets) at a cost of reducing the rotational energy of neutrons stars or black holes (Thorne (1994); Meier (2012); Palenzuela et al. (2011); Spruit et al. (1997); Hansen & Lyutikov (2001)). 
More recent studies (Hansen & Lyutikov (2001); Morozova et al. (2014)) suggest that a force-free plasma could also drain the (linear-motion) kinetic energy of moving objects to power EM radiation in the form of jets launching from star surfaces (or the black hole horizon), accompanied by some isotropic flux. We refer to this as the kinetic-motion-driven radiation 5 , which is also seen from satellites moving in Earth's ionosphere (Drell et al. (1965a,b)). There is, however, a third mechanism, which we shall call the gravitation-driven radiation, and which will be the focus of this paper. When the background spacetime becomes dynamic, the local EM energy density of the magnetized plasma deviates from its equilibrium values, and these inhomogeneities tend to propagate out via plasma waves. A similar phenomenon is known to exist in spacetimes without matter (the Gertsenshtein-Zeldovich effect (Gertsenshtein (1962); Zeldovich (1973))), where the outgoing radiation consists purely of vacuum EM waves. In addition, the generation of magnetohydrodynamic (MHD) waves by the influence of gravitational waves has been examined in Duez et al. (2005). Although this effect has not been explicitly discussed in the context of force-free magnetospheres, we note that force-free electrodynamics (FFE) can be viewed as the low-inertia limit of relativistic magnetohydrodynamics (McKinney (2006)). In this paper, we will examine essentially the same physical process, but where the driving gravitational dynamics is not an (idealized wave-zone) gravitational wave. Within force-free plasma, energy can be carried away by two different classes of waves. One class is called the fast-magnetosonic waves in the local short-wavelength limit (the wavelength is much smaller than the radius of spacetime curvature), whose global and longer-wavelength counterparts are named the "trapped modes" in Yang et al. (2015). These tend to behave similarly to vacuum EM waves and propagate in a more egalitarian fashion in terms of sky directions. The other class of waves are the Alfvén waves, generalizing to "traveling waves" (Yang et al. (2015)) or principal null solutions (Zhang et al. (2015)). A salient feature of the Alfvén waves and their generalizations (for brevity, we will not distinguish between them below, and similarly for the other class) is that they propagate along the magnetic field lines, and are as such automatically collimated if the magnetic field threads through the orbital plane of the binary nearly orthogonally (a natural configuration for an accretion-disk-supported field). Below, we show how to compute their fluxes as generated by the gravitationally-driven process. SET-UP OF THE PROBLEM Let us assume that there is a stationary FFE configuration in a stationary background spacetime with metric gB. According to the discussions in Uchida (1997c,d); Gralla & Jacobson (2014), it is possible to find at least one pair of "Euler potentials" φ1B, φ2B, such that FB = dφ1B ∧ dφ2B, where FB is the background electromagnetic field tensor. Now suppose that the spacetime becomes dynamic and its metric is g = gB + εh, where ε parametrizes the magnitude of the spacetime deformation from its stationary state. Correspondingly, the Euler potentials will also deviate from their original values: φ1,2 = φ1,2B + δφ1,2, whereby the non-linear FFE wave equations they satisfy are those of Gralla & Jacobson (2014), with F ≡ dφ1 ∧ dφ2.
Note that the Hodge star * is now with respect to the total metric g, so that it depends on the metric perturbations. In order to study the gravitationally-induced plasma waves, we linearise the above equation to the leading order in ε and obtain Eq. (2). This equation describes the excitation of the plasma fields δφ1,2 by the source on the right-hand side, which is linear in h. It implies that GWs interacting with magnetized plasma can generate plasma waves. Moreover, it predicts that a time-dependent Newtonian source within magnetized plasma also induces plasma radiation, an effect that has been overlooked before and could have observational consequences. RADIATION IN NEARLY FLAT SPACETIMES Now we specialize to a simple yet important example where the background metric is flat, i.e., gµν = ηµν + hµν. This is a good approximation when the gravitational field generated by matter sources or GWs is weak. In addition, let us assume that the plasma is magnetized along the z direction, with field strength B, so that FB = B dx ∧ dy. When the spacetime becomes dynamic, the EM field 2-form can be written as in Eq. (3) (note that we consider only those FFE perturbations driven by the spacetime variations, and so use the same expansion flag ε). With this set-up, one can straightforwardly work out the Hodge star rules, plug them into Eq. (2), and obtain a coupled set of wave equations for δφ1,2. These equations can further be diagonalized by defining a new set of variables ψ1,2 (Eq. (4)), in which case the wave equations decouple into Eq. (5). The first equation describes a wave propagating along the magnetic field lines, in other words the Alfvén wave. The second equation describes the fast-magnetosonic wave, which propagates in all directions. These equations are gauge-invariant, as can be checked by substituting in the infinitesimal gauge transformations. Denoting the source terms in Eq. (5) as S1 and S2 respectively, the solutions to these wave equations can be obtained through the use of Green's functions, where Θ denotes the Heaviside step function. After evaluating ψ1,2, we can reconstruct δφ1,2, and subsequently F, by noting the relations whose solutions are obtained by applying the Green's function for 2-D elliptic equations, where Δx = x − x′ and Δy = y − y′. Analogously to the Gertsenshtein-Zeldovich effect, Eq. (5) together with Eq. (8) explicitly shows that GWs injected into magnetized plasma would generate both Alfvén and fast-magnetosonic waves. If the gravitational wave packet has a characteristic amplitude h and a length-scale λ, it is straightforward to see that the plasma-wave luminosity LGW satisfies LGW ∝ B^2 λ^2 h^2, a relationship that can be compared with future numerical experiments. Here we focus instead on the case where the source is generated by two orbiting compact masses, in order to study the radiation of a binary system in the inspiral stage. For a Newtonian matter source (as the leading order post-Newtonian term of the general relativistic expressions, which is sufficient for our purpose), h is given by the standard expression of Misner et al. (1973). When the source consists of a pair of orbiting black holes, the formulae above are valid away from the black holes, which are themselves replaced by point masses. However, the Newtonian approximation becomes inaccurate near the black holes. In addition, in order to compute the plasma waves far away and extract the energy flux, we must exclude the points enclosed by the black hole horizons. Therefore, in practice (see Sec. 5), we remove two excision spheres when computing the integrals in Eq. (8).
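A minimal numerical sketch of the retarded Green's-function integration referred to above (an integral of the kind used in Eq. (8)), in geometric units with c = 1; the source term is a toy placeholder rather than the Newtonian metric perturbation of the binary, and the grid and observer position are assumptions for illustration.

```python
# Minimal sketch of a retarded Green's-function solution of a flat-space wave
# equation, psi(t, x) = (1/4pi) * integral d^3x'  S(t - |x - x'|, x') / |x - x'|,
# evaluated as a crude sum over a discretized source region (c = 1).
import numpy as np

def psi_retarded(t, x_obs, source, grid, weights):
    """source(t, x): scalar source term; grid: (N, 3) points; weights: d^3x' volumes."""
    total = 0.0
    for xp, dv in zip(grid, weights):
        r = np.linalg.norm(x_obs - xp)
        if r < 1e-12:
            continue  # skip the singular self-point
        # (in the paper's setup, points inside the excision spheres around the
        #  black holes would be skipped here as well)
        total += source(t - r, xp) / r * dv
    return total / (4.0 * np.pi)

# Toy source: a compact oscillating blob (purely illustrative).
def toy_source(t, xp):
    return np.exp(-np.dot(xp, xp)) * np.cos(0.5 * t)

# Crude cubic grid covering the source region.
pts = np.linspace(-3.0, 3.0, 13)
grid = np.array([[x, y, z] for x in pts for y in pts for z in pts])
weights = np.full(len(grid), (pts[1] - pts[0]) ** 3)

print(psi_retarded(t=50.0, x_obs=np.array([40.0, 0.0, 0.0]),
                   source=toy_source, grid=grid, weights=weights))
```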
To test the sensitivity of the gravitation-driven luminosity to the choice of the excision radii, we vary their values from 2M to 3M (M being the black-hole mass) and observe that the resulting flux changes by less than 10 percent. For the data presented in Sec. 5, we therefore adopt a cut-off radius of 3M. We caution that this insensitivity to the excision radii could change significantly if we take into account relativistic (post-Newtonian) corrections to the metric. FLUX EXTRACTION According to Eq. (8), the fast-magnetosonic waves are quite similar to vacuum EM waves, and the source term S2 can likewise be decomposed into multipole contributions. Let us assume that the binary (with total mass M) follows near-circular motion with a period of 2π/Ω, in which case ψ2 in the radiative zone can be written as in Eq. (12), where the m = 0 piece corresponds to the DC monopole field, which does not radiate. The coefficients fm may be further decomposed into a summation of associated Legendre polynomials, starting from l ≥ |m|. In order to compute the energy flux, we need to reconstruct δφ1,2 with Eq. (10) (in the absence of ψ1) or, more efficiently, by noticing that δφ1,2 must possess asymptotic forms similar to Eq. (12), with coefficients g1,2m whose relationship to fm can be obtained using Eq. (4) with ψ1 = 0. We can then substitute these expressions into δφ1 and δφ2, and subsequently into Eq. (3), to obtain the field 2-form. It is then straightforward, although tedious, to extract from it the electric and magnetic field vectors and compute the Poynting vector. In the end, we arrive at the flux formula for the fast-magnetosonic waves, Eq. (15). The Alfvén waves, on the other hand, propagate along the magnetic field lines. Based on Eq. (8), we write ψ1 in the radiative zone |z| ≫ M as in Eq. (16), where ± stands for the top/bottom extraction surfaces and u ≡ t − z, v ≡ t + z. The effective radiative part of ψ1 is only a function of u for z ≫ M, and a function of v for −z ≫ M. One can write the associated δφ1,2 in a similar form, satisfying Eq. (4) with ψ2 = 0, from which we obtain the luminosity function of Eq. (18). For systems with mirror symmetry about the orbital plane, it suffices to compute the luminosity on one side only and double the result. BINARY BLACK HOLE COALESCENCE We can now compare our analytical predictions with numerical simulations of equal-mass binary black hole coalescences, try to identify the physical mechanisms behind the "isotropic" and "collimated" EM radiation seen there (Palenzuela et al. (2013)), and estimate the magnitude of each piece. To facilitate the comparison, we adopt the same contextual parameters as in the numerical experiments above, i.e., a binary black hole system with 10^8 solar masses for each hole and a background magnetic field of 10^4 Gauss. We also note that the strength of the EM emissions is much weaker than that of the gravitational-wave emission, whose radiation reaction leads to the shrinking of the orbital radius. As a result, it is a valid and common approximation to ignore any back-reaction of the EM radiation on the evolution of the spacetime. Both fast-magnetosonic and Alfvén waves are produced during the sequence of binary merger stages (inspiral, merger, and then ringdown), and they radiate mostly in the form of "isotropic" and "collimated" fluxes, respectively.
Below, we will concentrate on the inspiral stage (leading into the merger itself), which is the most interesting for multi-messenger astronomy. During this stage, the EM emissions can be classified into rotation-driven, kinetic-motion-driven and gravitation-driven types. The rotation-driven radiation is generated by the Blandford-Znajek mechanism, which supports jet-like radiation with a luminosity of the order of Eq. (19) (in cgs units and when the spin is aligned with the magnetic field), abbreviated as 2.4 L43 B4^2 Mi8^2 āi^2. Here Mi is the ith black hole mass and āi is the dimensionless spin parameter of the black hole, ranging from 0 to 1. As a black hole moves through magnetized force-free plasma, it launches collimated jets along the magnetic field lines (Palenzuela et al. (2010b); Neilsen et al. (2011)). The power of this radiation is proportional to v^2 and thus to Ω^(2/3). In addition, if the black hole also undergoes accelerated motion, it generates an additional Poynting flux similar to that of accelerated charges in vacuum, which can be attributed to fast-magnetosonic wave emission. Its power is of the order of (2/3) q^2 a^2 (the "Larmor formula"), where the effective monopole charge q should have a value ∼ 2BM^2 and the acceleration obeys a ∝ v^2/d ∝ Ω^(4/3). Summing up the two contributions, we obtain the kinetic-motion-driven luminosity of Eq. (20). We next consider the frequency dependence of the gravitation-driven EM emission for a binary black hole system. The source term of the fast-magnetosonic waves scales as M/d^3, where d is the orbital separation. Such a source term generates ψ2 in the multipolar-expansion manner of Eq. (12), with the luminosity of each multipole moment scaling as B^2 M^2 v^(2l) ∝ Ω^(2l/3). For unequal-mass binaries, the radiation contains a dipole piece with l = 1, whereas emission from an equal-mass binary starts at the quadrupolar order (l = 2). On the other hand, the source term for the Alfvén waves scales as M v Ω/d^2 and the corresponding flux scales as B^2 M^2 v^4 ∝ Ω^(4/3). In Fig. 1, we plot the Ω-dependent luminosities of both the fast-magnetosonic and the Alfvén waves for an equal-mass binary system (as simulated in Neilsen et al. (2011), Palenzuela et al. (2010b) and Alic et al. (2012)), with the cut-off radius chosen at 1.5 times the horizon radius (it turns out that the results are insensitive to the cut-off radius). More specifically, we substitute the density profiles appropriate for point masses following Newtonian Keplerian orbits into Eq. (11), and feed the resulting metric perturbation into the right-hand side of Eq. (5) to obtain the expressions for S1 and S2. These then allow us to numerically integrate Eq. (8) and acquire ψ1 and ψ2, representing the Alfvén and fast-magnetosonic waves respectively. To compute the Alfvén flux LAlf, we apply ∂u and ∂v to ψ1 and pass the results through a numerical Fourier transform to obtain ∂uA± and ∂vA± according to Eq. (16). Finally, another numerical integration according to Eq. (18) provides us with LAlf. We do this for several black hole separations, as signified by their different Keplerian orbital frequencies, and plot the results as the purple dots in Fig. 1. We also compute the fast-magnetosonic fluxes Lfast at these separations. In this case, we simply need to project rψ2 onto the exp(imφ) basis (taking m up to 30) and substitute the resulting fm values into Eq. (15) to compute Lfast. The results are plotted as the blue dots in Fig. 1.
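The multipole-projection step described above can be sketched as a discrete Fourier transform of r ψ2 sampled on an extraction ring, keeping azimuthal modes up to m = 30. The sampled field below is a toy placeholder, and the flux formula of Eq. (15) itself is not reproduced here.

```python
# Sketch of projecting r*psi2, sampled uniformly in the azimuthal angle phi on an
# extraction ring, onto the exp(i*m*phi) basis up to m = 30 via an FFT.
import numpy as np

M_MAX = 30

def azimuthal_modes(r_psi2_on_ring: np.ndarray) -> np.ndarray:
    """Return f_m for m = 0..M_MAX from N uniform samples in phi (N > 2*M_MAX)."""
    n = len(r_psi2_on_ring)
    coeffs = np.fft.fft(r_psi2_on_ring) / n   # c_m = (1/N) sum f(phi_k) e^{-i m phi_k}
    return coeffs[: M_MAX + 1]

# Toy field on the extraction ring: a dominant m = 2 pattern plus a weaker m = 4 one.
phi = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
r_psi2 = 1.0e-3 * np.cos(2 * phi - 0.3) + 2.0e-4 * np.cos(4 * phi)

f_m = azimuthal_modes(r_psi2)
for m in (0, 1, 2, 3, 4):
    print(f"m = {m}: |f_m| = {abs(f_m[m]):.3e}")
```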
From the figure, we can see that the luminosity values are consistent with the dominance of the quadrupolar contribution over the higher multipoles, with an Ω^(4/3) scaling. We can also read off the dependence of LAlf and Lfast on the magnetic field strength from their respective formulae (Eqs. (18) and (15)), which is B4^2. A simple dimensional consideration fixes the dependence on M8 for us, which is M8^(10/3). What remains in order to obtain a formula similar to Eq. (20) for the gravitation-driven case is the determination of the coefficients of proportionality, which set the overall amplitudes of the fluxes. These are simply the intercepts on the vertical axis of the solid purple and blue fitting lines in Fig. 1 (in other words, they come from actually solving the equations and are not new independent rough estimates). In the end, we obtain the scaling of the gravitation-driven radiation. Close to merger, the gravitation-driven, fast-magnetosonic radiation dominates over the flux contributions from the Blandford-Znajek and kinetic-motion-driven radiation (Eqs. 19 and 20). This is consistent with the numerical observations of Neilsen et al. (2011) and Moesta et al. (2012) (see the top-right corner of Fig. 1). On the other hand, we caution that our computations do not take nonlinearities into account, so the analytical fit to the numerical data should be interpreted with a pinch of salt. The aim of the present paper is only to demonstrate the existence of the gravitation-driven radiation, and the fact that it can potentially produce large fluxes, especially an isotropic one during merger, rather than to fit the numerical data with our zeroth-order calculation. In particular, our results should in no way be interpreted as fully "explaining" the numerical results. We note, for instance, that the matching for the fast-magnetosonic/collimated flux (blue crosses versus blue lines) at low frequencies is less accurate. Without a detailed examination involving targeted numerical experiments and higher-order analytical computations, we cannot state with certainty the exact reason for this, so future studies are required. Here, we can but point out some of the more obvious subtleties in the matching procedure. Most importantly, as mentioned above, the Newtonian approximation breaks down in the vicinity of the black holes in our zeroth-order calculation, and this happens regardless of the orbital separation. Although the fluxes change by only a few percent when we move the inner cut-off radius from 3M to 2M, the dominant contribution to our numerical flux integrations nevertheless originates from the neighbourhoods of the black holes, instead of the wave zone. Therefore, the omission of nonlinear relativistic effects might be the main approximation here, and taking the post-Newtonian or relativistic corrections into account may further change the luminosity estimates above. Other effects, such as absorption by the black holes, should also be treated properly. Secondly, the numerical fluxes are divided according to their directions of propagation, catering more for the observational consequences than for matching with analytical classifications. Such imperfect correspondences between the concepts employed by numerical and analytical studies lead to systematic matching errors.
For exam-ple, the collimation in the numerical study is defined to be flux propagating inside a cone of a certain opening angle, in analogy with the usual jet language, while for Alfvรฉn waves climbing the vertical magnetic field lines, a cylinder enclosing the binary (or two cylinders around individual black holes when they are far apart) would be more appropriate. Therefore, with a large extraction radius and when the black hole separation is large, the numerical cone would likely enclose a fair amount of fast-magnetosonic waves, contributing to the relative weakness of numerically measured isotropic flux. Many other numerical difficulties associated with subtracting off a background radiation in order to construct a division of the overall flux into the collimated and isotropic types, especially when the overall flux is weak, have been discussed in the numerical papers such as Neilsen et al. (2011) and Moesta et al. (2012). We refer interested readers to these important literature. In the future, more specifically designed numerical experiments are necessary to test this gravitation-driven emission mechanism, including possibly binary star, instead of binary black hole, simulations. Improved sophistication in analytical computations is also necessary, before the effects of the various simplifying assumptions we made in the present work can be disentangled. DISCUSSION We briefly comment on plasma wave generation during the other stages of binary black hole coalescences. During the merger phase, both the spacetime and the magnetosphere are highly dynamic, and the best tool to understand their evolution is through numerical simulations. However, in the ringdown stage, the time-dependent part of the emission arises from: (i) the ringdown of the magnetosphere, as described by its eigenwaves ; Yang et al. (2015)); (ii) the gravitational quasinormal modes will drive additional emission by coupling to the stationary part of the black-hole jets, an effect quantifiable using black-hole perturbation theory. Note that by the "ringdown" stage, we mean the period before the post-merger black hole settles down to Kerr. The settling time can be estimated as 1/ฯ‰ I 22 , where ฯ‰ I 22 is the imaginary part of the frequency for the l = 2, m = 2 quasinormal mode (the dominant mode). The value of ฯ‰ I 22 is about 0.1/M for Schwarzschild black holes and ฯ‰ I 22 โˆผ โˆš 1 โˆ’ฤ/M for rapidly spinning black holes, which asymptotes to zero in the extremal spin limit (i.e. the modes are long lived and the settling is protracted) 6 . For a post-merger black hole of 10 8 solar masses, the Schwarzschild formula translates into a settling time of about eight and a half hours. So although extremely transient in nature, this period may be observationally detectable. On the other hand, the real part ฯ‰ R 22 is โˆผ 1/M for rapidly spinning black holes and โˆผ 0.5/M for Schwarzschild black holes. During the ringdown stage, the gravitation-driven luminosity can be estimated as 6 For generic Kerr black holes, please see Fig. 5 in Yang et al. (2012) for the mode decay rates. while the Blandford-Znajek flux is approximately where a f is the spin parameter for the final black hole. As the final black hole in generic binary mergers is rotating, we expect the Blandford-Znajek contribution to be important, and the gravitation-driven emission to also be an important part of the total flux at least within a timescale of 1/ฯ‰ I 22 . 
During the ringdown stage, both the spacetime metric and the magnetosphere would be time-dependent, with similar but not exactly the same characteristic frequencies ). The gravitation-driven mechanism would account for the metric variation's modifying effect to e.g., the Blandford-Znajek process, but not that from the magnetosphere ringing. In other words, multiple transient effects are present and it would be difficult to disentangle the signals they generate. Nevertheless, if quasi-periodic flux variations from the postmerger black hole can be detected, then one could in principal do interesting measurements such as that on the black hole spin. For completeness, we can also estimate the flux modification due to the presence of current-sheets near the black holes, which is approximately the geometric mean of collimated and acceleration-induced radiations (see Eq. 42 in ). With units restored and according to Eq. (20), the corresponding luminosity is sub-dominant near merger. In addition, although we have examined the gravitation-driven plasma wave generation here in the context of force-free plasma, we expect similar signatures to persist in materials following more generic MHD equations. Finally, we note that in the binary black hole example, energy is emitted at very low frequencies (below the plasma frequency). In fact, during the Blandford-Znajek process, the outgoing energy flux is carried out at the DC frequency. This is allowed for MHD waves (including waves in force-free plasma), but not for unmagnetized plasma (Thorne & Blandford (2016)).
ACCURACY EMPATHY OF STUDENT COLLEGE OF JAVA TRIBE AND SUNDA TRIBE This study aims to describe the accuracy of empathy in Javanese and Sundanese students from the Guidance and Counseling Study Program at Ahmad Dahlan University. Samples were taken by a purposive sample that consists of Javanese students and Sundanese students. The instrument used was the empathy accuracy scale. The study results were analyzed using descriptive statistical analysis and different tests with Anova. The results showed there was no significant difference between the accuracy of empathy among Javanese and Sundanese students. This research also reveals that the highest aspect of empathy accuracy in Javanese students is an emotional concern, while Sundanese students are perspective-taking. This means that the accuracy of empathy among Javanese students is higher in understanding and feeling the emotions of others, while the accuracy of empathy of Sundanese students is higher in understanding and placing themselves in the minds of others. The results of this study can be used as a base for developing techniques and strategies in guidance and counselling services that focus on developing the accuracy of empathy in adolescents. INTRODUCTION Empathy has an important role in the life of Indonesian society which is a pluralist country consisting of various tribes and cultures. This diversity can be found from the diversity of regional languages, religions, customs and tribes. Consisting of various tribes and cultural values, it can be seen as a nation's wealth that needs to be preserved and preserved (Ratu, Misnah, and Amirullah 2019). On the other hand, such diversity can also be a trigger for conflict between ethnic groups if it is not based on empathy and the tolerance (Atkins, Uskul, & Cooper, 2016;Gustini, 2017). In addition to social life, empathy also has a role in various interactions in the educational environment, including in higher education. The development of student skills does not only lead to skills and cognitive development, but moral as a basis for virtue in behaviour are urgently needed (Agboola & Tsai, 2012). One of the moral parts that are the basis for individuals in interacting in the social environment is empathy (Borba, 2008;Goleman, 2007;Hoffmann, et al., 2016;Schultze-Krumbholz & Scheithauer, 2013;Shu, et al., 2017). Accuracy of empathy is the most essential ability of social intelligence that is built on primary empathy or basic empathy (Goleman, 2007). The term of accuracy empathic describes the ability to accurately deduce the specific content of the experiences and feelings of others (Ickes, 2010). So, someone who has empathy accuracy is someone who can consistently read the thoughts and feelings of others correctly. The development of psychology today, Goleman (2007) suggests an expansion of the term empathy which is divided into two namely primal empathy and empathic accuracy. Both primal empathy (primary empathy) and empathic accuracy (empathy accuracy) are subconstructs of social awareness which is a component of social intelligence. Primary empathy is one's ability to understand other people's feelings following nonverbal emotional cues, while empathy accuracy is one's ability to understand the thoughts, feelings, and intentions of others. So that in this sense three activities have empathic accuracy, namely first understanding the thoughts of others, then understanding what other people feel and then understanding the intentions of others shown both verbally and nonverbally. 
Accuracy of empathy is one's ability to accurately identify and understand emotional states, thoughts, and intentions of others (McLaren, 2013). In daily life, individuals are confronted with various events that make individuals empathize. However, the extent to which individual accuracy in empathizing is important to be able to provide an appropriate response to others and to improve harmony in social interactions (Cohen, et al. 2012;Hinnekens, et al., 2016;Ickes & Hodges, 2013). People who empathize accurately are people who can read the thoughts and feelings of the target, they can read the condition of other people not with their clothes, they take off their clothes and replace them with other people's clothes. With their new clothes, they can imagine what others think and feel. So when they read the situation will produce accurate conclusions following the real situation in the target (Ickes, 2010). Rogers (1992) accuracy of empathy is the ability to feel the personal world of others as if it were his, but without ever losing the quality "as if" it. Accuracy of empathy focuses on the connection between cognitive in an interaction, namely the ability to deduce what is in the minds of others (Klein & Hodges, 2001) and the ability to feel the emotional state of others appropriately (Smith, 2015;Steffgen, et al., 2011;Wright, Wachs, & Harper, 2018). Rogers (Pedersen, Crethar, & Carlson, 2008) states that the accuracy of empathy consists of cognitive components and affective components. Cognitive empathy is the ability to understand the condition or state of mind of another person precisely, and without losing the real condition. While effective ability is to feel a certain form or feeling as what is felt or told by others. According to Davis (1983), the accuracy of empathy can be measured from multidimensional components, namely the cognitive component and the affective component, each of which has two specs. The cognitive component is taking perspective and fantasy, while the affective component includes Emphatic Concern and Personal Distress. Mead in Davis (1983) emphasizes the importance of ability in perspective-taking for non-egocentric behaviour, that is, abilities that are not oriented to one's interests but the interests of others. Empathic concern is sympathy that is oriented towards others and attention to the misfortune of others. this aspect is also a reflection of a feeling of warmth that is closely related to sensitivity and concern for others. Cohen & Wheelwrught (2004), stated criticism of the Davis IRI scale that aspects of fantasy and personal distress are not used because they can measure other processes more broadly. So based on the views of the experts, then to measure the accuracy of empathy can be done by looking at the aspects of empathy in the form of perspective-taking, cognitive accuracy, emotional concern and accuracy emotional. Accuracy of empathy owned by individuals is influenced by the individual's cultural background to grow and develop (Chung, Chan, & Cassels, 2010;Park, et al., 2016;Sharifi-Tehrani, et al., 2019;Wang, et al., 2003). Culture influences identity and a set of attributes that determine identity. culture becomes one of the factors that influence individual development (Chopik, O"Brien, & Konrath, 2017;Maghrabi & Palvia, 2012). Culture is a human medium that translates and regulates human actions and gives meaning to what he does or consciously restrains (Dahl, 2012). 
Cultural background influences the perspectives and values of individuals, skills that are mastered and considered important, the expected role of adults, the development of language and communication skills, emotional expression and regulation, and the formation of self-image (Ormord, 2009). Culture influences what individuals think, feelings, how to dress, what and how individuals eat, talk, values and moral principles that individuals hold, as well as how individuals interact with each other and how individuals understand the world (Ratts, et al., 2016). The results of research conducted by Fathurroja (2018) on the description of the ethnic identity of adolescent Sundanese and Javanese shows that the two ethnic groups are at a low level in exploring their ethnicity. The ability to explore ethnicity is an important part of cultural intelligence (Nugraha, 2019) and an important part of empathy is needed to understand what is being thought and felt by others who are from their ethnicity and who are of different ethnicity (Matsangidou, et al., 2018). Furthermore, the results of a study conducted by Nurwati & Rosilawati (2017) found that there were contradictory perceptions between Javanese and Sundanese people. An example is in dance culture. For Sundanese people, dancers are interpreted as "ronggeng" which connotes poorly. Whereas in Javanese society, dancers are referred to as "bedaya" which means respectable community. Differences in perceptions of cultural diversity can trigger conflict, if not based on empathy in the life of a plural society (Gonรงalves, et al., 2016;Klimecki, 2019;Perrone-McGovern, et al., 2014). Accuracy of empathy is important for students because the ability to empathize accurately affects the harmony of social interactions and influences learning success (Faisal & Zuri Bin Ghani, 2015;Yalcin-Tilfarlioglu & Arikan, 2012). One way to foster harmony between students and students and lecturers in social interaction in higher education environments is accurate empathy (Bouton, 2016;Dahri, Yusof, & Chinedu, 2018;Daltry, et al., 2018). Accuracy of empathy also has an important role in the success of counselling and guidance services in helping counselees (Atzil-Slonim, et al., 2019;Bayne, Pusateri, & Dean-Nganga, 2012). So the ability to give empathy accurately is important to have by a counsellor who needs to be sharpened since the study in college. Based on the background above, this study aims to determine the accuracy of student empathy from the guidance and counselling study program from the Javanese and Sundanese. The results of this study can be used as basic data to develop techniques and strategies in guidance and counselling services that focus on developing the accuracy of empathy for adolescents in a multicultural environment. METHOD This study aims to describe the empathy accuracy of Javanese and Sundanese students from the Guidance and Counseling Study Program UAD. Samples were taken using purposive sampling with a total of 60 students consisting of 30 Javanese students and 30 Sundanese students from the students of the University of Ahmad Dahlan's guidance and counselling study program. The total sample of men is 31 people and women is 29 people. 
The research sample was selected on the basis of the subjects' domicile: the Javanese group was drawn from guidance and counselling students from the Yogyakarta Special Region and Central Java, including Semarang, Pati, Kedu, Banyumas, Pekalongan, Surakarta, Salatiga, and Magelang, whereas the Sundanese group was drawn from guidance and counselling students from Tatar Pasundan, covering the provinces of West Java, Banten and Jakarta and the cities of Bandung, Bogor, and Tangerang. The instrument used was the empathy accuracy scale, with a reliability value of 0.937. The measured aspects of empathy consist of cognitive aspects (perspective-taking and cognitive accuracy) and affective aspects (emotional concern and emotional accuracy). Data were analysed using descriptive statistics and a one-way ANOVA difference test. FINDINGS AND DISCUSSIONS Students' empathy accuracy data were categorized into three levels, namely high accuracy, medium accuracy, and low accuracy (Azwar, 2012). The categorization of the empathy accuracy data is shown in Table 1. Table 1. Categorization of empathy accuracy by range of scores: X ≥ (Mean + 1.0 SD) = High Accuracy; (Mean − 1.0 SD) ≤ X < (Mean + 1.0 SD) = Medium Accuracy; X < (Mean − 1.0 SD) = Low Accuracy. Based on the analysis of the data from the 60 Javanese and Sundanese students, the empathy accuracy of the guidance and counselling students is listed in Table 2. The results show that most students' empathy accuracy falls in the medium accuracy category (75%), with 16.67% in the high accuracy category and 8.33% in the low accuracy category. The empathy accuracy of guidance and counselling students from the Javanese and Sundanese groups is thus mostly in the medium accuracy category. This indicates that the empathy accuracy of students from the two ethnic groups still needs to be developed, because empathy is a very important ability for counsellors. As prospective counsellors, guidance and counselling students should above all possess the ability to empathize in order to support the success of the counselling process (Corey, 2011; Jones, 2011). Furthermore, the analysis shows that the average accuracy in each aspect of empathy does not differ greatly between Javanese and Sundanese students. The average empathy accuracy of Javanese and Sundanese students in each aspect is shown in Graph 1 (Graph 1. Average comparison of the empathy accuracy of Javanese and Sundanese students, by aspect). For Javanese students, the highest aspect of empathy accuracy is emotional concern (2.78), followed by emotional accuracy (2.76) and perspective-taking (2.74), with cognitive accuracy lowest (2.72). For Sundanese students, the highest aspect is perspective-taking (2.80), followed by cognitive accuracy (2.75), emotional concern (2.72) and emotional accuracy (2.64). The fact that emotional concern is the highest aspect of empathy accuracy among Javanese students means that Javanese students are stronger in understanding and feeling the emotions of others.
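As a methodological aside, the scoring and comparison procedure described above (categorization by mean ± 1 SD and a one-way ANOVA between the two ethnic groups) can be sketched as follows; the data file and column names are hypothetical placeholders for the empathy accuracy scale scores of the 60 students, not part of the original study materials.

```python
# Minimal sketch of the categorization and one-way ANOVA described above.
# "empathy_accuracy.csv" and the column names (score, ethnicity) are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("empathy_accuracy.csv")        # hypothetical data file

# Categorize total scores into high / medium / low accuracy (mean +/- 1 SD)
mean, sd = df["score"].mean(), df["score"].std()

def categorize(x):
    if x >= mean + sd:
        return "High Accuracy"
    if x < mean - sd:
        return "Low Accuracy"
    return "Medium Accuracy"

df["category"] = df["score"].apply(categorize)
print(df["category"].value_counts(normalize=True) * 100)   # percentage per category

# Normality check (one-sample Kolmogorov-Smirnov) and one-way ANOVA between groups
print(stats.kstest(df["score"], "norm", args=(mean, sd)))
groups = [g["score"].values for _, g in df.groupby("ethnicity")]
print(stats.f_oneway(*groups))
```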
Sundanese students, by contrast, score highest on the perspective-taking aspect, which means that Sundanese students are stronger in understanding and placing themselves in the minds of others. These results are in line with research conducted by Wewekang and Puspawuni (2016), which found that Javanese people hold the Javanese cultural value of rumangsa bisa, that is, being able to feel for oneself and for others. The results are also in line with the study conducted by Asep (2010), which found that Javanese people teach from generation to generation that each member of the group should develop virtues such as compassion, kindness, generosity, the ability to feel other people's anxiety, a sense of social responsibility, concern for others, learning to sacrifice for others and valuing that sacrifice highly, and mutual helpfulness. Teaching empathy through habituation is one of the effective methods for developing empathy in individuals (Decety & Svetlova, 2012; Eisenberg, 2000). For Sundanese students, the average for perspective-taking is higher than for the other aspects of empathy accuracy. This result is in line with the studies conducted by Fitriyani, Suryadi & Syam (2015) and by Rinawati (2010), which show that Sundanese culture has for generations taught the cultural values expressed in the Sundanese maxim silih asih (love one another), silih asah (improve one another), and silih asuh (nurture one another). The value of silih asah refers to mutual improvement through education and knowledge; it also teaches that, with the education and knowledge they possess, individuals should have self-awareness and be able to carry out self-reflection that is useful for improving themselves. Perspective-taking is related to the ability to reflect on oneself in order to improve oneself (Gilbert, et al., 2017). People who have the self-awareness for self-reflection tend to have good perspective-taking abilities (Emen & Aslan, 2019; Gilbert, et al., 2017), and people who have self-awareness tend to avoid egocentrism (Abbate, Boca, & Gendolla, 2016). People who like to reflect on themselves in order to improve themselves will have good perspective-taking in understanding the conditions and situations experienced by others (Gerace, et al., 2017; Moreira, DeSouza, & Guerra, 2018). Before the ANOVA difference test, the empathy accuracy data of the Javanese and Sundanese students were tested for normality. The normality test, analysed with the one-sample Kolmogorov-Smirnov test, showed an Asymp. Sig. (2-tailed) of 0.070 > 0.05, meaning that the students' empathy accuracy data are normally distributed. Furthermore, the ANOVA difference tests of empathy accuracy between Javanese and Sundanese students showed mean differences in each aspect of empathy accuracy, but these differences were not significant. The results of the difference tests for Javanese and Sundanese students are listed in Table 3. Based on the analysis, the empathy accuracy of Javanese and Sundanese students did not differ significantly.
The one-way ANOVA difference test on the total empathy accuracy scores of students from the two ethnic groups shows a significance value of 0.821 > 0.05, meaning that there is no significant difference between the empathy accuracy of Javanese students and Sundanese students. Viewed by aspect, there are differences in the average empathy accuracy of Javanese and Sundanese students, but the differences in each aspect are likewise not significant. In the perspective-taking aspect, the average for Javanese students was 2.74 and for Sundanese students 2.80, with a significance value of 0.426 > 0.05; there is thus a difference in average perspective-taking between Javanese and Sundanese students, but it is not significant. In the cognitive accuracy aspect, the average for Javanese students was 2.72 and for Sundanese students 2.75, with a significance value of 0.729 > 0.05; the difference in average cognitive accuracy is not significant. In the emotional concern aspect, the average for Javanese students was 2.78 and for Sundanese students 2.72, with a significance value of 0.572 > 0.05; the difference in average emotional concern is not significant. In the emotional accuracy aspect, the average for Javanese students was 2.76 and for Sundanese students 2.64, with a significance value of 0.181 > 0.05; the difference in average emotional accuracy is not significant. The absence of a significant difference in empathy accuracy between Javanese and Sundanese students may be explained by the fact that both ethnic groups hold cultural values of empathy that are taught across generations. Sundanese culture has values that are held in high esteem by the Sundanese people, as reflected in the maxim silih asih (mutual love), silih asah (mutual improvement) and silih asuh (mutual care), which is taught from generation to generation, among other ways through parenting patterns (Fitriyani et al., 2015; Rinawati, 2010). Likewise, in Javanese culture children are taught from childhood about compassion and helping within the family through the role of parents (Wewekang & Puspawuni, 2016). One of the prominent features of Javanese society is tulung-tinulung (mutual help). These values manifest in behaviour across all community activities, whether in development work or in other activities known as sambatan, from the word sambat (to ask for help), that is, helping or working together for others without monetary payment. These values of empathy are taught from generation to generation (Lestari, 2016). Furthermore, when the empathy accuracy data of Javanese and Sundanese students are viewed by gender, the highest average aspect of empathy accuracy for women is emotional concern, while for men it is cognitive accuracy. The data on differences in the empathy accuracy of Javanese and Sundanese students by gender are listed in Table 4. The analysis shows that, when viewed by sex, there are differences in each aspect of empathy accuracy, but these differences are not significant in every aspect.
In the perspective-taking aspect, there is no significant difference between the empathy accuracy of male and female students, with a significance value of 0.0519 > 0.05. The cognitive accuracy aspect shows a significance value of 0.096 > 0.05, meaning that there is no significant difference in cognitive accuracy between male and female students. Furthermore, the analysis shows significant differences in the aspects of emotional concern and emotional accuracy. The emotional concern aspect shows a significance value of 0.019 < 0.05, meaning that there is a significant difference between the emotional concern of male and female students. The emotional accuracy aspect shows a significance value of 0.046 < 0.05, meaning that there is a significant difference between the emotional accuracy of male and female students. The analysis also shows that the average empathy accuracy of female students is higher than that of male students: the average empathy accuracy of female students is 163.65 and that of male students 154.93, with a significance value of 0.049 < 0.05. Men tend to be higher in cognitive accuracy and women in the emotional aspects. This result is in line with Goleman (2007), who states that in Western culture women are on average better able to empathize than men and tend to share the feelings of those around them; for example, when someone nearby feels sad or happy, women tend to feel it too. The results are also in line with the study by Rueckert et al. (2011), which showed that among students at Northeastern Illinois University female students scored higher on emotional empathy than male students, and with the study by Mastre, Navarro, Samper, & Porcar (2009), which found that female empathy responses are greater than male empathy responses among adolescents in Spain. However, the results are not in line with the study by Fischer, Kret, & Broekens (2018), which found that men and women are equally able to capture the emotional conditions experienced by others. CONCLUSION AND RECOMMENDATION The results of the study show that the empathy accuracy of Javanese and Sundanese students does not differ significantly. This may be because the two ethnic groups share cultural values of empathy that are taught across generations. The study also reveals that the highest aspect of empathy accuracy for Javanese students is emotional concern, while for Sundanese students it is perspective-taking. This means that Javanese students are stronger in understanding and feeling the emotions of others, while Sundanese students are stronger in understanding and placing themselves in the minds of others. In addition, viewed by gender, the empathy accuracy of female students is higher than that of male students. The results of this study can be used as a basis for developing techniques and strategies in guidance and counselling services that focus on developing the accuracy of empathy in adolescents.
Investigating the Equivalent Plastic Strain in a Variable Ring Length and Strut Width Thin-Strut Bioresorbable Scaffold Purpose The ArterioSorb™ bioresorbable scaffold (BRS) developed by Arterius Ltd is about to enter first in man clinical trials. Previous generations of BRS have been vulnerable to brittle fracture, when expanded via balloon inflation in-vivo, which can be extremely detrimental to patient outcome. Therefore, this study explores the effect of variable ring length and strut width (as facilitated by the ArterioSorb™ design) on fracture resistance via analysis of the distribution of equivalent plastic strain in the scaffold struts post expansion. Scaffold performance is also assessed with respect to side branch access, radial strength, final deployed diameter and percentage recoil. Methods Finite element analysis was conducted of the crimping, expansion and radial crushing of five scaffold designs comprising different variations in ring length and strut width. The Abaqus/Explicit (DS SIMULIA) solution method was used for all simulations. Direct comparison between in-silico predictions and in-vitro measurements of the performance of the open cell variant of the ArterioSorb™ were made. Paths across the width of the crown apex and around the scaffold rings were defined along which the plastic strain distribution was analysed. Results The in-silico results demonstrated good predictions of final shape for the baseline scaffold design. Percentage recoil and radial strength were predicted to be, respectively, 2.8 and 1.7 times higher than the experimentally measured values, predominantly due to the limitations of the anisotropic elasto-plastic material property model used for the scaffold. Average maximum values of equivalent plastic strain were up to 2.4 times higher in the wide strut designs relative to the narrow strut scaffolds. As well as the concomitant risk of strut fracture, the wide strut designs also exhibited twisting and splaying behaviour at the crowns located on the scaffold end rings. Not only are these phenomena detrimental to the radial strength and risk of strut fracture but they also increase the likelihood of damage to the vessel wall. However, the baseline scaffold design was observed to tolerate significant over expansion without inducing excessive plastic strains, a result which is particularly encouraging, due to post-dilatation being commonplace in clinical practice. Conclusion Therefore, the narrow strut designs investigated herein, are likely to offer optimal performance and potentially better patient outcomes. Further work should address the material modelling of next generation polymeric BRS to more accurately capture their mechanical behaviour. 
Observation of the in-vitro testing indicates that the ArterioSorb™ BRS can tolerate greater levels of over expansion than anticipated. INTRODUCTION Bioresorbable scaffolds (BRS) offer several potential benefits over conventional metallic stents including a reduced requirement for antiplatelet pharmacology, the reduction of late thrombosis, restoration of physiological vasomotion and reduced complication of repeat intervention. [3,10] To date, first generation BRS have not demonstrated any significant advantages over drug eluting stents (DES), and, in fact, have so far reported a higher incidence of device thrombosis up to 3 years post intervention. However, between 3 and 5 years post intervention, the event rates were not significantly different from DES, and fewer scaffold thromboses were reported. [10,21,22] A possible explanation for this performance relates to design constraints. Specifically, given the mechanical inferiority of polymeric materials (lower elastic stiffness and yield strength) compared to metals, the use of thick struts, often in excess of 150 μm, was necessary in early generations of BRS. Fracture of the scaffold struts, post implantation, was also regularly reported in first generation devices due to the brittle behaviour of BRS, particularly compared with metallic DES. [15,34] Therefore, the challenge for the next generation of BRS, some of which have already provided promising initial data, [20,23,36] lies in reducing strut thickness without loss in mechanical performance, particularly radial strength, as well as facilitating large over expansions during post-dilatation without inducing strut fracture. [6] Another important consideration for all coronary scaffolds is side-branch access. [27] Given the recommended 'provisional stenting' technique for coronary bifurcations, [4] it is important that the deformation of struts precluding side-branches is reduced to minimise the stress induced on the vessel wall. Excessive damage to the vessel walls caused by the placement of the stent/scaffold is commonly linked with poor clinical outcomes, particularly restenosis, [16,19] due to the onset of neointimal hyperplasia. [14,24] Maximising the open cell area of the scaffold could help to achieve a reduction in damage to the vessel wall. The equivalent plastic strain, PEEQ, developed in the struts of BRS in balloon expansion is a scalar metric that provides insight into the state of plastic deformation in a structure. Plastic strain is inherently relied upon by coronary stents/scaffolds to maintain their target diameter in the diseased vessel and avoid excessive elastic recoil, and so this metric will help determine the mechanical performance of BRS. Migliavacca et al. [26] investigated the effect of a number of design variables on the performance of a slotted tube stent, including consideration of the PEEQ. However, only the total PEEQ was considered rather than its distribution across the scaffold struts. Additionally, Wang et al. [37] optimised the design of a BRS by seeking to minimise the concentration of PEEQ at the crown apex to homogenise the strain distribution and related the PEEQ observed in silico to in-vitro observations of stress crazing, first reported by Radu et al. 
[33] We propose that PEEQ is an important but relatively poorly understood metric in the case of coronary scaffolds, as it is rarely considered in the literature. Assessment of the PEEQ will help to provide insight into the avoidance of brittle strut fracture. Excessive levels of strain, a surrogate of PEEQ, will lead to strut fracture, whilst insufficient PEEQ will result in higher levels of elastic recoil, due to the limited amount of permanent deformation in the scaffold struts. Therefore, understanding the relationship between scaffold geometry, PEEQ and mechanical performance is extremely important. Specifically, understanding the distribution of PEEQ across the crown width and around the scaffold ring, where PEEQ is greatest, is critical. The ArterioSorb™ BRS is an open cell thin-strut coronary scaffold manufactured by solid phase orientation of poly-l-lactic acid (PLLA). [2,11] It comprises a single closed cell at the central rings with subsequent open cells either side. Whilst the original BRS design incorporated variable ring lengths, this work investigates the combined effects of variable ring length and variable strut width. No other study appears to have reported these design variables to explore the mechanical performance of a BRS and improve the associated fracture resistance. Therefore, we utilised geometry control to investigate the distribution of equivalent plastic strain in the scaffold struts of five different designs, representing modifications to a baseline design (referred to as design 5) which is based upon the open-cell ArterioSorb™ design. Finite element analysis (FEA) was used to simulate the crimping, balloon expansion and radial strength testing which mimicked the in-vitro testing of scaffolds by Arterius Ltd. Results from the bench testing of the baseline design were used to validate the in-silico data. Over expansion of the baseline design as well as the expansion of previous generations of the ArterioSorb™ BRS were considered to provide further insight into BRS mechanical behaviour, particularly related to the adverse effects observed in the over expansion of BRS. Specific radial strength (SRS), recoil (R %), cell area (CA) and final diameter (FD) were considered along with average maximum PEEQ in the crowns of the scaffold end rings (PQ max) to appraise the scaffold performance. Also, paths in particular locations on the scaffold geometry were defined along which the equivalent plastic strain was observed, to aid understanding of its distribution across critical locations in the scaffold. 3D Geometry The geometry of the central closed ring, configured to represent the ArterioSorb™ BRS, [11] is shown in Fig. 1. The scaffold has a constant ring length of 0.7 mm and a strut width of 0.17 mm, increased to 0.22 mm at the crown apex. The shape of the rings is defined by a cosine function, where the parameter L_Ring denotes the ring length and x is the phase parameter, adjusted such that an outer diameter (OD) of 2.54 mm is achieved using eight crown units. The values of x and y give the position in the circumferential and axial directions, respectively. This function defines the spine of the scaffold, over which the crown and strut widths are laid. The scaffold consists of closed cells between its two central rings with open cells located either side. 
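The exact cosine expression is not reproduced in the text; the short sketch below shows one plausible form consistent with the description (eight crown units around a 2.54 mm outer diameter, with a peak-to-trough amplitude equal to the ring length), and should be read as an assumption rather than the authors' equation.

```python
# Plausible reconstruction of the cosine "spine" described above: eight crown
# units around a 2.54 mm outer diameter, peak-to-trough amplitude L_RING.
# The exact equation is not given in the text, so this functional form is assumed.
import numpy as np

OD = 2.54          # nominal outer diameter, mm
L_RING = 0.7       # ring length (peak to trough), mm
N_CROWNS = 8       # crown units per ring

circumference = np.pi * OD
x = np.linspace(0.0, circumference, 400)        # circumferential position, mm
omega = 2 * np.pi * N_CROWNS / circumference    # phase parameter giving 8 units
y = 0.5 * L_RING * np.cos(omega * x)            # axial position of the spine, mm

# y spans [-L_RING/2, +L_RING/2], i.e. 0.7 mm peak to trough, as specified.
print(round(y.max() - y.min(), 3))
```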
To explore variable ring length and strut width designs, design variables are defined for the ring length factor (F_RL) and the strut width factor (F_SW), which define the rates of change of the ring length (L_Ring) and strut width (S_width). L_Ring and S_width for each ring are given in terms of these factors, where n denotes the ring number, which is zero at the central closed ring and increases symmetrically towards the rings at either end of the scaffold. The parameterisation of the geometry is such that the inner radius of the crown is not fixed. Therefore, as F_SW is increased, a wider crown results in a smaller radius of curvature at the inside of the crown. The following parameters remain constant across all designs: ring length of the central closed rings (0.7 mm, measured peak to trough for the underlying spine of the scaffold), ratio of the crown to strut width (220/170), thickness in the radial direction (0.095 mm), number of rings (12) and nominal diameter (2.54 mm). The 3D geometry of the baseline scaffold design is shown in Fig. 2. Five scaffold designs were investigated, with the baseline design referred to as design 5. The design variables were chosen such that the ring length and strut width at the end rings were not excessively large or small for the scaffold's intended expansion diameter of 3.8 mm. Table 1 presents the geometrical parameters for each of the designs as well as the dimensions at the scaffold end ring. Constitutive Material Model The extruded PLLA tubes from which the polymer backbone of the ArterioSorb™ is manufactured are pre-processed using biaxial die drawing, whereby the tube is heated above its glass transition temperature on an expanding mandrel. This induces crystallinity and orientates the polymer chains in both the axial and circumferential directions. As a result, the PLLA displays significant anisotropy with higher strength and stiffness in the axial direction, due to increased polymer chain crystallinity in this direction. The anisotropic plastic potential material model, as used by Pauck and Reddy [31] and Blair et al., [7] utilises stress-strain data obtained by Arterius Ltd (Leeds, UK) from uniaxial tensile tests of dogbone shaped specimens cut in both the axial and circumferential directions from PLLA tubes. The material's elastic behaviour is completely described by the bulk modulus (K) and the shear modulus (G), where the Young's modulus (E) and Poisson's ratio (ν) were input into Abaqus/CAE (DS SIMULIA) as 3250 MPa and 0.3, respectively. The anisotropic material model uses Hill's yield function to define the plastic behaviour of the material; in Cartesian coordinates it is defined in terms of constants computed from the ratios σ_ij²/σ_0² and τ_ij²/τ_0², which give the ratio of the yield stress in the direction ij (where i and j = 1, 2, 3) to the yield stress in the tabulated stress-strain data used to define the plastic behaviour. The stress-strain data for the circumferential direction was entered into Abaqus/CAE (DS SIMULIA), before the ratio of axial to circumferential yield stress was used to scale the plastic yield stresses for the axial direction. The yield stress ratio for the radial and shear directions was assumed to be unity. Figure 3 shows the stress-strain data obtained from the tensile tests along with the material model implemented in the FEA simulations. The material properties in the axial direction should be taken into account, as the scaffold crowns undergo bending in expansion and so components of both axial and circumferential deformation are present. 
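For reference, the elastic constants and Hill's yield function referred to above take the following standard forms (a reconstruction of the omitted equations, not copied from the source; the constants F, G, H, L, M, N of the yield function are the usual Hill constants and are distinct from the shear modulus G):

$$ K = \frac{E}{3(1-2\nu)}, \qquad G = \frac{E}{2(1+\nu)}, \qquad E = 3250\ \mathrm{MPa},\ \ \nu = 0.3, $$

$$ f(\sigma) = \sqrt{F(\sigma_{22}-\sigma_{33})^2 + G(\sigma_{33}-\sigma_{11})^2 + H(\sigma_{11}-\sigma_{22})^2 + 2L\sigma_{23}^2 + 2M\sigma_{31}^2 + 2N\sigma_{12}^2}\,, $$

with, for example, $F = \tfrac{1}{2}\left(R_{22}^{-2}+R_{33}^{-2}-R_{11}^{-2}\right)$ and analogous expressions for the other constants, where $R_{ij}=\bar\sigma_{ij}/\sigma_0$ for the direct components and $R_{ij}=\bar\tau_{ij}/\tau_0$ for the shear components are the yield stress ratios. Setting the radial and shear ratios to unity then leaves only the axial-to-circumferential ratio to be specified, as described above.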
However, from previous experience of in-silico scaffold expansion, it is clear that the stresses and strains in the axial direction are dominated by those in the circumferential direction. Therefore, the impact of this difference is small due to the limited strains developed in the axial direction. Scaffold Model The scaffold was meshed using C3D8R reduced integration elements. These are 8-node 3D stress elements with trilinear shape functions. A mesh refinement study was undertaken where the final diameter and reaction force of the scaffold nodes were found to deviate by less than 4% when refining the mesh from a seed size of 0.04 mm to 0.02 mm, confirming that mesh convergence was achieved. Therefore, a mesh size of 0.04 mm was selected. Folded Balloon Model A tapered tri-folded balloon model, similar to that used by De Beule et al., [12] of length 20 mm was used to expand the scaffold. The isotropic elastic material model was used to describe the behaviour of the balloon where E and m were taken as 850 MPa and 0.4, respectively. Bilinear 4-node quadrilateral, reduced integration membrane elements were used to mesh the geometry. Crimp Model A cylindrical surface of length 20 mm was used to crimp and test the specific radial strength of the scaffold. Again, this utilised the isotropic elastic material model where E and m were taken as 5000 MPa and 0.3, respectively. The crimping surface was meshed using linear 4-node quadrilateral, reduced integration surface elements. Simulation Setup Simulation of the crimping, free expansion and specific radial strength tests was conducted to mimic the mechanical in-vitro testing of scaffold platforms Fig. 4. The simulation comprised the following: 1. Balloon folding. The tapered balloon was wrapped such that it resembled a standard trifolded balloon. 2. Crimping. The scaffold was crimped from its nominal OD of 2.54 mm to 1.10 mm via the displacement driven crimping surface. The surface was then removed to allow the scaffold to recoil. 3. Expansion. The scaffold was expanded by inflating the balloon, pressurised to 0.75 MPa. The balloon was then deflated to allow the scaffold to recoil. 4. Crushing. The scaffold was crushed radially via the displacement driven crimping surface to an OD of 2 mm. The Abaqus/Explicit (DS SIMULIA) solver, as used in many similar studies, [12,17,29] was chosen due to its ability to handle complex contact interactions. Whilst the solution is time dependant, the inertia forces are not dominant and so the simulation can be approximated as a quasi-static process. Therefore, the kinetic energy of the system should remain low throughout the simulation and should not exceed 5% of the internal energy. [1] The step lengths and time increment were taken as 0.06 s and 2e-7 s, respectively, based upon previous experience. The FEA simulations were submitted to the University of Southampton Iridis 4 high performance computing cluster. Each simulation was run on a 2.6 GHz Sandybridge 16-core node and took approximately 6 hours to complete. In-Vitro Mechanical Testing The in-vitro testing referred to herein was conducted by Arterius Ltd (Leeds, UK) on the baseline design (design 5) and previous generations of the Arte-rioSorb TM . Firstly, the scaffold was crimped incrementally on to a balloon-catheter at 45 C to an OD of 1.1 mm. 
The scaffold was then expanded in stages using a 3.0 mm diameter tri-folded balloon, inflated to 1.2 MPa, followed by a tri-folded balloon of diameter 3.5 mm, inflated to 0.75 MPa, intended to expand the scaffold to a target OD of 3.8 mm. This was conducted whilst the scaffold was submerged in a water bath, heated to 37 °C to simulate the haemodynamic environment. This multi-step process is similar to that used in a clinical scenario, where post-dilatation of the scaffold is commonplace. Once the post-dilatation balloon was deflated and removed, the scaffold elastically recoiled and was then radially crushed to an OD of approximately 2 mm using a Blockwise TTR2 radial force testing machine. The diameter of the scaffold was measured at each stage of the crimping and expansion process, separately for each scaffold ring, using a laser measurement system. Cell Area Cell area (CA) is the space enclosed by a single open cell at the end of the device. The choice of end is arbitrary given the scaffold is axially symmetric. Final Diameter Final diameter (FD) measures the OD of the scaffold once it has elastically recoiled post balloon-expansion. The diameter of an end ring of the scaffold was measured at multiple locations around the circumference to calculate a mean value. Percentage Recoil Percentage recoil (R %) is defined as the percentage difference between the maximum diameter (MD) of an end ring, taken at the midway point of the expansion step where the balloon reaches the maximum inflation pressure, and the final diameter (FD) at the end of the expansion step. Recoil provides a convenient method of comparing the expected FD of the scaffolds given that each design will achieve a different MD in the simulations due to their different SRS. In clinical practice, the balloon inflation pressure would be varied to ensure that a deployed scaffold was suitably expanded. Specific Radial Strength Specific radial strength (SRS) is the maximum reaction force of the crimping cylinder divided by the scaffold length. It is defined as the maximum, over the crushing step, of the sum of the reaction forces F_i of the 3240 nodes that constitute the crushing cylinder, divided by the length l of the scaffold. Equivalent Plastic Strain The distribution of PEEQ was analysed on paths across each of the crowns' widths (Fig. 5a) and along paths around the end ring of the scaffold (Fig. 5b). The metric PQ max is defined as the average of the maximum PEEQ at each end ring crown, found using the ring path. An end ring was chosen as this provides the strongest contrast in performance between different designs. The equivalent plastic strain is defined as the time integral of the equivalent plastic strain rate, which depends upon the material model in use, in this case utilising the Mises definition in terms of the tensor containing the rate of plastic strain in each direction. Due to the circumferential symmetry of the scaffold geometry, the PEEQ data along the scaffold end ring paths was averaged across each of the four repeating units that form an open scaffold ring. Figure 6 shows a repeating unit of the end ring of the scaffold, highlighted red. To assess the PEEQ across the crown width, a fourth order polynomial regression model was used to fit to the data. Using an average of the eight PEEQ values (one per crown) at each location across the crown width was not possible because of the differences in mesh discretisation between crowns due to the use of a swept mesh. 
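Written out, the verbal definitions above correspond to the following expressions (reconstructed here for clarity; the PEEQ definition is the standard Mises form used by Abaqus):

$$ R_{\%} = \frac{MD - FD}{MD}\times 100, \qquad SRS = \frac{\max_t \sum_{i=1}^{3240} F_i(t)}{l}, $$

$$ \bar{\varepsilon}^{\,pl} = \int_0^{t} \dot{\bar{\varepsilon}}^{\,pl}\,\mathrm{d}t, \qquad \dot{\bar{\varepsilon}}^{\,pl} = \sqrt{\tfrac{2}{3}\,\dot{\boldsymbol{\varepsilon}}^{pl}:\dot{\boldsymbol{\varepsilon}}^{pl}}\,. $$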
This issue did not affect the ring paths, and so an average value at each location along the repeating unit was used to aid visualisation and analysis of the data. The data for the eight crown paths was divided into two groups for model fitting: those with and without a connector attached, as this significantly affected the PEEQ distribution. RESULTS Table 2 shows the performance metrics for each of the five designs simulated. Design 2 has the greatest SRS and the smallest FD. However, it also presents the largest level of PQ max at the scaffold end rings and the smallest CA. In contrast, design 5 offers a similar level of SRS to design 2 yet a greatly reduced level of PQ max and a larger CA. This broadly describes the design trade-off that might be expected. Short and wide struts will offer good levels of radial strength and recoil whilst long narrow struts provide improved cell area and plastic strain characteristics. Interestingly, design 2 (a short-wide strut design) appears not to offer a significant benefit over design 5 in terms of SRS, which is explored below. Figure 7 displays a comparison of the force/diameter curves for the radial crushing step of the in-vitro and in-silico tests for scaffold design 5. Whilst the stiffness in the two cases is very similar, shown by the initial linear gradient of the curves, the in-silico prediction significantly overestimates the maximum force required to crush the scaffold. Figure 8 gives the PEEQ distribution in four rings of each of the five scaffolds in expansion when the balloon is at maximum inflation pressure. It is evident that the difference in geometry between each design yields significantly different distributions of PEEQ. The short-wide strut design (design 2, Fig. 8b) shows the greatest level of PEEQ with an average maximum of 1.2 across the eight crowns. Design 2 displays significant splaying of the crowns in the radial direction, whilst design 4 (Fig. 8d) shows twisting at the crown apex as the scaffold is expanded. Design 3 (Fig. 8c) displays the lowest level of PEEQ development amongst the five designs, where PEEQ at the inside of the crown does not exceed 0.5. Figure 9 details the PEEQ along the path denoted in Fig. 5b, averaged for a single repeating unit, from an end ring of each of the five scaffold designs. The error bars show the maximum deviation of the average value from the data points. Designs 2, 4 and 5 in Figs. 9b, 9d and 9e, respectively, show an alternating pattern of PEEQ at the inside of each crown due to the presence of a connector. This is evident in the two different amplitudes of the largest curves, which denote the inside of the crown, where the largest level of PEEQ exists. This effect does not appear to be present in the narrow strut designs, shown in Figs. 9a and 9c, where the curves denoting the inside of the crown are of equal amplitude. In the case of each design, with the exception of design 4, the level of PEEQ on the outside of the crown is symmetrical about the crown attached to a connector. Indeed, the small 'v' shaped peaks are of equal amplitude on each side of the large amplitude peak. In the case of design 4, the PEEQ distribution on the outside of the crown is asymmetric about the connector. This is evident via the right side of each 'v' shaped peak in Fig. 9d displaying a larger value than the left side. It is also evident that the wide strut designs show greater variability in maximum PEEQ compared with the narrow strut designs, evidenced in the larger error bars in Figs. 
9b and 9d. Figure 9b also confirms that the PEEQ in design 2 drops to zero over only a small portion of the ring path. Figure 10 displays the values of PEEQ across the two groups of crowns on the end ring of each scaffold design using the path in Fig. 5a. As expected from Figs. 8 and 9, designs 2 and 4 display significantly greater levels of PEEQ at the inside of the crown and show a greater level of penetration of PEEQ across the crown. As per Fig. 9, the presence of a connector reduces the level of PEEQ developed at the inside of the crown. This is particularly evident in design 5 (Fig. 10e), where there is a large difference in maximum PEEQ between the two cases. This is also present in designs 2 and 4 but less noticeable, in part due to the large variability of PEEQ. DISCUSSION Firstly, a comparison of the in-silico and in-vitro tests for scaffold design 5 shows that the simulations poorly predict the scaffold's SRS and R %, predominantly attributed to the constitutive material model. However, the FD of the scaffold predicted by the model is 3.57 mm, compared to 3.71 mm according to the in-vitro test, which gives a percentage error of less than 4%. Whilst an elasto-plastic model utilising Hill's yield function has been employed in previous computational studies of polymeric BRS, [7,31] Hoddy et al. [18] have demonstrated the challenge of accurately predicting both the elastic recoil and radial strength of a scaffold based upon the thin-strut ArterioSorb™ using this model framework. However, their novel user-defined material model predicted radial strength within 1.1% of the analogous in-vitro test for only a small reduction in accuracy of the post-expansion diameter prediction. To date, more advanced material models that capture the viscous response of PLLA have been used to explore polymeric BRS in the context of FEA. [5,8,9] However, viscoelastic-plastic models that utilise a parallel network rheology require a greater effort in terms of calibration to the uniaxial tensile data, and those available in Abaqus/Explicit do not allow for any description of anisotropy, which appears critical in the PLLA used to construct the ArterioSorb™. Eswaran et al. [13] very accurately predicted the force/displacement profile of a single ring early generation BRS when subjected to radial expansion and crushing by employing a user-defined anisotropic viscoelastic-plastic material model. Therefore, whilst the elasto-plastic material model employing Hill's yield function represents a compromise, it was considered beyond the scope of this research to develop an alternative material model that can more accurately describe the material anisotropy and capture its viscous time-dependent behaviour. However, it is evident from the literature that improvements in material modelling will result in more accurate predictions of scaffold mechanical behaviour. Figure 11 shows a comparison of the in-silico and in-vitro expansion of scaffold design 5 at the maximum and final diameters. The in-silico scaffold is shown as a red overlay upon an image of the in-vitro test. Observing Fig. 11b, it can be seen that the in-vitro scaffold elastically recoils less than the in-silico scaffold, as per Table 2, as the open cell crowns remain more greatly straightened. (In Table 2, the in-vitro test and baseline results are highlighted in bold to aid comparison.) 
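For concreteness, the percentage errors quoted here for the final and maximum diameters follow directly from the reported values:

$$ \frac{3.71 - 3.57}{3.71}\times 100 \approx 3.8\% < 4\% \ \ \text{(final diameter)}, \qquad \frac{3.85 - 3.81}{3.81}\times 100 \approx 1.0\% \ \ \text{(maximum diameter)}. $$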
Whilst prediction of the final diameter yields less than a 4% error, the maximum diameter is predicted with even greater accuracy as 3.85 mm compared to 3.81 mm as reported by the invitro test, an error of 1%. Differences are noticeable in the scaffold shapes, particularly in Fig. 11b at the central closed ring. In addition to the limitation of the material model, this discrepancy could also be a result of the interaction between the scaffold and balloon. The in-vitro test results in the crimped scaffold remaining firmly in contact with the balloon after the crimp is removed whilst the in-silico test leaves a small gap between the crimped scaffold and balloon. Moreover, the friction between the scaffold and balloon is set using a friction coefficient of 0.1 yet this value is difficult to validate and may be a source of error. The scaffold also appears to bend slightly in the in-vitro test which results in the left hand rings of the in-silico scaffold misaligning with the in-vitro scaffold in Fig. 11b. However, prediction of the scaffold shape generally appears most accurate at the end rings, the location at which the values of PEEQ are extracted. This gives improved confidence regarding the prediction of the local strains at this critical location. It is evident in Fig. 7 that the scaffolds display a similar level of initial stiffness in the radial crushing process but the in-silico prediction significantly overshoots the maximum force required to crush the scaffold from its recoiled diameter. In Fig. 7 the force/diameter curves are displaced from each other in the x direction due to the scaffolds reaching different final diameters leading to the crimp making contact with the scaffold at different points in the two tests. After the maximum force is applied the scaffold is considered to have failed. Therefore, predicting the force/diameter relationship after this point it less critical than predicting the maximum force and stiffness. Observation of the five scaffold designs makes clear that variations in ring length and strut width, of approximately +/-20% from the baseline design, along the scaffold length yield a change in mechanical performance, as per Table 2. This results in SRS ranging from 1.01 N/mm to 1.54 N/mm and R % varying from 4.14% to 10.23%. There is also a significant variation in PQ max , from 0.67 to 1.20. Referring to Table 2, short-wide strut designs afford improved levels of SRS and R % due to the increased levels of PEEQ developed at the crown apex whilst long-narrow strut designs naturally accommodate superior side branch access with a reduced risk of strut fracture due to the lower levels of plastic deformation. The use of defined paths upon which to observe the PEEQ, significantly aids the quantification and analysis of the subtle differences in PEEQ distribution between each scaffold. Designs 1 and 5 show similar distributions of PEEQ, according to Fig. 8. This is unsurprising given they both have the same ratio of ring length to strut width. Design 1 has a 15% lower SRS than design 5, due to the lower level of PEEQ developed at the crown apex, evidenced in Fig. 10 where the PQ max is approximately 0.6 for design 1, compared with 0.8 for design 5. Interestingly, in the case of design 5, PQ max is greater than the ultimate tensile strain (UTS) of PLLA, which is 0.7 in the circumferential direction, as per Fig. 3. This indicates the ability of the scaffold to tolerate large levels of plastic strain in expansion. 
Figure 12 strengthens this argument as it shows design 5 significantly over-expanded without displaying evidence of strut fracture. Many of the crowns appear to have been completely straightened yet do not display any whitening of the plastic known as 'crazing', first observed in BRS by Radu et al. [33] Indeed, this validates design 5, and, by extension, design 1, as viable designs that tolerate significant plastic strain, even when over-expanded, as is commonplace in clinical practice. [15] Figure 10(a) and 10(e) show two clear patterns of PEEQ penetra-tion across the crown width. It is evident that the presence of a connector attached to a crown restricts the ability of the crown to open in expansion, limiting the development of plastic strain. This will result in crowns in any given scaffold ring displaying alternating stress levels which could worsen uneven resorption of the scaffold where resorption occurs fastest at the highly stressed crowns. [25,35] This in turn could exacerbate the long term blood clotting risk if scaffold struts protrude into the blood flow, providing sites for thrombus formation. Therefore, the use of a closed cell scaffold design, whilst reducing the cell area afforded by the scaffold, may improve the likelihood of even resorption of BRS. In contrast to designs 1 and 5, design 3 displays a very low level of plastic strain developed in the scaffold struts. According to Figs. 8 and 9, the value of PQ max does not exceed 0.5. Figure 10(c) shows that the PEEQ does not penetrate far into the straight sections of the strut, leading to the poor R % and SRS performance in design 3. However, this design does have a large CA to facilitate improved side branch access. The presence of a connector does not appear to influence the PEEQ distribution in the narrow strut design. Referring to Fig. 10c there is no difference in the PQ max between the two groups, most likely due to the scaffold being under expanded for its ring length. Referring to the wide strut designs (designs 2 and 4), significantly more plastic strain is developed in the scaffold struts compared to the narrow strut designs. In addition to the reduced radius of curvature at the inside of the crown apex, this is due to the twisting and splaying of the crowns that occurs in expansion, highlighted in Fig. 13 which shows designs 2 and 4 at maximum balloon inflation pressure. This behaviour is likely to occur due to the high ratio of strut width to strut thickness. As the scaffold is expanded, the conventional cantilever movement in the crown is exchanged for twisting in the radial direction as this provides less resistance to open the scaffold ring. The twisting behaviour has also been observed in vitro when an alternative long ring-length scaffold design was deployed into a silicon coronary artery model, shown in Fig. 14. Whilst any twisting or splaying of the scaffold struts may be expected to be constrained by the vessel wall in vivo, this does not appear to be the case in vitro. The tendency of the struts to twist will heighten the stress exerted on the arterial lining, increasing the risk of neointimal hyperplasia, [14,24] as well as increasing the risk of thrombus formation due to malapposed scaffold struts. Of course, reducing stress exerted on the vessel wall is highly desirable. Indeed, this is highlighted by a number of studies that assess the stress exerted by coronary stents/scaffolds on the arterial layers. 
[28,30,32] This mechanism also explains the relatively poor SRS performance of design 4 as the weaker resistance of the struts in the radial direction is exposed to the crushing force, rather than utilising a component of the stiffer material properties in the axial direction. Observing Fig. 9(b) and 9(d) it can be seen that the PQ max in designs 2 and 4 at the inside of the crown apex varies by approximately +/-20% between crowns, highlighted by the error bars in both Figs. 9 and 10. It is likely the splaying and twisting of the crowns results in the large variation in maximum PEEQ, in part due to the small lateral movement of the location of PQ max . Twisting at the crown apex results in the location of maximum PEEQ moving away from the geometrical centre of the crown, leading to the PQ max values in each repeating unit not aligning. Moveover, the very large rate of change of PEEQ at the inside of the crown apex also leads to differences between each repeating unit. Significantly, only a very small portion of the scaffold struts in design 2 appear to have developed no plastic strain. This is confirmed by Figs. 8b and 9b where the PEEQ has penetrated far into the straight sections of the struts. Although design 2 provides good SRS performance (albeit only marginally higher than design 5) it does present a significant risk of strut fracture, due to the high level of PQ max . Referring to design 4 in Fig. 9d, it is evident that the PEEQ is contained more closely to the vicinity of the crown apex, compared to design 2. This explains the relatively low level of SRS in design 4. However, design 4 still presents a significant risk of strut fracture due to the large value of PQ max developed at the inside of the crown. Figure 10b shows that a significant level of PEEQ develops to around 50% of the crown width and some PEEQ even penetrates the entire crown width. This can be extremely detrimental to the scaffold as it can induce the 'hinging' phenomenon, whereby plastic strain visibly penetrates across the width of the crown, evidenced by the whitening of the plastic from the inner to outer crown radius. Figure 15 shows two different previous generations of the Arte-rioSorb TM BRS, which utilise a helical pattern of connectors, expanded via balloon inflation, one of which, depicted in Fig. 15a, shows the onset of 'hinging' even when expanded to a modest target diameter. This contrasts the scaffold in Fig. 15b which shows no evidence of hinging when expanded to a similar target diameter. Whilst hinging has not been observed to initiate brittle fracture in scaffolds when expanded, it does leave them extremely vulnerable to fatigue failure and will, as previously mentioned, have implications on the rate at which the scaffold resorbs in the crown region. This in turn could lead to protrusion of the scaffold struts into the lumen presenting an increased risk of thrombus formation. CONCLUSIONS Variable ring length and strut width scaffolds were explored via in-silico testing to facilitate improved over expansion tolerance and side branch access of BRS. Comparison with in-vitro data was also conducted to validate the simulations and deepen understanding of the adverse effects that manifest in BRS expansion. Particular attention was given to the development of equivalent plastic strain, PEEQ, in the scaffold struts. 
This was also observed along defined paths, both across the crown widths and along the edges of the scaffold end rings, which highlighted a number of subtle differences in PEEQ distribution between each design. The following conclusions could be drawn: 1. The maximum and final diameters of the scaffold post balloon expansion were accurately predicted by the in-silico test with a percentage error of less than 4%. The specific radial strength and percentage recoil of the scaffold were poorly predicted by the in-silico testing due to the limitations of the elasto-plastic material model and the difficulty in accurately capturing the interaction between the balloon and scaffold. 2. Wide-strut scaffolds offer a significantly increased risk of brittle strut fracture due to the development of high levels of equivalent plastic strain at the inner radius of the crown. The onset of hinging may also occur in wide-strut designs if equivalent plastic strain penetrates the width of the crown. Allied to this consideration is the radial twisting about the crown apex that occurs, a result of the high strut width to strut thickness ratio. This heightens the risk of damage to the vessel wall, reduces the scaffold's radial strength and was also found to result in an asymmetric distribution of PEEQ either side of the connector. (Fig. 14 provides in-vitro evidence of strut twisting about the crown apex in the deployment of a long ring-length BRS into a mock silicon vessel; the initial scaffold expansion was conducted by Arterius Ltd whilst the image was obtained using an optical microscope at the University of Southampton's materials laboratory.) 3. In-vitro experiments suggest that the ArterioSorb™ BRS can tolerate significant over expansions without evidence of brittle fracture or the hinging phenomenon. 4. The presence of a connector attached to a crown will reduce the equivalent plastic strain developed at that crown and alter its distribution around the scaffold ring. This is particularly the case in over-expansion scenarios and for wide strut designs. This will impact the likelihood of brittle fracture and the manifestation of the hinging phenomenon, as well as have implications for the even resorption of the scaffold in-vivo. FUNDING This study was funded by the Engineering and Physical Sciences Research Council.
Fluctuating brane in a dilatonic bulk We consider a cosmological brane moving in a static five-dimensional bulk spacetime endowed with a scalar field whose potential is exponential. After studying various cosmological behaviours for the homogeneous background, we investigate the fluctuations of the brane that leave spacetime unaffected. A single mode embodies these fluctuations and obeys a wave equation which we study for bouncing and ever-expanding branes. II. THE BACKGROUND CONFIGURATION We consider five-dimensional static spacetimes with the usual cosmological symmetries (homogeneity and isotropy) along the three ordinary spatial dimensions. The metric can be written in a form where dΣ² is the metric for maximally symmetric three-dimensional spaces. For simplicity, we will consider only the flat case. With our parametrization (1) of the metric, we implicitly assume that the Killing vector ∂/∂t is time-like and thus that the spacetime is static in the strictest sense. Although the calculations below are given explicitly in this context, it is not difficult to show that the end results will still hold if the coordinate t becomes space-like, i.e. if A² and B² are negative. We assume that the bulk contains a scalar field φ(r) with a potential V(φ). The five-dimensional action for the bulk is given by an expression in which the normalization is chosen so that the scalar field is dimensionless and the potential scales like a square mass. The bulk Einstein's equations, derived from this action, can be written either in terms of the Einstein tensor or in terms of the Ricci scalar; explicitly, they take the form of a set of equations for the metric functions, where a prime denotes a derivative with respect to r. Similarly, the bulk scalar field obeys the Klein-Gordon equation. These equations can be solved for specific potentials V(φ), in particular for exponential potentials as summarized below. A. Explicit solutions For a scalar field potential of exponential form, there exists a simple class of static solutions [21,3]. The full set of static solutions is given in [14], but we will restrict our study to the class of solutions described by the metric (10) with the function (11), where C is an arbitrary constant, and the scalar field configuration φ = −3α ln(R) (12). Note that, in the limit α = 0, the scalar field vanishes while its potential reduces to an effective cosmological constant, and one recovers the well-known Schwarzschild-(A)dS five-dimensional metric. The metric (10) can be expressed in a slightly different form, as in [3], namely after a change of coordinate and a trivial redefinition of time. B. Moving brane Let us now consider the presence of a three-brane moving in the static bulk background (1). Although we are interested, in this section, only in the motion of the homogeneous brane, we already present the general formalism, following [22], which we will use later for the study of brane fluctuations. We define the trajectory of the brane in terms of its bulk coordinates X^A(x^µ), given as functions of the four parameters x^µ which can be interpreted as internal coordinates of the brane worldsheet. One can then define four independent vectors which are tangent to the brane. 
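The omitted metric ansatz and tangent-vector definition can be written, consistently with the functions A, B, R and the coordinates named above (a reconstruction in standard notation rather than the source's own equations), as

$$ ds^2 = g_{AB}\,dx^A dx^B = -A^2(r)\,dt^2 + B^2(r)\,dr^2 + R^2(r)\,d\Sigma^2, \qquad e^A_{\ \mu} \equiv \frac{\partial X^A}{\partial x^\mu}\,. $$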
The induced metric on the brane is simply given by whereas the extrinsic curvature tensor is given by where n A is the unit vector normal to the brane, defined (up to a sign ambiguity) by the conditions It is also useful to express K ยตฮฝ in terms of only partial derivatives, which reads Let us now apply this formalism to the homogeneous brane, which can be parametrized by where we take for the parameter ฯ„ the proper time, i.e. such that The induced metric is thus which shows that the geometry inside the brane is FLRW (Friedmann-Lemaรฎtre-Robertson-Walker) with the scale factor given by the radial coordinate R of the brane. The cosmological evolution within the brane is thus induced by the motion of the brane in the static background. With the parametrization (20), the four independent tangent vectors defined in (15) take the specific form where a dot stands for a derivative with respect to ฯ„ while the components of the normal vector are given by Finally, the components of the extrinsic curvature tensor are given by Assuming Z 2 symmetry about the brane, the junction conditions for the metric read where S ยตฮฝ is the energy-momentum tensor of brane matter and S โ‰ก S ยตฮฝ h ยตฮฝ its trace. Because the brane is homogeneous and isotropic, S ยตฮฝ is necessarily of the perfect fluid form, i.e. S ยต ฮฝ = Diag (โˆ’ฯ, P, P, P ) , where the energy density ฯ and the pressure P are functions of time only. Substituting the above expressions (25) and (26) for the components of the extrinsic curvature tensor, one finds the following two relations: and 1 AB Using the first junction condition (29), the second expression (30) can be reexpressed explicitly as a conservation-like equation for the energy density ฯ:ฯ whereแน˜ โ‰ก R โ€ฒแน™ and Using Einstein's equations (5)(6)(7), this function f (r) can be reexpressed in terms of the scalar field as and the conservation equation takes the form of As we will see in the next subsection, the use of the junction condition for the scalar will enable us to reexpress once more this conservation equation in another form. C. Junction condition for the scalar field In addition to the junction conditions for the metric, we must also ensure that the junction condition for the bulk scalar field is also satisfied. The latter depends on the specific coupling between ฯ† and the brane matter. In order to be more explicit, we now introduce the action for the brane where we assume the metrich ยตฮฝ to be conformally related to the induced metric h ยตฮฝ , i.e. Variation of the total action S = S bulk + S brane with respect to ฯ† yields the equation of motion for the scalar field, which is the Klein-Gordon equation (8) with the addition of a distributional source term since the scalar field is coupled to the brane viah ยตฮฝ . An alternative way to deal with this distributional source term is to reinterpret it as a boundary condition for the scalar field, or rather a junction condition at the brane location which takes the form where ฮพ โ€ฒ โ‰ก dฮพ/dฯ†. Taking into account the Z 2 symmetry and the explicit form for the normal vector (24), one ends up with the condition where all terms are evaluated at the brane location. Moreover, using the first junction condition (29), this relation can be reduced to This junction condition for the scalar field can be substituted in the (non) conservation equation (31) which then readsฯ This relates the energy loss, from the point of view of the brane, to the transverse momentum density, from the point of view of the bulk. 
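The displayed form of the metric junction conditions quoted above was lost in extraction; for reference, the standard Z2-symmetric Israel junction conditions for a brane sourced by the perfect fluid of the text are recalled below. The factor conventions (the placement of kappa squared and of the 1/3) are assumptions rather than the paper's own normalisation.

```latex
% Standard Z2-symmetric Israel junction conditions for the brane;
% factor conventions assumed, not taken from the paper.
\begin{equation}
  K_{\mu\nu} \;=\; -\frac{\kappa^2}{2}\Big(S_{\mu\nu}-\tfrac13\,S\,h_{\mu\nu}\Big),
  \qquad S \equiv S^{\mu\nu}h_{\mu\nu},
  \qquad S^{\mu}{}_{\nu} = \mathrm{Diag}\,(-\rho,\,P,\,P,\,P).
\end{equation}
```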
In fact, this non standard cosmological conservation equation can also be rewritten in the standard form if one introduces the energy densityฯ and pressureP = wฯ, as well as the scale factorรฃ, defined with respect to the metrich ยตฮฝ , which in other contexts would be referred to as the Jordan frame. D. Brane cosmological evolution In order to work with an explicit example, we turn again to the dilatonic bulk solutions given in (10)(11)(12) and try to implement a moving brane in these backgrounds. Taking the square of the junction condition (29), one immediately obtains the generalized Friedmann equation, where we have introduced the notationฯ For ฮฑ = 0, (42) reduces to the well-known brane Friedmann equation with the characteristic ฯ 2 term on the right hand side. As for the scalar field junction condition (39), the radial dependence of ฯ† given in (12) imposes the following constraint between the equation of state ratio w and ฮฑ: If we now assume that w is constant, this constraint implies that the coupling is linear, i.e. ฮพ(ฯ†) = ฮพ 1 ฯ†, in which case the conservation equation (40) can be explicitly integrated to yield where ฯ 1 is a constant. One can then substitute this relation into the Friedmann equation (42) to obtaiแน… with the potential where the coefficients are given explicitly by and the powers by Equation (46) is analogous to the total energy (which vanishes here) of a particle moving in a one-dimensional potential V (R). The case of a brane domain wall, w = โˆ’1, was analysed in [3]. In this case, p 1 = p 2 and the potential is the sum of only two terms. Here, however, we have obtained the equation of motion valid for any equation of state of the form P = wฯ, with w constant. In order to simplify the potential, let us consider the situation ฮฑ 3 = โˆ’C = 0. It is not difficult, from the analysis of the two terms left in the potential V (R), to see that the potential has six distinct shapes, depending on the sign of a 2 and the value of ฮฑ 2 . To classify the various cases, it is convenient to introduce the parameter p defined, for w = โˆ’1, by so that the powers p 1 and p 2 simply read The various cases are then: โ€ข p < โˆ’6 (which implies w < โˆ’2/3 and 0 < p 1 < p 2 ) (see fig. 1); โ€ข โˆ’6 < p < 0 (which implies p 1 < 0 < p 2 ) (see fig. 2); โ€ข p > 0 (which implies p 1 < p 2 < 0) (see fig. 3). In the three cases corresponding to a 2 < 0, the evolution of the scale factor is monotonous because the potential is always negative, whereas for a 2 > 0, the potential vanishes at a nonzero value R c which represents the maximum value of the scale factor during the cosmological evolution. In the latter three subcases, cosmological expansion is thus followed by a collapse. This situation a 2 > 0 can be seen to be equivalent to the supergravity models with a bulk scalar field and an exponential superpotential [6]. It is also worth noticing that when a 2 < 0 the function h(R) parametrizing the metric becomes negative. In that case the coordinate R becomes time-like whereas t becomes space-like and the Killing vector corresponding to translations of t is then space-like. The brane normal vector is then given by n a = ( which is a real quantity as soon asแน™ 2 is large enough. The rest of the analysis remains unchanged. III. BRANE FLUCTUATIONS In this section, we turn to the analysis of the brane fluctuations allowed when the bulk geometry is left unperturbed. 
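As a numerical illustration of the background dynamics just described (before the fluctuation analysis that follows), the short Python sketch below integrates the zero-energy particle equation read off from (46), i.e. Rdot^2 + V(R) = 0. The powers and coefficients of the potential are illustrative placeholders, not the paper's a_i and p_i, but the run reproduces the qualitative bounce-and-recollapse behaviour described for a_2 > 0.

```python
# Minimal sketch: the brane scale factor R(tau) treated as a particle with
# zero total energy in a 1D potential, Rdot^2 + V(R) = 0, hence
# Rddot = -V'(R)/2. The potential below is illustrative only; its powers and
# coefficients are placeholders, not the paper's a_i and p_i.
import numpy as np
from scipy.integrate import solve_ivp

p1, p2, a2 = -2.0, 1.0, 0.5                    # illustrative values

def V(R):
    return -R**p1 + a2 * R**p2                 # vanishes at R_c = (1/a2)**(1/(p2 - p1))

def dVdR(R):
    return -p1 * R**(p1 - 1) + a2 * p2 * R**(p2 - 1)

def rhs(tau, y):
    R, Rdot = y
    return [Rdot, -0.5 * dVdR(R)]

def hits_singularity(tau, y):                  # stop the integration before R reaches 0
    return y[0] - 1e-3
hits_singularity.terminal = True

R0 = 0.3
sol = solve_ivp(rhs, (0.0, 10.0), [R0, np.sqrt(-V(R0))],
                max_step=1e-3, events=hits_singularity)
print("maximum scale factor:", sol.y[0].max())  # ~ R_c = 2**(1/3), then recollapse
```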
The fluctuations of the brane will be described by perturbing the embedding of the brane in the bulk spacetime, i.e. by writing where the bar stands for the homogeneous quantities defined in the previous section. The four tangent vectors defined in (15) are then given by Substituting in the definition of the induced metric (16), and being careful to evaluate the (unchanged) bulk metric g AB at the perturbed brane location, one finds Using this expression, one can easily make the connection with the Bardeen potentials measuring the gauge invariant metric perturbations induced by the fluctuations of the brane position. In the longitudinal gauge, the perturbed metric reads and by comparing with (54), one finds that which gives, after using the background junction conditions (27), and The metric perturbations are thus directly proportional to the brane fluctuation ฮถ. We will return later to the evolution of the Bardeen potentials. The rest of this section is devoted to the derivation of the equation of motion that governs the evolution of the brane fluctuation. We first consider the perturbed junction conditions for the metric and then those for the scalar field. A. Perturbed junction conditions for the metric As a first step, let us evaluate the perturbed normal vector, which can always be decomposed as The coefficients ฮฑ and ฮฒ ยต can be determined by perturbing the two equations in (18). They are given by Substituting the expressions (53) and (59) in the perturbation of the extrinsic curvature tensor (17), one obtains the expression The expression with an upper index and a lower index is also useful and can be obtained from the above expression by using the relation where the indices forK ฯƒฮฝ are raised by using the inverse metrich ฯฯƒ . The explicit evaluation of the components of the perturbed brane extrinsic curvature tensor, for the metric (1), then yields In the longitudinal gauge, which we shall use, the components of the perturbed brane energy momentum tensor read where is the (traceless) anisotropic stress tensor, and the perturbed junction conditions for the metric, which follow from (27), are given by Inserting (67-69), this gives explicitly The second equation (65) determines, once ฮถ is known, the velocity potential v, except when the equation of state is w = โˆ’1, in which case one gets the constrainแนซ This implies that the perturbation reads up to a global translation of the brane. The function C(k) will be determined later. Finally, equation (74) can be decomposed into a trace and a traceless part, giving respectively The last equation simply gives and shows that the anisotropic stress is intrinsically related to the brane fluctuation. B. Perturbed junction condition for the scalar field The next step in order to establish the equations of motion for the brane fluctuations is to write down the perturbed junction condition for the scalar field. The first order perturbation of (37) yields Taking into account Z 2 symmetry and using the background junction condition (38), its derivative along the trajectory, and the other junction condition (29), one finds, after some algebra, that eq. (80) takes the form where we have introduced Combining (81), (77) and (72), one sees that the matter perturbation can be eliminated to give a differential equation that depends only on ฮถ. It has the form of a wave equation. 
and reads Introducing the function ฯˆ defined by and using the conformal time ฮท defined by dฯ„ = Rdฮท, one can rewrite the wave equation in the simple form where the effective mass is given by We have thus obtained the wave equation governing the intrinsic brane fluctuations in the general case. Initially, we started from a system of five equations, (72), (73), (81), (79) and (77), all obtained from the junction conditions, either of the metric or of the scalar field. These five equations contain one dynamical equation, which has been expressed above in terms of the quantity ฮถ (or ฯˆ) and four constraints which yield respectively the energy density ฮดฯ, the pressure ฮดP , the four-velocity potential v and the anisotropic stress ฮดฯ€. In contrast with the standard cosmological context where one can choose beforehand the relation between ฮดP and ฮดฯ, and the anisotropic stress, they are here completely determined by the constraints once a solution for ฮถ is given. This is necessary to get a configuration where the brane is fluctuating while the background is unaffected. Intuitively, this means that the gravitational effect due to the geometrical fluctuations of the brane must be exactly compensated the distribution of matter in the brane, so that the net gravitational effect due to the presence of the brane is completely cancelled in the bulk. In the rest of the paper, we will specialize our study to specific solutions, which will simplify the expression of the effective mass. IV. PERTURBATIONS IN DILATONIC BACKGROUNDS In this section we will focus on the dilatonic backgrounds described earlier, corresponding to exact solutions for an exponential potential. Using the previous general result about the mass term M 2 in the wave equation, we can now specialize these results to the dilatonic backgrounds. We will concentrate on the case where the background equation of state parameter w is constant. Substituting the solution (10) in the expression (86), one finds that the square mass reads Notice that for w = โˆ’1 only the C dependent term remains. Using the decomposition ฮถ = C(k)R we find that which leads to C(k) = 0 and therefore the absence of brane fluctuations for w = โˆ’1. In the following we will concentrate on the C = 0 case. Introducing the parameter p defined in (50), the squared mass is now given by We will now treat separately the cases of positive or negative a 2 , i.e. of ฮด, which correspond to very different behaviours. A. Bouncing branes Let us concentrate first on the case where ฮณ has the dimension of mass. The motion can be conveniently analysed by defining y = R 6(w+1) . The equation of motion (46) yields Let us define n as The scale factor is then given by the implicit relation y 1+n 1 + n F (1 + n, F being the hypergeometric function. Of course the scale factor is only determined after inverting these equations. The motion is bounded from above by y = ฮด. For a brane whose scale factor increases initially, it reaches a maximal value corresponding to ฮด before bouncing back and being irredeemingly attracted by the singularity located at R = 0. It is interesting to notice that for p โ‰ค โˆ’6 the singularity is reached at infinite conformal time while for p > โˆ’6 it takes a finite amount of conformal time to the brane in order to reach the singularity. Let us now analyse the square mass driving the brane fluctuations. We have plotted the different cases in Figure 4. There is a qualitative change of behaviour for the square mass when p > โˆ’6. 
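The wave equation obtained above is, in conformal time, an oscillator with a time-dependent mass, and the text notes that the perturbation can be evolved numerically. The sketch below integrates psi'' + (k^2 + M^2(eta)) psi = 0 with SciPy for an assumed, purely illustrative mass profile; the actual effective mass of Eq. (86) is not reproduced here.

```python
# Minimal sketch: evolve one Fourier mode of the brane fluctuation,
# psi'' + (k^2 + M^2(eta)) psi = 0, in conformal time. M2(eta) is an
# illustrative placeholder, not the effective mass (86) of the paper.
import numpy as np
from scipy.integrate import solve_ivp

k = 2.0

def M2(eta):
    return -1.0 / (1.0 + eta**2)          # illustrative tachyonic dip around eta = 0

def rhs(eta, y):
    psi, dpsi = y
    return [dpsi, -(k**2 + M2(eta)) * psi]

eta0 = -20.0
y0 = [np.cos(k * eta0), -k * np.sin(k * eta0)]   # incoming plane-wave-like data
sol = solve_ivp(rhs, (eta0, 20.0), y0, max_step=0.01)
print("final amplitude |psi|:", abs(sol.y[0, -1]))
```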
Below the critical value p = โˆ’6 the mass vanishes at the singularity when R vanishes. This leads to an oscillatory behaviour of the brane fluctuation. Above that threshold the squared mass becomes infinitely negative at the singularity leading to an instability of the brane to fluctuations, i.e. the brane tends to be ripped to shreds by the presence of the singularity. 4. M 2 in a bouncing universe. In the left picture (p โ‰ค โˆ’6), the red line corresponds to p โ‰ค โˆ’9, the green line represents โˆ’9 < p < โˆ’6 and the blue line displays the critical value p = โˆ’6. The right picture shows M 2 for p > โˆ’6 : the red line corresponds to โˆ’6 < p < 0, the green line to p = 0, the blue line stands for 0 < p < 3 whereas the yellow line stands for p โ‰ฅ 3. In the cases โˆ’9 < p < โˆ’6, the square mass is negative for small values of R, reaches a minimum and then increases up to positive values with increasing R. However, values of R greater than ฮด 1/(6(w+1)) are irrelevant since the background evolution bounces when reaching this maximum value. The position of the minimum is given by whereas the scale factor corresponding to M 2 = 0 is given by For โˆ’6 < p < 3, the square mass starts from negative values and becomes positive after the critical value y 0 which is less than ฮด only for p < โˆ’3. In other words, for cases p > โˆ’3, the region corresponding to positive square mass is irrelevant. We can recover this qualitative analysis by studying the solutions of the wave equation (85), which, in terms of the variable y, reads The variable y evolves between 0 < y < ฮด. The asymptotical behaviour of the perturbation near the singularity y = 0 depends on n : ฮด โˆ’ 1 16 ln y , n = โˆ’1 C 1 cos y n+1 + ฮฑ + C 2 sin y n+1 + ฮฑ , n < โˆ’1 Notice that for p < โˆ’6 the brane oscillates for an infinite amount of conformal time before reaching the singularity. For p > โˆ’6, the brane stops oscillating and hits the singularity in a finite amount of conformal time. For p = โˆ’6 the brane oscillates only for small length-scales corresponding tok > 4 โˆš ฮด. The Bardeen potentials, given in (57) and (58), are also worth investigating. They are proportional, and related to the brane fluctuation according to One thus notices the critical value p = โˆ’9, above which the Bardeen potentials are enhanced, for an expanding universe, with respect to ฯˆ. One can compute numerically the evolution of the perturbation, the Bardeen potential and the scale factor as a function of the conformal time (see Figure 5). B. Ever expanding branes We now turn to the case Using once more the y variable we can rewrite the background evolution equation as dy 6(1 + w) y + |ฮด| 6. M 2 in an inflationary universe. The picture p โ‰ค โˆ’6 depicts three different cases : p < โˆ’9 (red line), โˆ’9 โ‰ค p < โˆ’6 (green line) and p = โˆ’6 (blue line). In the second picture, where โˆ’6 < p โ‰ค โˆ’3, the red and blue lines illustrate โˆ’6 < p < 0, the green line stands for p = 0 and the yellow line stands for 0 < p โ‰ค 3. The third picture represents p > 3. Since p 2 > p 1 , the asymptotic behaviour at early times, i.e. at small R, is dominated, both for the background and for the perturbations, by the R p1 term (since p 2 > p 1 ), which does not depend on the sign of a 2 . Therefore, the asymptotic behaviour at early times for ever expanding branes is exactly the same as that found in the case of bouncing branes. For large R, conversely, the dominating term is R p2 . 
For the background, this leads to a power-law behaviour of the scale factor, explicitly given by which, in terms of the cosmic time, translates into R(ฯ„ ) โˆ ฯ„ 2/(p(1+w)+2) . As soon as p < 0, one gets an accelerated expansion, similar to the standard four-dimensional power-law inflation, which can be obtained from a scalar field with an exponential potential. For p > 0, one gets a decelerated power-law expansion. It is instructive to compare the power-law expansion for the brane with the standard expansion law, which is given by Substituting the expression (105) in the squared mass, one finds Note that, in the case of power-law inflation, one can derive a second-order differential equation of the form (85) for a canonical variable which is a linear combination of the scalar field perturbation and of the (scalar) metric perturbation. For a power-law a โˆผ t q , one would find It is easy to check that our expression for M 2 does not coincide with the M 2 ef f deduced from power-law inflation, for the same evolution of the background. With power-law inflation, the spectrum for the Bardeen potential(s) is given by which tends to a scale-invariant spectrum for large power q. In our case, we obtain that the fluctuations are If one assumes that ฯˆ is given in the asymptotic past ฮท โ†’ โˆ’โˆž as the usual vacuum solution in inflation, i.e. then this means that ฮฑ 2 = 0 and the behaviour on long wavelengths is given by Using the relation between ฯˆ and the Bardeen potential, one thus finds that the spectrum for ฮฆ is given by Contrary to the four dimensional inflationary case, the spectrum of the Bardeen potential is not constant outside the horizon. Moreover the spectrum is red and far from being scale-invariant. Hence, despite an inflationary phase on the brane, the intrinsic fluctuations of a brane in a dilatonic background are not a candidate for the generation of primordial fluctuations. V. CONCLUSION We have investigated the fluctuations of a moving brane in a dilatonic background. These fluctuations are represented by a scalar mode on the brane corresponding to ripples along the normal direction to the brane. As the brane fluctuates, it induces metric fluctuations, in particular we have found that the induced metric appears naturally in the longitudinal gauge with two unequal Bardeen potentials ฮฆ and ฮจ. The fact that these potentials are not equal springs from the presence of anisotropic stress on the brane. For a fixed equation of state for the matter content on the brane, for instance cold dark matter, we find that the two Bardeen potentials are proportional. As such this implies that a single gauge invariant observable ฮฆ characterizes the brane fluctuations. Our approach differs from the projective approach [4,5] in as much as we have not considered the perturbed Einstein equations on the brane. This allows us to free ourselves from the thorny problem of the projected Weyl tensor on the brane. We have focused on the motion and fluctuations of branes in a particular class of dilatonic backgrounds. These backgrounds correspond to an exponential potential and an exponential coupling of the bulk scalar field to the brane. The motion of the brane is either of the bouncing type or the ever-expanding form. In the bouncing case we find that the brane cannot escape towards infinity, it is bound to a singularity which is either null or time-like. In the time-like case, i.e. 
when it appears at a finite distance in conformal coordinates, the fluctuations of the brane are unbounded implying that the brane is ripped by the strong gravity around the singularity. In the null case, i.e. when the singularity is at conformal infinity, the fluctuations oscillate in a bounded manner while converging to the singularity. The bouncing case is equivalent to the behaviour of a brane in a supergravity background. As we only consider intrinsic fluctuations of the brane in an unperturbed bulk, this corresponds to a situation where supersymmetry is preserved by the bulk while broken by the brane motion. Therefore the bouncing brane fluctuations correspond to fluctuations of a non-BPS brane embedded in a supergravity background. In the ever-expanding scenario, we can distinguish two possibilities. The brane can escape to infinity with a scale factor which is either expanding in a decelerating manner or accelerating, i.e. corresponding to an inflationary era of the power law type. In the decelerating case, the brane eventually oscillates forever. In the inflationary case, the brane is such that any fluctuation of a giving length scales oscillates until it freezes in while passing through the horizon. Of course this scenario is reminiscent of four-dimensional inflation modelled with a scalar field. Here the features of inflation, i.e. the relationship between the power spectrum and the scale factor, differ from the four dimensional case. This is an interesting observation as it leads to a new twist in the building of inflationary models. One might hope that alternative scenarios to four-dimensional inflation may emerge from five dimensional brane models and their fluctuations.
Accurate Pinyin-English Codeswitched Language Identification Pinyin is the most widely used romanization scheme for Mandarin Chinese. We consider the task of language identification in Pinyin-English codeswitched texts, a task that is significant because of its application to codeswitched text input. We create a codeswitched corpus by extracting and automatically labeling existing Mandarin-English codeswitched corpora. On language identification, we find that SVM produces the best result when using word-level segmentation, achieving 99.3% F1 on a Weibo dataset, while a linear-chain CRF produces the best result at the letter level, achieving 98.2% F1. We then pass the output of our models to a system that converts Pinyin back to Chinese characters to simulate codeswitched text input. Our method achieves the same level of performance as an oracle system that has perfect knowledge of token-level language identity. This result demonstrates that Pinyin identification is not the bottleneck towards developing a ChineseEnglish codeswitched Input Method Editor, and future work should focus on the Pinyinto-Chinese character conversion step. Introduction As more people are connected to the Internet around the world, an increasing number of multilingual texts can be found, especially in informal, online platforms such as Twitter and Weibo 1 (Ling et al., 2013). In this paper, we focus on short Mandarin-English mixed texts, in particular those that involve intra-sentential codeswitching, in which the two languages are interleaved within a single utterance or sentence. Example 1 shows one such case, including the original codeswitched text (CS), and its Mandarin (MAN) and English (EN) translations: (1) CS: ่ฟ™ไธชthermal exchanger็š„thermal conductivityๅคชไฝŽ. MAN: ่ฟ™ไธชๆข็ƒญๅ™จ็š„็ƒญไผ ๅฏผ็ณปๆ•ฐๅคชไฝŽ. EN: The thermal conductivity of this thermal exchanger is too low. A natural first step in processing codeswitched text is to identify which parts of the text are expressed in which language, as having an accurate codeswitched language identification system seems to be a crucial building block for further processing such as POS tagging. Recently, Solorio et al. (2014) organized the first shared task towards this goal. The task is to identify the languages in codeswitched social media data in several language pairs, including Mandarin-English (MAN-EN). Since Chinese characters are assigned a different Unicode encoding range than Latin-script languages like English, identifying MAN-EN codeswitched data is relatively straightforward. In fact, the baseline system in the shared task, which simply stores the vocabularies of the two languages seen during training, already achieves 90% F1 on identifying Mandarin segments. Most of the remaining errors are due to misclassifying English segments and named entities, which constitute a separate class in the shared task. We focus in this paper on performing language identification between Pinyin and English, where Pinyin is the most widely used romanization schemes for Mandarin. It is the official standard in the People's Republic of China and in Singapore. It is also the most widely used method for Mandarin speaking users to input Chinese characters using Latin-script keyboards. Example 2 shows the same codeswitched sentence, in which the Chinese characters have been converted to Pinyin: (2) Zhege Thermal Exchanger de Thermal Conductivity taidi. 
Distinguishing Pinyin from English or other languages written with the Roman alphabet is an important problem with strong practical motivations. Learners of both English and Chinese could benefit from a system that allows them to input codeswitched text (Chung, 2002). More generally, accurate Pinyin-English codeswitched language identification could allow users to input Mandarin-English codeswitched text more easily. A Chinese Input Method Editor (IME) system that detects Pinyin and converts it into the appropriate Chinese characters would save users from having to repeatedly toggle between the two languages when typing on a standard Latin-script keyboard. Since Pinyin is written with the same character set as English 2 , character encoding is no longer a reliable indicator of language. For example, she, long, and bang are Pinyin syllables that are also English words. Tisane is a English word, and is also a concatenation of three valid Pinyin syllables: ti, sa, and ne. Thus, contextual information will be needed to resolve the identity of the language. Our contributions are as follows. First, we construct two datasets of Pinyin-English codeswitched data by converting the Chinese characters in Mandarin-English codeswitched data sets to Pinyin, and propose a new task to distinguish Pinyin from English in this codeswitched text. Then, we compare several approaches to solving this task. We consider the level of performance when training the model at the level of words versus individual letters 3 in order to see whether having word boundaries would affect performance. Two standard classification methods, SVMs and linear-chain CRFs are compared for both settings. We find that SVM produces better results on the word-level setting, achieving 99.3% F1 on a Weibo dataset. CRF produces better results on the letter-level setting, achieving 98.2% F1 on the same dataset. Lastly, we pass the output of our models to a system that converts Pinyin back to Chinese characters as an extrinsic evaluation. The result shows that word-level models produce better conversion performance. Our automatic conversion method achieves the same level of performance as an oracle system with perfect knowledge of token-level language identity. This result demonstrates that Pinyin identification is not the bottleneck towards developing a Chinese-English codeswitched IME, and that future work should focus on the Pinyin-to-Chinese character conversion step. Related Work Several models for MAN-EN codeswitched language identification were developed as part of the First Shared Task on Language Identification in Codeswitched Data (Chittaranjan et al., 2014;King et al., 2014). The most common technique was to employ supervised machine learning algorithms (e.g., extended Markov Models and Conditional Random Field) to train a classifier. Codeswitched language identification has been previously studied with other language pairs, (Carter et al., 2013;Nguyen and Dogruoz, 2014;Das and Gambรคck, 2013;Voss et al., 2014). However, very few articles discuss codeswitched Pinyin-English input specifically. There has been research on improving the error tolerance of Pinyinbased IME. Chen and Lee (2000) propose a statistical segmentation and a trigram-based language model to convert Pinyin sequences into Chinese character sequences in a manner that is robust to single-character Pinyin misspellings. They also propose a paradigm called modeless Pinyin that tries to eliminate the necessity of toggling on and off the Pinyin input method. 
While their modeless Pinyin works on Pinyin generating a single Chinese character or a single English word, our experiments in this paper attempt to generate an entire sequence of Chinese characters and English words. Research in improving the codeswitched text input experience also exists for other languages that use a non-alphabetic writing system, such as Japanese. Ikegami and Tsuruta (2015) propose a modeless Japanese input method that automatically switches the input mode using models with n-gram based binary classification and dictionary. Task Definition Given a Pinyin-English codeswitched input as shown in Example 2, the main task is to identify the segments of the input that are in Pinyin as pinyin, segments that are in English as non-pinyin, punctuation and whitespaces as other as shown in Example 3. The other label is used to tag tokens that do not represent actual words in both languages. Segments in bold, italic and underlined are labeled as non-pinyin, pinyin and others, respectively. Separating other label from non-pinyin prevents such tokens identifiable using a simple dictionary method from artificially inflating the performance of the models during evaluation. We do not follow the annotation scheme of the shared task in putting named entities into their own class (Solorio et al., 2014). In the Pinyin-English case, named entities clearly belong to either the pinyin class or the non-pinyin class, and Pinyin named entities would eventually need to be converted to Chinese characters in any case. Furthermore, named entity annotations are not available in the Weibo corpus that we construct. Corpus Construction We created two Pinyin-English codeswitched corpora automatically, by converting Mandarin-English codeswitched data. Our Mandarin-English corpora are obtained via two sources. ENMLP We used the training data provided by the First Workshop on Computational Approaches to Code Switching of EMNLP 2014 (Solorio et al., 2014). The workshop provides a codeswitched Mandarin-English training data that contains 1000 tweets crawled from Twitter. The Chinese part of the data is in traditional Chinese. WEIBO We downloaded 102995 Weibo entries using Cnpameng (Liang et al., 2014), a freely available Weibo crawler. Most of the entries are written in Simplified Chinese. Only a small proportion of the entries (about 1%) contain Mandarin-English codeswitched content. We removed entries that are not codeswitched and sampled 3000 of the remaining entries. In this corpus, most tokens are Chinese, with only one or two English words embedded. Chinese characters account for about 95% of the tokens. Preprocessing and labeling The Mandarin-English codeswitching corpora are not directly usable in our experiments; we need to first convert the Chinese characters to Pinyin. We used jieba 4 , a library for segmenting a Chinese sentence into a list of Chinese words. For each word, we used pypinyin 5 , a Python library that converts Chinese words, both Traditional Chinese and Simplified Chinese, into Pinyin. We then label each Pinyin sequence as pinyin, white spaces and punctuation as other, and English words as non-pinyin, as described above. The Mandarin-English codeswitched data we collected all contain short sentences 6 . This was by design, as we are interested in intra-sentential codeswitching. We expect that inter-sentential codeswitching would not require frequent Pinyin IME mode toggling, and labeling them for their language would also be easier. 
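A minimal Python sketch of the preprocessing and labelling step just described is given below, assuming the jieba and pypinyin packages mentioned above. The label names follow the paper's scheme (pinyin / non-pinyin / other); the handling of URLs, user names and other social-media tokens in the authors' pipeline is not reproduced.

```python
# Minimal sketch of corpus construction: segment Chinese with jieba, convert
# each Chinese word to Pinyin with pypinyin, and label segments as
# pinyin / non-pinyin / other. Digits and other non-word tokens are grouped
# with "other" here for simplicity.
import re
import jieba
from pypinyin import lazy_pinyin

def convert_and_label(sentence):
    labelled = []
    for token in jieba.cut(sentence):
        if re.search(r'[\u4e00-\u9fff]', token):          # contains Chinese characters
            labelled.append((''.join(lazy_pinyin(token)), 'pinyin'))
        elif re.fullmatch(r'[A-Za-z]+', token):
            labelled.append((token, 'non-pinyin'))
        else:                                             # whitespace, punctuation, digits
            labelled.append((token, 'other'))
    return labelled

print(convert_and_label("这个thermal exchanger的thermal conductivity太低."))
```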
The frequency counts of each label in the EMNLP corpus and the WEIBO corpus are shown in Tables 1 and 2, respectively. Models We propose two classes of models to solve the task: Word-Based Models and Letter-Based Models. They differ in how the input is segmented. We compared these two segmentation schemes with the goal to test whether automatic Pinyin word segmentation is needed to accurately identify Pinyin and English tokens. Word-Based Models (WBM) The input is segmented into one of (1) a Pinyin sequence representing Chinese words 7 , (2) an English word, or (3) other (space and punctuation). Each chunk is labeled as one of pinyin, non-pinyin or other. The Pinyin sequences representing Chinese words are indirectly segmented according to the word segmentation of 7 Note that a Chinese word can be either a single Chinese character or a concatenation of multiple Chinese characters. the corresponding Mandarin characters. Example 3 illustrates the WBM. Letter-Based Models (LBM) The input is segmented into individual letters. Each letter is labeled as one of pinyin, non-pinyin or other. For each of the schemes above, we experimented with two discriminative supervised machine learning algorithms: Support Vector Machines (SVMs) and linear-chain Conditional Random Fields (CRFs). We chose to experiment with SVMs and CRFs in order to see whether a standard classification approach suffices or if a sequence model is needed. These methods have also been shown to perform well in previous work on language identification tasks (King and Abney, 2013;Chittaranjan et al., 2014;Lin et al., 2014;Bar and Dershowitz, 2014;Goutte et al., 2014). Feature Extraction We selected features to pass into our models by drawing upon recent work in codeswitched language identification (Chittaranjan et al., 2014;Lin et al., 2014;Nguyen and Dogruoz, 2014). We explored a number of different options for features, and the final set was chosen based on performance on a heldout development set, as follows. Word-Based Models (WBM) The following features were chosen for each segment s: โ€ข Identity of s, converted to lower case โ€ข Whether s is a legal sequence of Pinyin โ€ข Whether s is upper-cased โ€ข Whether s is capitalized โ€ข Whether s is a number โ€ข The token occurring prior to s in the sequence โ€ข The token occurring after to s in the sequence Letter-Based Models (LBM) The following features were chosen for each segment t: โ€ข Identity of t, converted to lower case โ€ข Whether t is upper-cased โ€ข Whether t is a number โ€ข The token occurring prior to t in the sequence โ€ข The token occurring after to t in the sequence We initially experimented with several other features, but found that they did not improve performance on the development set, so we did not include them in the final system. In the WBM setting, we tried adding Part-Of-Speech (POS) tags as features, but found that existing POS taggers do not handle codeswitched data well. For both WBM and LBM, we tried to add a boolean feature to indicate whether the segment is at the start or end of the input but this turned out to be unhelpful. Baseline Dictionary-Based Method We compared these methods against a baseline, which labels a concatenation of valid Pinyin syllables 8 as pinyin, whitespaces and punctuation as others and the rest as non-pinyin. Experiment 1: Language Identification We tested our models on the two codeswitching corpora that we created. We split each corpus into training (80%) and testing (20%) subsets. 
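For the dictionary-based baseline described above, a token is labelled pinyin when it can be segmented into legal Pinyin syllables. The sketch below implements that check as a simple dynamic program over prefixes; the syllable set is a tiny illustrative subset (the full inventory has roughly 400 syllables, ignoring tones).

```python
# Sketch of the dictionary baseline: label a token "pinyin" if it can be split
# into legal Pinyin syllables. PINYIN_SYLLABLES is a tiny illustrative subset;
# the real inventory has roughly 400 toneless syllables.
PINYIN_SYLLABLES = {
    "zhe", "ge", "de", "tai", "di", "ti", "sa", "ne",
    "she", "long", "bang", "ma", "li", "hua", "you",
}

def is_pinyin_concatenation(token):
    token = token.lower()
    n = len(token)
    reachable = [False] * (n + 1)      # reachable[i]: token[:i] is a syllable concatenation
    reachable[0] = True
    for i in range(n):
        if not reachable[i]:
            continue
        for j in range(i + 1, min(i + 6, n) + 1):   # Pinyin syllables have at most 6 letters
            if token[i:j] in PINYIN_SYLLABLES:
                reachable[j] = True
    return reachable[n]

for w in ["taidi", "tisane", "exchanger"]:
    print(w, "->", "pinyin" if is_pinyin_concatenation(w) else "non-pinyin")
```

Note that "tisane" is labelled pinyin by this check, which is exactly the kind of ambiguity the contextual models are meant to resolve.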
We also created a held-out development set by randomly sampling 100 entries from the EMNLP corpus, and used it to select the feature sets described in Section 4.1. We kept the same set of features for the WEIBO corpus, without performing any additional tuning. We trained the CRF model using CRFsuite (Okazaki, 2007) and the SVM model using Scikit-learn (Pedregosa et al., 2011). The models were tested using commonly defined evaluation measures: Precision, Recall and F1 (Powers, 2011), at the word level for WBMs and at the letter level for LBMs. As shown in Table 3, all the WBM machine learning algorithms performed better than the baseline. The average P, R and F1 for each model were calculated without taking into account the values from the other label. This prevents the other class, which can largely be predicted by whether the segment is a whitespace or punctuation character, from artificially inflating results. The SVM-WBM model performed the best, with an F1 of 0.980 on the EMNLP corpus and 0.993 on the WEIBO corpus. In the LBM setting, only CRF outperformed the baseline, with an F1 of 0.962 on the EMNLP corpus and 0.982 on the WEIBO corpus. Note that the baseline F1 for the other class is not at 1.0 because the baseline method's dictionary of punctuation and whitespace characters was constructed from the training set and does not exhaustively cover all possible characters of this class. Since there is, to our knowledge, no previous study on Pinyin-English codeswitched text input, we cannot perform a direct comparison against existing work. In terms of similar tasks involving codeswitched text, the top-performing MAN-EN language identification system achieved an F1 of 0.892 (Chittaranjan et al., 2014), but the annotation scheme includes a category for named entities. Ikegami and Tsuruta (2015) achieved an F1 of 0.97 on codeswitched Japanese-English text input using an n-gram-based approach. In the WEIBO corpus, the F1 performances of the models are very high, at up to 0.982 and 0.993. This could be because each entry in the Weibo corpus contains only one or two occurrences of single English words embedded into a sequence of Pinyin. The non-pinyin words are often proper nouns (these tokens are often capitalized), English words or acronyms that do not have a translation in Chinese. In this context, it is less common to see English words that are also valid Pinyin. While SVM performs the best with WBM, it does not perform as well with LBM. The lower performance of SVM-LBM is caused by the limited access to contextual information (only the letter directly before and after each token). By contrast, CRF-LBM can naturally take into account the sequence ordering information. This result shows that a sequence model like CRF is needed for LBM. Table 3: The performance of the models in terms of precision (P), recall (R), and F1, for each of the three classes. The avg/total row represents the average of the pinyin and non-pinyin classes, weighted by their frequencies in the dataset. We excluded the other category from the avg, because that class mostly consists of whitespace and punctuation. Discussion and error analysis We consider here the causes of the remaining errors. First, some errors are due to segments that are both legal English words and legal Pinyin sequences, as discussed in Section 4.2. For example, you (有), a word that occurs both in Mandarin and English with high frequency, is difficult for our models.
Having additional POS information available to the models would be helpful, as ๆœ‰ is a verb in Mandarin, while you is a pronoun in English. Another source of errors is the presence of mixed Pinyin, Chinese characters and English within individual words. These errors are often found in user names (i.e., Twitter handlers). For example: (4) @eggchen ๅ‘ผๅซnicoๆˆ‘ๅฎŒๅ…จไธๅˆฐไฝ ่€ถ The twitter handle eggchen, labeled as non-pinyin in the gold standard, is a concatenation of English word egg and raw Pinyin chen. With CRF-LBM, since the word boundary information was not available, CRF-LBM wrongly labels chen as pinyin, sep-arated from eggchen. Finally, the LBMs sometimes fail to correctly predict codeswitching boundaries. Taking an example in the dataset: "desledge" was an input sequence where "de" has gold standard label pinyin and "sledge" has gold standard label non-pinyin. In the CRF-WBM, the word boundary information is given, so the model is able to predict the labels correctly. The CRF-LEMB model predicted that the entire sequence "deseldge" is "non-pinyin". 6 Experiment 2: Converting Pinyin to Chinese characters Next, we experimented with converting the Pinyin-English codeswitched inputs back to the original Mandarin-English sentences in an end-to-end, extrinsic evaluation. Pinyin is the most widely used method for Mandarin users to input Chinese characters using Latin-script keyboards. Improvements in the language identification step transfer over to the next step of Chinese characters generation. Converting Pinyin to Chinese characters is not an easy task, as there are many possible Chinese characters for each Pinyin syllable. Modern Pinyin Input Methods use statistical methods to rank the possible character candidates in descending probability and predict the top-ranked candidate as the output (Chen and Lee, 2000;Zheng et al., 2011). Task Given a Pinyin-English codeswitched input and the corresponding labels produced by our codeswitched language identification models, produce Mandarin-English codeswitched output by converting the parts labelled as Pinyin to Chinese characters. Method We use a representative approach that models the conversion from Pinyin to Chinese characters as a Hidden Markov model (HMM), in which Chinese characters are the hidden states and Pinyin syllables are the observations. The model is trained from SogouT corpus (Liu et al., 2012), and the Viterbi algorithm is used to generate the final output. We used a Python implementation of this model 9 to convert pinyin segments to Chinese characters while leaving others and non-pinyin segments unchanged. We use the Pinyin-English codeswitched input, paired with language identification labels from Baseline, SVM-WBM, or CRF-LBM to generate Mandarin-English codeswitched output. We then evaluated these outputs against the gold standard by measuring precision, recall, and F1 on the Chinese characters. We also compare against an oracle topline, which has perfect knowledge of the segmentation of the input into Pinyin vs non-Pinyin. For the CRF-LBM, we used the Smith-Waterman algorithm (Smith and Waterman, 1981) to align the output produced by the CRF-LBM method with the gold-standard words. As shown in Tables 4 and 5, with SVM-WBM, the F1 of the generated Mandarin-English codeswitched outputs are better than the baseline in both corpora for all labels, and achieves a level of performance close to the oracle method. 
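A compact, generic Viterbi decoding sketch for the HMM conversion step described above (Chinese characters as hidden states, Pinyin syllables as observations) is given below. The toy probability tables are placeholders, not the SogouT-trained model used in the experiments; in practice the transition table would typically come from character n-gram counts and the emission table from the character-to-Pinyin mapping.

```python
# Generic Viterbi sketch for the Pinyin-to-character HMM: hidden states are
# Chinese characters, observations are Pinyin syllables. The tiny probability
# tables are toy placeholders, not the SogouT-trained model.
import math

start_p = {"这": 0.6, "着": 0.4}
trans_p = {"这": {"个": 0.7, "么": 0.3}, "着": {"个": 0.2, "么": 0.8},
           "个": {"个": 0.5, "么": 0.5}, "么": {"个": 0.5, "么": 0.5}}
emit_p = {"这": {"zhe": 1.0}, "着": {"zhe": 1.0}, "个": {"ge": 1.0}, "么": {"me": 1.0}}
FLOOR = 1e-12                                     # probability floor for unseen events

def viterbi(observations):
    states = list(emit_p)
    # V[s] = (log probability of the best path ending in state s, that path)
    V = {s: (math.log(start_p.get(s, FLOOR))
             + math.log(emit_p[s].get(observations[0], FLOOR)), [s])
         for s in states}
    for obs in observations[1:]:
        V_new = {}
        for s in states:
            prev = max(states, key=lambda q: V[q][0] + math.log(trans_p[q].get(s, FLOOR)))
            score = (V[prev][0] + math.log(trans_p[prev].get(s, FLOOR))
                     + math.log(emit_p[s].get(obs, FLOOR)))
            V_new[s] = (score, V[prev][1] + [s])
        V = V_new
    best = max(states, key=lambda s: V[s][0])
    return "".join(V[best][1])

print(viterbi(["zhe", "ge"]))   # -> 这个
```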
This result shows that our contribution to accurate language identification in transliteration codeswitching pairs is able to improve the performance of Pinyin IME in codeswitching context. Furthermore, the result demonstrates that Pinyin identification is not the bottleneck towards developing a Chinese-English codeswitched IME, at least if word boundary information is given. Future work should focus on the Pinyin-to-Chinese character conversion step. Model The higher performance of the WBM models compared to the LBM models suggests that having correct word boundaries is crucial for identifying Pinyin-English codeswitched input at higher accuracies. Note that F1 measure of both the oracle, Baseline and SVM-WBM models are better in WEIBO corpus in comparison to EMNLP corpus. This is backed by their higher F1-measure in the language identification step. Error analysis There is much room for improvement in the results. Despite CRF-LBM achieving higher than Baseline F1-measure in the language identification steps, the Mandarin-English generation accuracy of CRF-LBM is lower than baseline. The sources of this lower accuracy are the following: Presence of mixed raw pinyin. As described in Section 5.2, CRF-LBM labels the majority of raw pinyin as "pinyin". Consequently, it made the mistake of converting them to Chinese characters where they should not be. Failure to properly predict codeswitching word boundary. An example was given previously in Section 5.2. Each failure in predicting codeswitching word boundary produces two errors, one for the first word and one for the second. This double penalty explains why despite of CRF-LBM having higher F1 than baseline in experiment 1, it is doing worse than baseline in experiment 2. Conclusion Having an accurate codeswitched language identification system serves as a crucial building block for further processing such as POS tagging. Our results on Pinyin-English codeswitched language identification experiments provide novel contributions to language identifications on transliteration pairs. We find that SVM performs the best at the word level while CRF performs the best at the letter level. In the second experiment, we developed an automatic method that converts Pinyin-English codeswitched text to Mandarin-English text as an extrinsic evaluation of our models. We showed that word-level models produce better conversion performance. One of our automatic word-level methods achieves the same level of performance as an oracle system that has perfect knowledge of tokenlevel language identity. This result demonstrates that Pinyin identification is not the bottleneck towards developing a Chinese-English codeswitched IME, and future work should focus on the Pinyinto-Chinese character conversion step. Our approach could also be considered for other languages with non-Latin-based writing systems and a corresponding romanization scheme, such as Japanese and Romaji (Krueger and Neeson, 2000).
Copula Graphical Models for Heterogeneous Mixed Data This article proposes a graphical model that handles mixed-type, multi-group data. The motivation for such a model originates from real-world observational data, which often contain groups of samples obtained under heterogeneous conditions in space and time, potentially resulting in differences in network structure among groups. Therefore, the i.i.d. assumption is unrealistic, and fitting a single graphical model on all data results in a network that does not accurately represent the between group differences. In addition, real-world observational data is typically of mixed discrete-and-continuous type, violating the Gaussian assumption that is typical of graphical models, which leads to the model being unable to adequately recover the underlying graph structure. The proposed model takes into account these properties of data, by treating observed data as transformed latent Gaussian data, by means of the Gaussian copula, and thereby allowing for the attractive properties of the Gaussian distribution such as estimating the optimal number of model parameter using the inverse covariance matrix. The multi-group setting is addressed by jointly fitting a graphical model for each group, and applying the fused group penalty to fuse similar graphs together. In an extensive simulation study, the proposed model is evaluated against alternative models, where the proposed model is better able to recover the true underlying graph structure for different groups. Finally, the proposed model is applied on real production-ecological data pertaining to on-farm maize yield in order to showcase the added value of the proposed method in generating new hypotheses for production ecologists. Introduction Gaussian graphical models are statistical learning techniques used to make inference on conditional independence relationships within a set of variables arising from a multivariate normal distribution Lauritzen (1996). These techniques have been successfully applied in a variety of fields, such as finance (Giudici and Spelta, 2016), biology (Krumsiek et al., 2011), healthcare (Gunathilake et al., 2020) and others. Despite their wide applicability, the assumption of multivariate normality is often untenable. Therefore, a variety of alternative models have been proposed, in, for example, the case of Poisson or exponential data (Yang et al., 2015), ordinal data (Guo et al., 2015) and the Ising model in the case of binary data. More general, despite the availability of approaches that do not impose specific distributions on the data, they are limited by their inability to allow for nonbinary discrete data (Liu et al., 2012;Fan et al., 2017) or contain a substantial number of parameters (Lee & Hastie, 2015). Dobra and Lenkoski (2011) developed a type of Gaussian graphical model that allows for mixedtype data, by combining the theory of copulas, Gaussian graphical models and the rank likelihood method (Hoff, 2007). Whereas this model consisted of a Bayesian framework, Abegaz and Wit (2015) proposed a frequentist alternative, reasoning that the choice of priors for the inverse covariance matrix is nontrivial. Both the Bayesian and frequentist approaches have seen further development and application to real problems in the medical (Mohammadi et al., 2017) and biomedical sciences (Behrouzi & Wit, 2019). Notwithstanding distributional assumptions, all aforementioned methods assume that the data is i.i.d. (independent and identically distributed). 
However, real-world observational data often contain groups of samples obtained under heterogeneous conditions in space and time, potentially resulting in differences in network structure among groups. Therefore, the i.i.d. assumption is unrealistic, and fitting a single graphical model on all data results in a network that does not accurately represent the between group differences. Conversely, fitting each graph separately for each group fails to take advantage of underlying similarities that may exist between the groups, thereby possibly resulting in highly variable parameter estimates, especially if the sample size for each group is small (Guo et al., 2011). For these reasons, during the last decade, several researchers have developed graphical models for so-called heterogeneous data, that is, data consisting of various groups (Guo et al., 2011;Danaher et al., 2014;Xie et al., 2016). Akin to graphical models for homogeneous data, research on heterogeneous graphical models has mainly pertained to the Gaussian setting, despite mixed-type heterogeneous data occurring in a wide variety of situations, such as multi-nation survey data, meteorological data measured at different locations, or medical data of different diseases. Consequently, the aim of this article is to fill the methodological gap that is graphical models for heterogeneous mixed data. Even though Jia and Liang (2020) aimed to close this methodological gap using their joint mixed learning model, the effectiveness of said model has only been shown in the case where the data follow Gaussian or binomial distributions. This is not always the case in real-world applications. In addition, the model is unable to handle missing data, which tend to be the norm, rather than the exception in real-world data (Nakagawa & Freckleton, 2008). Despite Jia and Liang also including an R package with their method, it is currently depreciated and not usable for graph estimation. Motivated by an application of networks on disease status, Park and Won (2022) recently proposed the fused mixed graphical model: a method to infer graph structures of mixed-type (numerical and categorical) data for multiple groups. This approach is based on the mixed graphical model by Lee and Hastie (2013), but extended to the multi-group setting. The proposed model assumes that the categorical variables given all other variables follow a multinomial distribution and all numeric variables follow a Gaussian distribution given all other variables, which is not realistic in the case of Poisson, or non-Gaussian continuous variables. Moreover, the imposed penalty function consists of 6 different penalty parameters to be estimated for 2 groups, which only grows further as the number of groups increases, resulting in the FMGM being prohibitively computationally expensive. Furthermore, no comparative analysis is done with existing methods, but only to a separate network estimation, giving no indication of comparative performance on different types of data. Finally, the FMGM is not accompanied by an R package that allows for such comparative analyses. There is a need for a method that can handle more general mixed-type data consisting of any combination of continuous and ordered discrete variables in a heterogeneous setting, which to the best of our knowledge does not exist at present. 
Borrowing from recent developments in copula graphical models, the proposed method can handle Gaussian, non-Gaussian continuous, Poisson, ordinal and binomial variables, thereby letting researchers model a wider variety of problems. All code used in this article can be found at https://github.com/sjoher/ cgmhmd-analysis, whilst the R package can be found at https://github.com/sjoher/cgmhmd. Application to production ecological data Interest in relationships between multiple variables based on samples obtained over different locations and time-points is particularly common in production-ecology, a science that aims to understand and predict the productivity of agricultural systems (e.g. yield) as a function of their genetic biological components (G), the production environment (E) and human management (M). Production-ecological data typically consist of observations from different crops, seasons, environments, or management conditions and research is likely to benefit from the use of graphical models. Moreover, production ecological data tends to be of mixed-type, consisting of (commonly) Gaussian, non-Gaussian continuous and Poisson environmental data, but also ordinal and binomial management data. A typical challenge for production-ecological research lies in explaining variability in observed yields as a function of a wide set of potential enabling and constraining variables. This is typically done by employing linear models or basic machine learning methods such as random forest that model yield as a function of a set of covariates (Ronner et al., 2016;Bielders & Gรฉrard, 2015;Palmas & Chamberlin, 2020). However, advanced statistical models such as graphical models have not yet been introduced to this field. As graphical models are used to represent the conditional dependencies underlying a set of variables, we expect that these models can greatly aid researchers' understanding of Gร—Eร—M interactions by way of exposing new, fundamental relationships that affect plant production, which have not been captured by methods that are commonly used in the field of production ecology. Therefore, we use this field as a way to illustrate our proposed method and thereby introduce graphical models in general to production ecologists. This article extends the Gaussian copula graphical model to allow for heterogeneous, mixed data, where we showcase the effectiveness of the novel approach on production-ecological data. To this end, in Section 2, the proposed methodology behind the Gaussian copula graphical model for heterogeneous data is presented. Section 3 presents an elaborate simulation study, where the performance of the newly proposed method compared to other types of graphical models is evaluated. An application of the new method on production-ecological data consisting of multiple seasons is given in Section 4. Finally, the conclusion can be found in Section 5. Methodology A Gaussian graphical model corresponds to a graph G = (V, E) that represents the full conditional dependence structure between variables represented by a set of vertices V = {1, 2, . . . , p} through the use of a set of undirected edges E โŠ‚ V ร— V , and depends on a n ร— p data matrix X = (X 1 , X 2 , . . . , X p ), X j = (X 1j , X 2j , . . . , X nj ) T , j = 1, . . . , p, where X โˆผ N p (0, ฮฃ), with ฮฃ = ฮ˜ โˆ’1 . ฮ˜ is known as the precision matrix containing the scaled partial correlations: ฯ ij = โˆ’ ฮ˜ij โˆš ฮ˜iiฮ˜jj . 
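A small Python sketch of the relation just stated between the precision matrix and the scaled partial correlations, and hence of how the edge set of the graph is read off Theta, is given below. The example correlation matrix is illustrative only.

```python
# Sketch: edges of a Gaussian graphical model from the precision matrix, via
# the scaled partial correlations rho_ij = -Theta_ij / sqrt(Theta_ii * Theta_jj).
import numpy as np

Sigma = np.array([[1.00, 0.50, 0.25],
                  [0.50, 1.00, 0.50],
                  [0.25, 0.50, 1.00]])     # illustrative AR(1)-type correlation matrix
Theta = np.linalg.inv(Sigma)               # precision matrix

d = np.sqrt(np.diag(Theta))
partial_corr = -Theta / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

edges = np.abs(partial_corr) > 1e-8        # (i, j) in E  <=>  Theta_ij != 0
np.fill_diagonal(edges, False)
print(np.round(partial_corr, 3))
print(edges)                               # variables 1 and 3 are conditionally independent
```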
Thus, the partial correlation ฯ ij represents the independence between X i and X j conditional on X V \ij . Therefore, (i, j) โˆˆ E โ‡” ฮ˜ ij = 0. Copula graphical models for heterogeneous data p , where k = 1, 2, . . . , K represents the group index, indicating differential genotypic, environmental or management situations, and X (k) j is a column of length n k , where n k is not necessarily equal to n k for k = k and the data are of mixed-type, i.e. non-Gaussian, counts, ordinal or binomial data, as obtained from measurements on different genotypic, environmental, management and production variables. Moreover, the data across the different groups are not i.i.d. For group k, a general form of the joint cumulative density function is given by As the Gaussian assumption is violated for the X (k) , maximum likelihood estimation of ฮ˜ (k) based on a Gaussian model will not suffice. For joint densities consisting of different marginals, as in (1), copulas can be applied to model the joint dependency structure between the variables (Nelsen 2007). In the copula graphical model literature, each observed variable X j is assumed to have arisen by some perturbation of a latent variable Z j , where Z โˆผ N p (0, ฮฃ), with correlation matrix ฮฃ. The choice for a Gaussian latent variable is motivated by the familiar closed-form of the density and the fact that the Gaussian copula correlation matrix enforces the same conditional (in)dependence relations as the precision matrix of graphical models (Dobra & Lenkoski, 2011;Behrouzi & Wit, 2019;Abegaz & Wit, 2015). This article also assumes a Gaussian distribution for the latent variables such that where ฮฃ (k) โˆˆ R pร—p represents the correlation matrix for group k. The latent variables are linked to the observed data as X where the F j , j = 1, 2, . . . , p are observed continuous and ordered discrete variables taking values (in the discrete case) in {0, 1, . . . , d being the number of categories of variable j in group k. A visualization of the relationship between the latent and observed variables is given in Figure 1. The copula function joining the marginal distributions is denoted as j ) is standard uniform (Casella & Berger, 2001) and, due to the Gaussian assumption of the Z (k) j , can be written as where ฮฆ ฮฃ (k) () is a cdf of a multivariate normal distribution with correlation matrix ฮฃ (k) โˆˆ R pร—p . As the ฮฆ() is always nondecreasing and the F (k) โˆ’1 j (t) are nondecreasing due to the ordered nature of the data, we have that Hoff (2007). Thus, we have that z i j }. From here on out, we refer to the set of intervals containing the latent data D(x) = {z โˆˆ R K n k ร—p : z j )} as D. In order to facilitate the joint estimation of the different ฮ˜ (k) , the probability density function over all K groups is given as where c(F p )) is the copula density function and f (k) j is the marginal density function for the j-th variable and the k-th group. This copula density is obtained by taking the derivative of the cdf with respect to the marginals. As the Gaussian copula is used, the copula density function can be rewritten as: p )) and I is a p ร— p identity matrix. The full log-likelihood over K groups is then given by where X = (X (1) , . . . , X (K) ) T . We denote {ฮ˜ (k) } K k=1 as ฮ˜ for the purpose of simplicity. The two rightmost terms in the penultimate line of (3) were omitted, as they are constant with respect to ฮ˜ (k) because of the standard normal marginals. 
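For reference, the Gaussian copula that joins the margins, written in the standard form consistent with the notation used here (a recap rather than a new result):

```latex
% Gaussian copula joining the margins F_1^{(k)},...,F_p^{(k)} for group k.
\begin{equation}
  C\big(u_1,\dots,u_p;\,\Sigma^{(k)}\big)
    = \Phi_{\Sigma^{(k)}}\!\big(\Phi^{-1}(u_1),\dots,\Phi^{-1}(u_p)\big),
  \qquad u_j = F_j^{(k)}\big(x_j^{(k)}\big),
\end{equation}
so that the latent Gaussian variables satisfy $z_j^{(k)} = \Phi^{-1}\big(F_j^{(k)}(x_j^{(k)})\big)$, up to the interval constraints collected in the set $D$ for the discrete margins.
```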
Model estimation

When estimating the marginals, a nonparametric approach is adhered to, as is common in the copula literature. This is due to the computational costs involved in their estimation and because we only care about the dependencies encoded in the $\Theta^{(k)}$. They are estimated as $\hat F^{(k)}_j(x) = \frac{1}{n_k + 1}\sum_{i=1}^{n_k} \mathbb{1}\big(X^{(k)}_{ij} \le x\big)$. Whilst (3) allows for the joint estimation of the graphical models pertaining to the different groups, these models are not sparse and relations cannot be enforced to be the same. Sparsity is a common assumption in biological networks and production ecology is not an exception. Consider for example the solubilization of fertiliser, which is independent of root activity (de Wit, 1953), the independence between nitrogen and yield for certain crops (Raun et al., 2011), or, more generally, the independence between weather and various management techniques. Moreover, if certain groups are highly similar, for example different locations with similar climates, enforcing relations between those groups to be the same is both realistic and parsimonious. Therefore, a fused-type penalty is imposed upon the precision matrix, such that the penalised log-likelihood function has the following form:

$\ell_{\lambda_1,\lambda_2}(\Theta \mid X) = \ell(\Theta \mid X) - \lambda_1 \sum_{k=1}^{K}\sum_{j \neq j'} \big|\theta^{(k)}_{jj'}\big| - \lambda_2 \sum_{k < k'}\sum_{j \neq j'} \big|\theta^{(k)}_{jj'} - \theta^{(k')}_{jj'}\big|, \quad (4)$

for $1 \le k \neq k' \le K$ and $1 \le j \neq j' \le p$. Here, λ1 controls the sparsity of the K different graphs and λ2 controls the edge similarity between the K different graphs. Higher values for λ1 and λ2 correspond to, respectively, more sparse and more similar graphs, where similarity is not only limited to similar sparsity patterns in the different $\Theta^{(k)}$, but also extends to attaining exactly the same coefficients across the different $\Theta^{(k)}$. The fused-type penalty for heterogeneous-data graphical models was originally proposed by Danaher et al. (2014). Whenever groups pertaining to seasons or environments share similar characteristics, production-ecological research has hinted at similar edge values between groups (Hajjarpoor et al., 2021; Zhang et al., 1999; Richards & Townley-Smith, 1987). Consider the case where groups represent different locations. If two groups have very similar environments, both weather patterns and soil properties, many conditional independence relations are expected to be similar between the groups, as the underlying production-ecological relations are assumed to be invariant across (near) identical situations (Connor et al., 2011). Conversely, if the amount of shared characteristics is limited between the groups, the edge values between groups are expected to be different, resulting from the low value for λ2 obtained from a penalty parameter selection method. Moreover, this fused-type penalty has been shown to outperform other types of penalties (Danaher et al., 2014), and, if the data contain only 2 groups, this type of penalty has a very light computational burden, due to the existence of a closed-form solution for (4) once the conditional expectations of the latent variables have been computed. As direct maximisation of $\ell_{\lambda_1,\lambda_2}(\Theta \mid X)$ is not feasible due to the nonexistence of an analytic expression for (4), an iterative method is needed to estimate $\hat\Theta_{\lambda_1,\lambda_2}$. A common algorithm used in the presence of latent variables is the EM algorithm (McLachlan & Krishnan, 2007). A benefit of this algorithm is that it can handle missing data, which is not uncommon in production ecology as plants can die mid-season due to external stresses such as droughts or pests.
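The structure of the two penalty terms in (4) can be made explicit with a small sketch; the fused graphical lasso solver itself is not reproduced here, and the restriction of both sums to off-diagonal entries follows our reading of (4):

```python
import numpy as np

def fused_penalty(thetas, lam1, lam2):
    """Penalty of (4): lam1 * sum_k sum_{j != j'} |theta^(k)_{jj'}|
       + lam2 * sum_{k < k'} sum_{j != j'} |theta^(k)_{jj'} - theta^(k')_{jj'}|."""
    K = len(thetas)
    p = thetas[0].shape[0]
    off = ~np.eye(p, dtype=bool)
    sparsity = sum(np.abs(t[off]).sum() for t in thetas)
    fusion = sum(np.abs((thetas[k] - thetas[l])[off]).sum()
                 for k in range(K) for l in range(k + 1, K))
    return lam1 * sparsity + lam2 * fusion

# Two nearly identical groups: the fusion term is small, the sparsity term is not.
t1 = np.array([[1.0, 0.30], [0.30, 1.0]])
t2 = np.array([[1.0, 0.25], [0.25, 1.0]])
print(fused_penalty([t1, t2], lam1=0.1, lam2=1.0))
```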
The EM algorithm alternates between an E-step and an M-step. During the E-step, the expectation of the (unpenalised) complete-data (both X and Z) log-likelihood, conditional on the event D and the estimate $\hat\Theta^{(m)}$ obtained during the previous M-step, is computed:

$Q\big(\Theta \mid \hat\Theta^{(m)}\big) = \sum_{k=1}^{K} \frac{n_k}{2}\Big[\log\big|\Theta^{(k)}\big| - \mathrm{tr}\big(\bar R^{(k)}\Theta^{(k)}\big)\Big] + \text{const}, \quad (5)$

where $\bar R^{(k)} = \frac{1}{n_k}\sum_{i=1}^{n_k} E\big(Z^{(k)}_i Z^{(k)\top}_i \mid z^{(k)}_i \in D(x^{(k)}_i), \hat\Theta^{(m)}\big)$ is the estimated correlation matrix, built from the first and second moments of a doubly truncated multivariate normal density. Expressions for these moments are given by Manjunath and Wilhelm (2021). Note that $\mathrm{tr}(\bar R^{(k)})$ does not depend on $\Theta^{(k)}$ and was therefore omitted in the last step of the derivation of $Q(\Theta \mid \hat\Theta^{(m)})$. Despite the existence of a functional expression for $E\big(Z^{(k)}_i Z^{(k)\top}_i \mid z^{(k)}_i \in D(x^{(k)}_i)\big)$, Behrouzi and Wit (2019) used two alternative methods to compute this quantity, as directly computing the moments is computationally expensive, even for a moderate p of 50. Accordingly, the faster alternatives proposed are a Gibbs sampling approach and an approximation-based approach. Whereas the former results in better estimates for the precision matrices, the latter is computationally more efficient. The Gibbs sampler is built on the Fortran-based truncated normal sampler in the tmvtnorm R package (Wilhelm & Manjunath, 2022). The Gibbs method consists of a limited number of steps, where the focus lies on drawing N samples from the truncated normal distribution, where $t_{x^{(k)}_i}$ is a vector of length p containing the lower truncation points and $t_{x^{(k)}_i + 1}$ is a vector of length p containing the upper truncation points for observation $x^{(k)}_i$. The method is summarised in Algorithm 1 (Gibbs method). When the Gibbs method is run for the first iteration (m = 1) of the EM algorithm, $\Sigma^{(k)} = I_p$, and otherwise $\Sigma^{(k)} = \big(\hat\Theta^{(m-1),(k)}\big)^{-1}$. Computing the sample mean on simulated data from a truncated normal distribution leads to consistent estimates of $\bar R^{(k)}$ (Manjunath & Wilhelm, 2021). Guo et al. (2015) proposed an approximation method to estimate the conditional expectation of the covariance matrix. When j = j', the diagonal elements of this matrix correspond to the second moment of the truncated latent variable; when j ≠ j', the off-diagonal elements are approximated by the product of the first moments for variables j and j'. The method is summarised in Algorithm 2 (Approximate method), and details can be found in Appendix A. After obtaining an estimate for all $\bar R^{(k)}$, using either the Gibbs method or the approximate method, the M-step commences, which consists of maximising (5) with respect to the precision matrices, subject to the imposed penalties of (4):

$\hat\Theta^{(m+1)} = \arg\max_{\Theta} \sum_{k=1}^{K} \frac{n_k}{2}\Big[\log\big|\Theta^{(k)}\big| - \mathrm{tr}\big(\bar R^{(k)}\Theta^{(k)}\big)\Big] - \lambda_1\sum_{k}\sum_{j \neq j'}\big|\theta^{(k)}_{jj'}\big| - \lambda_2\sum_{k<k'}\sum_{j \neq j'}\big|\theta^{(k)}_{jj'} - \theta^{(k')}_{jj'}\big|,$

which is solved using the fused graphical lasso by Danaher et al. (2014).
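A minimal stand-in for the Gibbs-based E-step is sketched below: it estimates E[Z Zᵀ] for a multivariate normal restricted to a box via coordinate-wise Gibbs sampling. The article's implementation relies on the Fortran-based sampler of the tmvtnorm R package; the sampler below, its defaults and the toy truncation box are ours:

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_truncated_moments(sigma, lower, upper, n_samples=2000, burn=200, seed=0):
    """Estimate E[Z Z^T | lower < Z < upper] for Z ~ N(0, sigma) restricted to a box,
    via a coordinate-wise Gibbs sampler."""
    rng = np.random.default_rng(seed)
    p = sigma.shape[0]
    theta = np.linalg.inv(sigma)              # precision matrix
    z = np.clip(np.zeros(p), lower, upper)    # feasible starting point
    second_moment = np.zeros((p, p))
    kept = 0
    for it in range(n_samples + burn):
        for j in range(p):
            cond_var = 1.0 / theta[j, j]
            cond_mean = -cond_var * (theta[j] @ z - theta[j, j] * z[j])
            sd = np.sqrt(cond_var)
            a = (lower[j] - cond_mean) / sd   # standardised truncation bounds
            b = (upper[j] - cond_mean) / sd
            z[j] = truncnorm.rvs(a, b, loc=cond_mean, scale=sd, random_state=rng)
        if it >= burn:
            second_moment += np.outer(z, z)
            kept += 1
    return second_moment / kept

sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
lower = np.array([0.0, -np.inf])   # e.g. the first margin truncated from below
upper = np.array([1.0,  np.inf])
print(gibbs_truncated_moments(sigma, lower, upper))
```

In the actual E-step these moments are computed per observation, with the box given by the intervals D(x), and then averaged to obtain the matrices R̄⁽ᵏ⁾.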
Model selection

Instead of one penalty parameter, as is typical for graphical models, the copula graphical model for heterogeneous data requires the selection of two penalty parameters. In a predictive setting, the AIC (Akaike, 1973) penalty and cross-validation approaches are commonly applied, whereas the BIC (Schwarz, 1978), EBIC (Chen & Chen, 2012) and StARS (Liu et al., 2010) approaches are designed for graph identification (Vujačić et al., 2015). When considering a grid of λ1 × λ2 combinations of penalty parameters, computational cost becomes crucial. Therefore, unless a coarse grid structure for the penalty parameters is used, or if researchers are willing to spend a substantial amount of time waiting for the "optimal" combination of penalty parameters, information criteria are preferred over computationally intensive methods such as cross-validation and StARS. Two of these, the AIC and the EBIC, are given for heterogeneous data:

$\mathrm{AIC}(\lambda_1, \lambda_2) = \sum_{k=1}^{K}\Big[-2\,\ell^{(k)}\big(\hat\Theta^{(k)}\big) + 2\,\nu^{(k)}\Big],$

$\mathrm{EBIC}(\lambda_1, \lambda_2) = \sum_{k=1}^{K}\Big[-2\,\ell^{(k)}\big(\hat\Theta^{(k)}\big) + \nu^{(k)}\log(n_k) + 4\,\gamma\,\nu^{(k)}\log(p)\Big], \quad (8)$

where $\ell^{(k)}$ denotes the group-k log-likelihood evaluated at the estimate, the degrees of freedom are $\nu^{(k)} = \mathrm{card}\{(i, j) : i < j,\ \hat\theta^{(k)}_{ij} \neq 0\}$ and $0 \le \gamma \le 1$ is a penalty parameter, commonly set to 0.5. The EBIC tends to lead to sparser networks than the AIC penalty (Vujačić et al., 2015).

Simulation studies

To demonstrate the added value of the proposed copula graphical model for heterogeneous mixed data, a simulation study is undertaken using a variety of settings. In this simulation, relevant parameter values are known a priori. Both the Gibbs-based and approximation-based copula graphical models will be evaluated, together with the following models: the fused graphical lasso (FGL) by Danaher et al. (2014) and the graphical lasso (GLASSO) method (Friedman et al., 2008), where the networks are fitted separately for each group. Whilst the joint mixed learning method by Jia and Liang (2020) aims to analyse data similar to those handled by the copula graphical model for heterogeneous data, their R package equSA (Jia et al., 2017) is no longer supported, which, at the time of writing, resulted in constant crashes of the R software when running the joint mixed learning method. As an aside, it should be noted that both the FGL and GLASSO methods assume Gaussian data. However, the data used to compare the methods are of mixed type. Even though Poisson data can be normalised using, for instance, a logarithmic base-10 transformation, ordinal and binary data cannot. For this reason, the data were not normalised. For each combination of network type, 25 datasets are generated consisting of p = {50, 100}, n_k = {10, 50, 100, 500} and K = 3 groups. The combinations of these values for n_k and p result in both high- and low-dimensional scenarios. Moreover, the choice of p is pursuant to the typical number of variables in a production-ecological dataset, and K corresponds to the number of seasons or environments analysed in such data. The networks used in this simulation study are a cluster network, a scale-free network and a random network according to the Erdős–Rényi model (Erdős & Rényi, 1959). The choice for the first network is motivated by the fact that for production-ecological data, when group-membership variables exist, such as environments or seasons, we expect clusters consisting of a large number of within-cluster edges compared to the number of between-cluster edges to arise. The scale-free network represents the opposite of a cluster network, consisting of a relatively high number of edges between clusters and a low number of edges within clusters. As model performance under opposite conditions is also relevant to evaluate, the scale-free network was chosen. The last network choice, the random network, allows the evaluation of the proposed copula graphical model under an unspecified graph structure, where the edge connection probability p_e results in sparse or dense graphs, depending on whether the probability is close to 0 or 1, respectively. Having a model that performs well under assumed sparsity without additional structural network assumptions is useful, as it is not always known a priori what the underlying graph should look like. The data are simulated in the following way: 1. Generate graph G and an (initial) shared precision matrix $\Theta^{(s)}$ according to the type of network (cluster, scale-free or random), by setting the values in Θ that correspond to edges in G to $[-1, -0.5] \cup [0.5, 1]$. 2. Create different precision matrices $\Theta^{(k)}$ for k = 1, . . . , K by randomly filling in ρM zero elements of $\Theta^{(k)}$, where ρ is a dissimilarity parameter and M is the number of nonzero elements in the lower triangle of $\Theta^{(s)}$.
Turn each resulting covariance matrix into the correlation matrix $\Sigma^{(k)}$. From the variable set V = {1, . . . , p}, sample $s_b = \gamma_b p$, $s_o = \gamma_o p$ and $s_p = \gamma_p p$ columns without replacement for the binomial, ordinal and Poisson variables respectively, and assign the remaining $s_g = p - s_b - s_o - s_p$ columns to the Gaussian variables, where $\gamma_i$, $i \in I = \{b, o, p, g\}$, represents the proportion of that variable type occurring in the data. Then $\bigcup_{i \in I} s_i = V$ and $\bigcap_{i \in I} s_i = \emptyset$. 9. Sample latent data $Z \sim N(0, \Sigma^{(k)})$. 10. Generate observed data $X^{(k)}$ from Z and the chosen marginals. In the simulations, we set the distribution proportions γ_b to 0.1, γ_o to 0.5, γ_p to 0.2 and γ_g to 0.2. Moreover, the success parameter for the binomial marginals is set at 0.5, the rate parameter of the Poisson marginals is set at 10 and the number of categories for ordinal variables is set at 6. The edge connection probability p_e for the random network is set at 0.05, resulting in sparse random graphs. The number of clusters in the cluster network is set at 3. Finally, we considered ρ = 0.25 and 1, resulting in respectively similar and different graphs for each group. This results in a total of 3 × 4 × 2 × 2 = 48 unique combinations of settings. Each combination is used to sample 25 different datasets to minimise the effect of randomness on the results.
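Steps 9 and 10 of the simulation design, i.e. pushing latent Gaussian draws through marginal quantile functions, can be sketched as follows (our code, with Bernoulli marginals standing in for the binomial ones and the marginal parameters taken from the settings quoted above):

```python
import numpy as np
from scipy.stats import norm, binom, poisson

def sample_mixed(sigma, n, types, rng):
    """Sample observed mixed-type data X = F^{-1}(Phi(Z)) from latent Z ~ N(0, sigma)."""
    p = sigma.shape[0]
    z = rng.multivariate_normal(np.zeros(p), sigma, size=n)
    u = norm.cdf(z)                              # uniform scores
    x = np.empty_like(u)
    cuts = np.linspace(0, 1, 7)[1:-1]            # 6 ordinal categories
    for j, t in enumerate(types):
        if t == "gaussian":
            x[:, j] = z[:, j]
        elif t == "poisson":
            x[:, j] = poisson.ppf(u[:, j], 10)   # rate parameter 10
        elif t == "binomial":
            x[:, j] = binom.ppf(u[:, j], 1, 0.5) # one trial, success probability 0.5
        elif t == "ordinal":
            x[:, j] = np.digitize(u[:, j], cuts) # categories 0..5
    return x

rng = np.random.default_rng(1)
sigma = np.array([[1.0, 0.4, 0.2, 0.0],
                  [0.4, 1.0, 0.3, 0.1],
                  [0.2, 0.3, 1.0, 0.2],
                  [0.0, 0.1, 0.2, 1.0]])
X = sample_mixed(sigma, n=100,
                 types=["gaussian", "poisson", "binomial", "ordinal"], rng=rng)
print(X[:5])
```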
To evaluate the performance of the models, ROC curves are drawn, consisting of the false positive rate (FPR) and true positive rate (TPR), which respectively represent the rate of false and true edges selected by the model. These are defined as

$\mathrm{FPR} = \frac{\sum_{i<j} I\big(\hat\theta_{ij} \neq 0,\ \theta_{ij} = 0\big)}{\sum_{i<j} I\big(\theta_{ij} = 0\big)}, \qquad \mathrm{TPR} = \frac{\sum_{i<j} I\big(\hat\theta_{ij} \neq 0,\ \theta_{ij} \neq 0\big)}{\sum_{i<j} I\big(\theta_{ij} \neq 0\big)},$

where I(·) is an indicator function. The FPR and TPR are computed by varying λ1 over [0, 1] with a step size of 0.05, whilst fixing λ2 to either 0, 0.1 or 1, resulting in 3 ROC curves per plot (one for each value of λ2), per dataset, per model. Correspondingly, AUC scores are computed for these curves. In addition, the Frobenius and entropy losses are computed and averaged over the groups:

$\mathrm{FL} = \frac{1}{K}\sum_{k=1}^{K} \big\|\hat\Theta^{(k)} - \Theta^{(k)}\big\|_F^2, \qquad \mathrm{EL} = \frac{1}{K}\sum_{k=1}^{K}\Big[\mathrm{tr}\big(\Theta^{(k)\,-1}\hat\Theta^{(k)}\big) - \log\big|\Theta^{(k)\,-1}\hat\Theta^{(k)}\big| - p\Big].$

Aside from the performance measures averaged across the values of λ2, the best-choice performance measure is given: the value of the performance measure that attained the best score, either highest or lowest depending on the measure, for a particular choice of λ2. Results for the random network are given in Table 1 and Figure 2, whereas the results for the cluster and scale-free networks are given in Appendix B in, respectively, Table 2 and Figure 7 and Table 3 and Figure 8.

Figure 2: ROC curves plotted over values of λ1. The line type denotes the penalty for λ2: λ2 = 0, λ2 = 0.1 and λ2 = 1. The colors represent the method used: Gibbs method, Approximate method, Fused graphical lasso and GLASSO. From left to right the values of n are respectively 10, 50, 100 and 500.

The simulation results shown in Table 1 and Figure 2 indicate that the proposed copula graphical model for heterogeneous mixed data in general outperforms the alternative models. Only under very low-dimensional settings does the GLASSO approach attain a better AUC than the proposed method, which is not a setting that typically occurs in real-world data. In high-dimensional settings, the performance of the proposed model is substantially better than that of the alternative models. When groups become more dissimilar, by increasing the dissimilarity parameter ρ, the relative advantage of the proposed method as compared to the GLASSO method becomes less substantial, but this is to be expected, as there is less common information to borrow across the groups. Moreover, in low-dimensional settings, enforcing the precision matrices to be equal across groups (λ2 = 1) results in better performance than when relatively little information is borrowed across groups. Conversely, the opposite phenomenon is observed for simulations where n = p and n < p. When values for λ2 are high, more information is borrowed across graphs, which is of added value when the number of samples per group is low. Conversely, when groups contain enough observations, such as in the rightmost figures, setting λ2 > 0 unnecessarily restricts the graphs, as each group contains enough information for individual estimation. Differences between the Gibbs and approximate methods are also observed: even though both methods select approximately the same number of true and false positive edges, marked differences are present when inspecting the entropy loss results, indicating differences in edge values. The Gibbs method near-consistently outperforms the approximate method, which matches the results obtained by Behrouzi and Wit (2019). Therefore, the Gibbs method is recommended over the approximate method when accuracy is the primary concern of the researcher.

Application to production-ecological data

The Gibbs version of the copula graphical model for heterogeneous mixed data was applied to a real-world production-ecological dataset. The data contain variables pertaining to soil properties (e.g. clay content, silt content and total nitrogen), weather (e.g. average mean temperature and number of frost days), management influences (e.g. pesticide use and weeding frequency), external stresses (pests, diseases) and maize yield. The data were collected in Ethiopia by the Ethiopian Institute of Agricultural Research (EIAR) in collaboration with the International Maize and Wheat Improvement Center (CIMMYT) and were used as part of a larger study (ming) aimed at explaining crop yield variability. The maize data used to illustrate the proposed model consist of measurements taken at two different locations: Pawe in northwestern Ethiopia and Adami Tullu in central Ethiopia, see Figure 3. These two locations have many properties that are not present in the data but can influence the underlying relationships, such as soil water level, air pressure, wind, drainage, etc., as the two locations have different climatic properties and altitude levels (Abate et al., 2015). Crop yield tends to be the principal variable of interest in production ecology and will be the focus of this analysis. Both reported yield (yield) and simulated, water-limited yield (yield theoretical) were present in the data. The difference between these two quantities (i.e. the yield gap) tends to be substantial in Ethiopia, which makes unraveling the factors determining their relationship of interest to production ecologists (van Dijk et al., 2020; Assefa et al., 2020; Kassie et al., 2014; Getnet et al., 2016). The literature (Rabbinge et al., 1990; Connor et al., 2011) frequently identifies the following factors as potentially important for determining yield: total rainfall, planting density, soil fertility, use of intercropping, crop residue, amount of labour used, maize variety, plot size and fertiliser use.
However, establishing the relative importance of these factors under different conditions is not trivial, since many of these factors may interact in complex ways depending on time and place. To gain more insight into the relations underlying yield variation, we apply the proposed copula graphical model for heterogeneous mixed data. The model is fitted using a grid of 11×11 combinations for λ1 and λ2, from which the combination is selected that minimises the EBIC (8) with γ = 0.5. Some general graph properties of the full graphs as seen in Figure 4 are discussed, followed by a discussion of the yield graphs seen in Figure 5.

Figure 4: Results for the 4 networks, with penalty parameters set to λ1 = 0.2, λ2 = 0 as chosen by the EBIC with γ = 0.5. Node size is indicative of the node degree, edge width reflects the absolute value of the partial correlation and color is used to make a visual distinction between positive (green) and negative (red) partial correlations.

Topologically, the graphs consist of a dense cluster of variables mainly consisting of soil and weather properties with some management variables mixed in, a less dense cluster of yield with its neighbours, and some conditionally independent management and external-stress variables. The graphs consist respectively (by increasing group order) of 271, 261, 220 and 253 edges, with average node degrees of 8.60, 8.29, 6.98 and 8.03. However, in order to gain insights into which variables are conditionally dependent on yield, the full graphs are impractical. Therefore, the subgraphs containing the yield variable and its neighbours are given in Figure 5 below. By the Local Markov Property (cf. Lauritzen, 1996), yield is conditionally independent of all variables not shown in the graph given its neighbours, so that Figure 5 is sufficient to infer all conditional dependencies of the yield variable.

Figure 5: Results for the 4 yield networks obtained by applying the Local Markov Property to the yield variable. All variables that share an edge with yield in at least one of the four networks are included in all four networks. Node size is indicative of the node degree, edge width reflects the absolute value of the partial correlation and color is used to make a visual distinction between positive (green) and negative (red) partial correlations.

Central in these graphs is the actual, reported yield. Whereas this variable has many direct relationships in the graphs of Pawe and Adami Tullu in 2010, this is not the case for the other groups, including Pawe in 2013, possibly indicating a temporal interaction. Most results presented in Figure 5 are not unexpected, such as the positive effects of the use of (variety hybrid) seed, and consequently the negative effect of planting a local variety (variety local) (Assefa et al., 2020), and the benefit from the application of N fertiliser and labour input. More surprising perhaps was the negative relationship with the amount of seed found in Adami Tullu in 2010, but this may be a true indication that optimum densities were relatively low for that location and year. The direct relation found in Pawe and Adami Tullu in 2010 between livestock (ownership) and yield is also interesting and may reflect either beneficial effects of their use as draft animals or positive effects on soil fertility, either directly through manure, or through greater resource endowment in general, for which livestock ownership is an indicator (Silva et al., 2019).
In this regard, the negative effect of livestock on the labour input per person, which in turn has a positive effect on yield, could also reflect the beneficial effect of the use of animal power over manual labour. With respect to labour per se, both graphs for Pawe in 2010 and Adami Tullu in 2013 contain an edge between yield and total labour use (totlab persdayha), indicating conditional dependence. Whilst the presence of the edges, and the corresponding positive partial correlations, are not surprising per se, the fact that this relation is only found in Pawe in 2010 and Adami Tullu in 2013 is unusual. As labour is highly seasonal (Silva et al., 2019), and labour seasonality can vary with location, this edge is expected either for Pawe in 2010 and 2013 or for Adami Tullu in 2010 and 2013. This is an example of a conditional dependence that can be explored further by production ecologists. The present analysis is an example of how graphical models can aid in the exploration and understanding of fundamental production-ecological relations. Once other researchers take note of this novel application, methodologies can be tailored which will further the understanding of how yield is influenced.

Goodness of fit

In order to evaluate the stability of the selected network, we apply a bootstrapping procedure to the data, where 200 (row) permutations of the original data are created, the proposed model is fitted over all values for λ1 and λ2, and an optimal model is selected using the EBIC (γ = 0.5). For each graph G_k, k = 1, . . . , 4, edges that occur in over 90% of the 200 bootstrapped G_k graphs (a threshold hereinafter referred to as the acceptance ratio) are compared to the pre-bootstrap fitted graphs, indicating how stable the model performance is across random permutations of the data. The choice of an acceptance ratio of 90% was based on the fact that we are primarily interested in whether the results obtained by the proposed model are stable across small perturbations of the data, and using a high threshold only takes into account edges that can be considered part of the underlying graph. Using an acceptance ratio of 90% for the edges obtained from bootstrapped data, the fitted model described above with λ1 = 0.2 and λ2 = 0 is able to infer all fundamental relations (the discovery rate). This perfect discovery rate remains up until the acceptance ratio is decreased to 70%, where, across all four graphs, the discovery rate decreases to 98.71%. Even when we identify those edges that appear in 50% of the graphs obtained from the bootstrap as fundamental, the discovery rate remains 89.53%. When the acceptance ratio is set even lower, non-fundamental relations are more likely to be included, which is not of interest here. In addition, judging by Figure 6, most edges in the fitted model occur frequently in the bootstrapped graphs, as indicated by the small number of very light red edges. Of particular interest are the edges surrounding the yield variable, which are stable. Therefore, using the EBIC results in stable model selection where fundamental relations are satisfactorily recovered.

Conclusion

Responding to the need of production ecologists for a statistical technique that can shed light on fundamental relations underlying plant productivity in agricultural systems, this article introduces the copula graphical model for mixed heterogeneous data: a novel statistical method that can be used to simultaneously estimate interactions amongst multi-group mixed-type datasets.
The proposed model can be seen as the fusion of two different models: the copula graphical model and the fused graphical lasso. The former extends the Gaussian graphical model with non-Gaussian variables, whereas the latter extends the graphical model to a multi-group setting and enforces similarity between similar groups. The model performs competitively for a myriad of graph structures underlying very different datasets, thereby extending its use beyond production-ecological data, to any mixed-type heterogeneous data. Moreover, the proposed method was applied to a production-ecological dataset consisting of 4 groups, reflecting spatial and temporal heterogeneity, as is typical of production-ecological data. Aside from yield relationships that are typically identified in production-ecological research, the model also found some peculiar relationships, giving motivation for future research. In terms of future statistical research, one recommendation that we give is model selection for multi-group graphical models, in order to facilitate applied researchers. Current approaches do not give theoretical guarantees for these types of models and model selection remains an important part of any statistical application. Another possible research direction is to extend the proposed method by allowing for unordered categorical (nominal) data, which is one of the shortcomings of the copula. Finally, we urge statisticians to develop methodologies that make optimal use of the intricacies that production-ecological data offer and is in line with the goals of production ecologists. Figure 7: ROC curves plotted over values of ฮป 1 . The line type denotes the penalty for ฮป 2 : ฮป 2 = 0 , ฮป 2 = 0.1 and ฮป 2 = 1 . The colors represent the method used: Gibbs method , Approximate method , Fused graphical lasso and GLASSO . From left to right the values of n are respectively 10, 50, 100 and 500. Figure 8: ROC curves plotted over values of ฮป 1 . The line type denotes the penalty for ฮป 2 : ฮป 2 = 0 , ฮป 2 = 0.1 and ฮป 2 = 1 . The colors represent the method used: Gibbs method , Approximate method , Fused graphical lasso and GLASSO . From left to right the values of n are respectively 10, 50, 100 and 500.
From the colour glass condensate to filamentation: systematics of classical Yang–Mills theory

The non-equilibrium early-time evolution of an ultra-relativistic heavy-ion collision is often described by classical lattice Yang–Mills theory, starting from the colour glass condensate (CGC) effective theory with an anisotropic energy-momentum tensor as initial condition. In this work we investigate the systematics associated with such studies and their dependence on various model parameters (IR, UV cutoffs and the amplitude of quantum fluctuations) which are not yet fixed by experiment. We perform calculations for SU(2) and SU(3), both in a static box and in an expanding geometry. Generally, the dependence on model parameters is found to be much larger than that on technical parameters like the number of colours, boundary conditions or the lattice spacing. In a static box, all setups lead to isotropisation through chromo-Weibel instabilities, which is illustrated by the accompanying filamentation of the energy density. However, the associated time scale depends strongly on the model parameters and in all cases is longer than the phenomenologically expected one. In the expanding system, no isotropisation is observed for any parameter choice. We show how investigations at fixed initial energy density can be used to better constrain some of the model parameters.

Introduction

The medium created by ultra-relativistic heavy-ion collisions is characterised by strong collective behaviour. It is generally accepted that a quark-gluon plasma (QGP) is formed and the effective theory describing the multiparticle correlations of this nearly-perfect fluid is relativistic viscous hydrodynamics. For a long time it was believed that the application of hydrodynamic models requires the thermalisation time from the initial non-equilibrium stage of the collision to the QGP to be very short [1,2] compared to the lifetime of the QGP. More recently it was argued that hydrodynamics also applies to a not-yet equilibrated system [3]. From a theoretical point of view, a heavy-ion collision has different stages. As an initial condition, one assumes the colour glass condensate (CGC), i.e. an effective field theory description of boosted, saturated gluons [4]. The resulting strong gauge field dynamics constitutes the first stage of the evolution. A following later stage is then governed by hydrodynamic equations until the medium becomes too dilute for this long-wavelength description. The precise duration of the early stage is not yet known for realistic values of the coupling. Models of the hydrodynamical stage constrain it to be around or less than 1 fm [5]. In this work, we focus on the early-time dynamics of the gauge fields out of equilibrium, where we pursue a purely classical treatment of Yang-Mills theory.
This approach is justified for the infrared modes of gauge fields with a high occupation number, see, e.g., [29–32]. Our goal is to initiate a systematic study of the dependence on a variety of parameters entering through the CGC initial condition as well as the systematics of the classical evolution itself. In particular, we compare a treatment of the realistic SU(3) gauge group with the more economical SU(2), monitor a gauge-invariant definition of the occupation number of field modes to address the validity of the classical approximation, and compare the evolution in a static box with the one in an expanding medium. We also attempt to quantify the dependence of our results on various model parameters introduced in the literature, like the amplitude of initial boost non-invariant fluctuations, an IR cutoff to emulate colour neutrality on the scale of nucleons, as well as a UV cutoff on the initial momentum distribution. Many of these issues have already been addressed one by one when they were introduced, as indicated in the following sections, but not in their interplay, as we attempt to do here. In the next section we summarise the theoretical framework of our approach and give the CGC initial conditions this work is based on. In Sect. 3, we present the numerical results of our simulations, where we extensively elaborate on the underlying parameter space of the CGC. We will see that the system is highly sensitive to the model parameters and suggest a method to reduce the number of free parameters by keeping the system's physical energy density fixed. We also present depictions of the filamentation of the energy density in position space, which results from initial quantum fluctuations and indicates the occurrence of chromo-Weibel instabilities. Section 4 contains our conclusions and an outlook. Some very early stages of this work appeared as a conference proceeding [33].

The anisotropy parameter ξ = a_σ/a_t is the ratio of spatial and temporal lattice spacings, which does not renormalise in the classical limit, and β = 2N_c/g² is the lattice gauge coupling (we choose N_c = 2 and N_c = 3 colours). In the expanding geometry, where we use comoving coordinates τ = √(t² − z²) and η = atanh(z/t), the lattice action involves, in addition, the transverse lattice spacing a_⊥ and the dimensionless rapidity discretisation a_η. Inserting the link variables and expanding around small values of the lattice spacing, one recovers the classical Yang-Mills action in the continuum limit, a_μ → 0. In order to choose canonical field variables and construct a Hamiltonian, we set the temporal links to unity (A_0 = 0), i.e. we are using temporal gauge. The field variables are then the spatial (and rapidity) links and the rescaled dimensionless chromo-electric fields. For the situation in a static box this results in the standard lattice Hamiltonian, with corresponding classical field equations and Gauss constraint. For the expanding case we have the analogous Hamiltonian in comoving coordinates, again with field equations and Gauss constraint. We then consider the time evolution of the classical statistical system whose equilibrium states are determined by the classical partition function. For simulations in equilibrium, initial configurations are generated with a thermal distribution governed by this partition function, and then evolved in t by solving (10) or (13), respectively. For a system out of equilibrium, by definition there is no partition function.
Rather, specific field configurations satisfying the Gauss constraint have to be given by some initial conditions, and are then evolved using the field equations.

Non-equilibrium initial conditions (CGC)

Heavy-ion collisions at high energy density can be described in terms of deep inelastic scattering of partons. The corresponding parton distribution functions are dominated by gluonic contributions, which motivates the description in terms of a colour glass effective theory [4,34]. The gluonic contribution to the parton distribution is limited by a saturation momentum Q_s, which is proportional to the collision energy. When the saturation scale Q_s becomes large there is a time frame where soft and hard modes get separated [35]. The colliding nuclei constitute hard colour sources, which can be seen as static. Due to time dilatation, they are described as thin sheets of colour charge. Choosing z as the direction of the collision, this is usually described in light cone coordinates, $x^{\pm} = (t \pm z)/\sqrt{2}$. The colour charges are distributed randomly from collision to collision. In the McLerran-Venugopalan (MV) model [36] the distribution is taken to be Gaussian, with charge densities

$\langle \rho^a_k(\mathbf{x}_\perp)\,\rho^b_{k'}(\mathbf{y}_\perp)\rangle = g^2\mu^2\,\delta_{kk'}\,\delta^{ab}\,\delta^{(2)}(\mathbf{x}_\perp - \mathbf{y}_\perp).$

Here μ² ∼ A^{1/3} fm⁻² is the colour charge squared per unit area in one colliding nucleus with atomic number A. It is non-trivially related to the saturation scale [37], with Q_s ≈ Q := g²μ. For Pb-Pb or Au-Au collisions, this is larger than the fundamental QCD scale Λ_QCD. We choose a value in the range of expectations for ultra-relativistic heavy-ion collisions at the Large Hadron Collider (Q_s ≈ 2 GeV [38]) and fix Q = 2 GeV for our simulations throughout this paper. Originally the MV model was formulated for a fixed time slice. Later it was realised that, in order to maintain gauge-covariance in the longitudinal direction, this initial time slice has to be viewed as a short-time limit of a construction using N_l time slices, containing Wilson lines in the longitudinal direction [37,39]. In the literature the designation "N_y" is also frequently used for the number of longitudinal sheets, but in order to distinguish it from the lattice extent in y-direction we use N_l instead. The colour charge densities produce the non-Abelian current, and the corresponding classical gluon fields are then obtained by solving the Yang-Mills equations in the presence of those sources, $D_\mu F^{\mu\nu} = J^{\nu}$. For the lattice implementation of this initial condition, we follow [37] and solve

$\big(-\Delta_\perp + m^2\big)\,\phi^{(k)}_v(\mathbf{x}_\perp) = \rho^{(k)}_v(\mathbf{x}_\perp), \qquad (20a)$

with the lattice Laplacian in the transverse plane, $\Delta_\perp f(\mathbf{x}_\perp) = \sum_{i=1,2}\big[f(\mathbf{x}_\perp + a_\perp\hat\imath) + f(\mathbf{x}_\perp - a_\perp\hat\imath) - 2 f(\mathbf{x}_\perp)\big]/a_\perp^2$. The two nuclei are labelled by k = 1, 2, the index v = 1, . . . , N_l indicates the transverse slice under consideration and m is an IR regulator. For m = 0, a finite lattice volume acts as an effective IR cutoff. However, a finite m ∼ Λ_QCD is expected to exist, since correlators of colour sources are screened over distances of Λ_QCD⁻¹, as was initially proposed in [37]. Of course, a determination of this screening length requires the full quantum theory and thus is beyond a classical treatment. We shall investigate the dependence of our results by varying m between zero and some value of the expected order of magnitude. Physically, the parameter m indicates the inverse length scale over which objects are colour neutral in our description, and hence m = 0.1 Q ≈ 200 MeV ≈ 1 fm⁻¹ ≈ 1/R_p, with R_p being the proton radius, is a sensible choice. Although we already have a UV cutoff ∼ 1/a_⊥ from the lattice discretisation, often an additional UV momentum cutoff Λ is used in the literature [19,29,40,41]. It is implemented by neglecting all modes larger than Λ while solving Poisson's equation (20a) in momentum space. There are two ways to interpret this additional UV cutoff. It is sometimes used as a technical trick to maintain an initial spectrum in the IR while allowing a_⊥ to be made smaller, in order to reduce discretisation effects. As we shall see, this is only consistent in the expanding scenario. Alternatively, it can be interpreted simply as an additional model parameter of the CGC, which restricts the colour sources in Fourier space to modes in the IR. Again, we shall investigate how results depend on the presence and size of this parameter.
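A schematic of the momentum-space solution of Poisson's equation (20a), including the IR regulator m and an optional UV cutoff Λ, is given below. It is our illustration only: the normalisation of the colour charge density and the per-colour, per-sheet structure of the actual MV construction are omitted.

```python
import numpy as np

def solve_poisson_2d(rho, a_perp, m=0.0, uv_cut=None):
    """Solve (-laplacian_perp + m^2) phi = rho on a periodic N x N transverse
    lattice via FFT, optionally discarding modes with |k| > uv_cut."""
    N = rho.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=a_perp)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    # lattice Laplacian eigenvalues: khat^2 = (2/a)^2 [sin^2(kx a/2) + sin^2(ky a/2)]
    khat2 = (2.0 / a_perp) ** 2 * (np.sin(kx * a_perp / 2) ** 2 +
                                   np.sin(ky * a_perp / 2) ** 2)
    rho_k = np.fft.fft2(rho)
    denom = khat2 + m ** 2
    phi_k = np.zeros_like(rho_k)
    nonzero = denom > 0
    phi_k[nonzero] = rho_k[nonzero] / denom[nonzero]   # zero mode dropped if m = 0
    if uv_cut is not None:
        phi_k[np.sqrt(kx ** 2 + ky ** 2) > uv_cut] = 0.0
    return np.real(np.fft.ifft2(phi_k))

# Toy numbers: N = 64 lattice, Q = g^2 mu = 2 GeV, QL = 120  =>  a_perp = L / N
Q, N = 2.0, 64
a_perp = (120.0 / Q) / N                 # in GeV^-1
rng = np.random.default_rng(0)
rho = rng.normal(0.0, Q, size=(N, N))    # one colour component of one sheet (illustrative)
phi = solve_poisson_2d(rho, a_perp, m=0.1 * Q, uv_cut=1.7 * Q)
print(phi.shape, phi.std())
```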
To get the transverse components of the collective initial lattice gauge fields, $U_k = \exp(i\alpha^a T^a)$ with $\alpha^a \in \mathbb{R}$, we have to solve N_g equations at each point on the transverse plane, Eq. (22), which relate the Wilson lines of the two individual nuclei to the combined initial links. For the case of N_c = 3 we do this numerically, using the multidimensional root-finding methods of the GSL library [42]. For the case of N_c = 2, one can find a closed-form expression and circumvent this procedure, i.e. (22) reduces to an explicitly solvable equation. The remaining field components follow with the index convention introduced in Sect. 2.1. To make the initial conditions more realistic, fluctuations can be added on top of this background [15,43], which are supposed to represent quantum corrections to the purely classical fields. They are low-momentum modes constructed to satisfy the Gauss constraints (11) and (14), respectively, cf. Eq. (25), where χ_k(x_⊥) are standard Gaussian distributed random variables on the transverse plane. The amplitude of the fluctuations is parametrised by Δ. So far there is no theoretical prediction for its value, which is yet another model parameter we shall vary in order to study its effect on the physical results. Note that, in principle, these modelled fluctuations could be replaced by the spectrum calculated at NLO from the initial conditions, without additional parameters [44]. To our knowledge, this has not been implemented so far, and we first assess the relative importance of fluctuations before attempting such a task.

Setting the lattice scale and size

In a non-equilibrium problem, a scale is introduced by the physical quantity specifying the initial condition. In our case this is the magnitude of the initial colour charge distribution defined in (17), and we follow again [19] in setting the dimensionless combination QL = 120, where L corresponds to the transverse box length in physical units. It is chosen to correspond to the diameter of a Au nucleus with A = 197, R_A = 1.2 A^{1/3} fm ≈ 7 fm. In the LHC literature it is conventional to define the transverse section of the box by π R_A² = L², which then sets the transverse lattice spacing through L = N_⊥ a_⊥. Together with Q = 2 GeV we thus have

$Q\,a_\perp = \frac{QL}{N_\perp} = \frac{120}{N_\perp}. \qquad (26)$

As long as we do not add any term describing quantum fluctuations, the system reduces to a 2D problem and thus the results are independent of a_ζ. For non-vanishing fluctuations in the static box we work with an isotropic spatial lattice, i.e. a_z = a_⊥, whereas our 3D simulations in comoving coordinates are performed at a_η N_η = 2.0, as proposed, e.g., in [45].
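For orientation, the scale setting quoted above translates into the following back-of-envelope numbers (ours, not taken from the paper's tables):

```python
# Illustrative scale setting, following the choices quoted in the text:
# Q = 2 GeV, Q L = 120, N_perp = 400 transverse lattice points.
hbar_c = 0.1973          # GeV fm
Q = 2.0                  # GeV
QL = 120.0
N_perp = 400

L = QL / Q * hbar_c      # transverse box length in fm
a_perp = L / N_perp      # transverse lattice spacing in fm
print(f"L ~ {L:.1f} fm, a_perp ~ {a_perp:.3f} fm, Q*a_perp = {QL / N_perp:.2f}")
# L ~ 11.8 fm (close to sqrt(pi) * R_A ~ 12.4 fm for R_A ~ 7 fm), a_perp ~ 0.030 fm
```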
Observables

Energy density and pressure are convenient observables to investigate the early isotropisation process of the plasma. The system's energy density is the 0th diagonal element of the energy-momentum tensor, T⁰⁰, and can be separated into its evolving chromo-magnetic and chromo-electric components, ε_B and ε_E, respectively, and further into transverse and longitudinal components,

$\epsilon = \epsilon_T + \epsilon_L, \qquad \epsilon_{T|L} = \epsilon_{E_{T|L}} + \epsilon_{B_{T|L}}.$

On the lattice, the chromo-electric and chromo-magnetic contributions to the Hamiltonian density in Cartesian coordinates, H ≡ T^{tt}, and to the lattice Hamiltonian density in comoving coordinates, H ≡ τ T^{ττ}, are built from the chromo-electric field variables and the plaquette terms of the respective Hamiltonians. Summing the transverse and longitudinal components over the lattice then gives the averaged energy density contributions, with the lattice volume V = N_⊥² N_ζ. A suitable measure for isotropisation is given by the ratio of longitudinal and transverse pressure. These are given by the spatial diagonal elements of the energy-momentum tensor,

$P_T = \tfrac{1}{2}\big(T^{xx} + T^{yy}\big) = \epsilon_L, \qquad P_L = T^{zz} = \epsilon_T - \epsilon_L.$

Note that at early times the field component of the longitudinal pressure is negative. This is due to the leading order of the CGC initial condition, which sets P_L to exactly the negative value of P_T [46] and reflects the force of the colliding nuclei. In complete equilibrium both pressures are equal.
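The role of the transverse and longitudinal field components in the two pressures can be illustrated compactly; the decomposition used below is the standard diagonal stress for gauge fields and is consistent with the statement that P_L = −P_T at the initial time (our sketch, not the paper's code):

```python
import numpy as np

def pressures(eps_T, eps_L):
    """Diagonal stress for gauge fields, split into transverse (eps_T) and
    longitudinal (eps_L) field components of the energy density:
    P_L = eps_T - eps_L,  P_T = eps_L  (so that eps = eps_T + eps_L = P_L + 2 P_T)."""
    return eps_T - eps_L, eps_L

# At the initial CGC time only longitudinal fields are present (eps_T = 0):
P_L, P_T = pressures(eps_T=0.0, eps_L=1.0)
print(P_L, P_T, P_L / P_T)    # -1.0  1.0  -1.0 : P_L starts at -P_T, as stated in the text

# Full isotropy corresponds to P_L / P_T -> 1, i.e. eps_T = 2 * eps_L:
print(np.isclose(pressures(2.0, 1.0)[0] / pressures(2.0, 1.0)[1], 1.0))
```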
Validity of the classical approximation

One necessary requirement for a continuum quantum field to behave effectively classically is a high occupation number N of its field modes. In addition, for a classical description to be a good approximation, the IR sector should dominate the total energy of the system, since the classical theory breaks down in the UV. Occupation numbers can be defined unambiguously for free fields, and only then. In the framework of canonical quantisation, and in a fixed gauge, the Fourier modes of the gauge and chromo-electric fields correspond to annihilation and creation operators of field quanta (or gluons) with energy ω(p) = |p|. With proper normalisation, these combine to a number density operator n(p), returning the number of gluons with momenta in (p, p + dp) when acting on an arbitrary Fock state. The (vacuum subtracted) Hamiltonian of the free theory can then be expressed simply by counting the excited field quanta of momentum p,

$H = \int \mathrm{d}^3 p\; \omega(p)\, n(p).$

For interacting fields, the interpretation of their Fourier modes is changed and occupation number cannot be defined rigorously. It is thus a valid concept only for sufficiently weak coupling and weak fields. In this case it has been shown that the energy contribution of the gauge-fixed field modes according to the last equation agrees well with the gauge-invariant energy of the system, see e.g. [49]. In order to study the population of different momentum modes and their contribution to the energy, it is therefore customary to compute the Fourier components of the gauge and/or chromo-electric field, e.g. [47–50], or, close to equilibrium, those of field correlators [40,51]. However, besides the gauge dependence, which gets amplified for interacting and strong fields and causes ambiguities in the interpretation of the momentum distribution, this also introduces a significant computational overhead for the process of gauge fixing, especially for SU(3). For this reason, we turn the procedure around and consider the spectral decomposition of the manifestly gauge-invariant Fourier transform of the total energy density, whose average over equal absolute values of momenta, normalised on that momentum, provides a measure for the population of momentum modes. That is, we define an alternative occupation number density

$n(p) = \frac{1}{V}\,\frac{\langle \epsilon(p)\rangle_p}{\omega(p)}, \qquad (35)$

with the physical volume V and ε(p) ≡ H(p) in the static case and ε(p) ≡ H(p)/τ in the expanding one, respectively. We used the eigenfrequency ω corresponding to the free massless dispersion relation ω(p) ≈ p, as is appropriate for p ≳ 0.1 Q [47]. In the non-interacting limit and Coulomb gauge, this definition results in the same energy density as the gauge-fixed ones used in the literature [47–50]. In the interacting and strong-field case, when occupancy becomes ambiguous, our definition removes any gauge dependence while retaining its physical interpretation based on Fourier modes of the energy density. Here $p \equiv |\mathbf{p}| = \sqrt{p_x^2 + p_y^2 + p_z^2}$, and the average over equal |p| is indicated by the notation $\langle\,\cdot\,\rangle_p$. Another question is up to which energy level the modes of a classical theory provide a good approximation: because of the Rayleigh-Jeans divergence, the UV sector of the classical theory in equilibrium increasingly deviates from that of the full quantum theory, irrespective of occupation numbers. In thermal equilibrium, a UV cutoff is usually fixed by matching a thermodynamical observable between the full and an effective theory. In a non-equilibrium situation, however, it is difficult to identify a scale up to which the classical theory is valid. A common self-consistent procedure then is to demand that the total energy of the system under study is "dominated by infrared modes".

Ordering of scales and parameters

We wish to study the dependence of the classical Yang-Mills system on the lattice spacing and volume, as well as on the various parameters introduced through the CGC initial conditions. For the classical description of the CGC model to be self-consistent, the parameters representing the various scales of the problem have to satisfy

$L^{-1} \le m \le Q \le \Lambda \le a_\perp^{-1}. \qquad (36)$

The original MV model without additional IR and UV cutoffs corresponds to the special case m = L⁻¹ and Λ = a_⊥⁻¹. The dimensionless version of these relations to be satisfied by our lattice simulation is obtained by dividing everything by Q.

Numerical results

Our numerical implementation is based on the well-tested and versatile QDP++ framework [52], which allows for data-parallel programming on high-performance clusters. Unless stated differently, we will use QL = 120 throughout this section. Furthermore, as introduced in Sect. 2.2, the initial conditions in the boost-invariant scenario, i.e. the one without longitudinal fluctuations, are identical in both frameworks. We will therefore present corresponding results for the energy density solely in the expanding formulation, since the counterparts in the static box can easily be derived therefrom due to energy conservation.

SU(2) vs. SU(3)

Performing the calculations for the realistic SU(3) rather than SU(2) gauge theory introduces roughly an additional factor of 3 in terms of computational time, depending on the studied observables. Comparing physical results between the groups is non-trivial, since the ratio Q_s/Q depends on the number of colours, as well as our observables like the energy density.

Fig. 1: Total energy density and its chromo-magnetic and chromo-electric components for SU(2) and SU(3).

For the saturation scale we have Q_s ∼ √N_c Q [37] and for the initial energy density g²ε(t|τ = 0) ∼ N_c N_g [39]. A physically meaningful, dimensionless combination with the leading N_c-behaviour scaled out is thus g²ε/(Q⁴ N_c N_g) plotted vs. √N_c Qτ. In Fig.
1, where we applied this rescaling, 3 we clearly see that there is no significant difference in the observables we are studying. In particular, the sub-leading N c -dependence appears to be much weaker than the sensitivity to the parameters of CGC initial conditions, which will be discussed in Sect. 3.4. These results support early findings on the N c -scaling of classical simulations [53]. We checked this observation for several parameter settings with the same outcome and will therefore focus mostly on SU(2) in the following, in order to reduce the numerical cost. Boundary effects In the MV model, the nucleus is usually "spread" over the whole lattice. This introduces a systematic error when using periodic boundary conditions. However, for our choice of parameters the total diameter of the plane representing the nucleus is about 12 fm, which should be large enough to suppress boundary effects. In Fig. 2 we show the total energy density (times the proper time ฯ„ ) in comoving coordinates for three different scenarios: first, the nucleus is "spread" over the whole 400 2 points on the transverse lattice plane, second, the nucleus is represented by 400 2 lattice points within a 600 2 lattice and third, the same nucleus is embedded in an 800 2 lattice. We observe an effect at the 5%-level. We have explicitly checked that the size of finite volume effects does not change when additional model parameters are introduced, as in the following subsections. 3 In the following, we will keep the scaling factor for the energy density, but we will drop the โˆš N c normalisation factor in front of Qฯ„ in order to ease the comparison with other works, where this is almost always neglected, too. Discretisation effects Ideally, the non-physical scales a ฯƒ or a โŠฅ entering our calculations because of the lattice discretisation should have no effect on our results. On the other hand, a continuum limit does not exist for a classical theory and one has to investigate which values of the lattice spacing are appropriate and to which extent observables are affected by it. For our problem at hand, the transverse lattice spacing is set by the number of lattice points spanning the size of the nucleus, cf. (26). On a coarser lattice less momentum modes are available, which translates into lower initial energy density for a fixed colour charge density Q, as shown in Fig. 3 (top). For a non-expanding system the energy density stays constant, thus implying large discretisation effects. In the expanding system, these differences are quickly diminished below percent level, which in the literature is often interpreted as a sign for continuum-like behaviour. However, this behaviour should not be confused with a proper continuum limit, which does not exist for the classical theory. Rather, the expansion adds more and more infrared modes to the system, thus "diluting" the initial UV modes affected by the lattice cutoff and maintaining the apparent classicality. Note also, that the apparent freedom to choose a lattice spacing results from our ignorance of the detailed physics. While yet unknown, there must be a relation (Q) between energy density and colour charge density for given nuclei and collision energy. The lattice spacing would then be fixed by matching the energy density of the classical system to the physical one, similar to the situation in equilibrium. For our further investigations we will choose a 400 2 lattice, since it is a reasonable compromise between small discretisation effects and computation time. 
As can be seen in Fig. 3 (top), with this choice the discretisation effects are negligible for Qτ ≳ 0.3. We also have to be sure that there are no discretisation effects coming from the numerical integration over the time variable. To this end we vary the anisotropy parameter ξ, with the results for the transverse and longitudinal energy density shown in Fig. 3 (bottom). We used ξ = 20, i.e. a_{t|τ} = 0.05 a_{σ|⊥}, for all the results presented in this work, since this choice leads to negligible systematic errors coming from our time discretisation.

Investigation of the parameters of the CGC initial conditions

In the following we elaborate on the different parameters entering the system's description through the CGC initial conditions.

Number of longitudinal sheets N_l

As shown in [39], the originally proposed initial conditions of the MV model lack randomness within the longitudinal dimension. Fukushima proposed to use N_l sheets of the nucleus rather than only a single one. This is a merely technical parameter coming from the numerical implementation and thus vanishes in continuous time, where N_l → ∞. Figure 4 shows that the total energy density depends strongly on N_l for small values N_l ≲ 30 and then saturates. This effect is amplified by adding an IR cutoff m, leading to a faster saturation for m/Q = 0.1 than for m/Q = 0. This has also been observed in [37] and can be expected: the IR cutoff introduces an additional screening of the colour sources and hence reduces the correlation length also in the rapidity direction. The computation time of the system's initialisation grows linearly with N_l and hence a reasonable choice is N_l = 30, which we set for most of our simulations.

IR cutoff m

As explained in the last section, the IR parameter m provides a simple way to incorporate the colour neutrality phenomenon studied in [54]. While m = 0.1 Q ≈ 1/R_p, with R_p being the proton radius, is a physically motivated choice, the precise value of m/Q has a large effect on the initial energy density, which can be seen in Fig. 5 (top). With a higher cutoff, fewer modes are populated to contribute to the energy density. As studied in [37], the parameter m also affects the ratio Q/Q_s: at N_l = 30 the physical saturation scale Q_s is around 0.85 Q for m/Q = 0.1 and around 1.03 Q for m/Q = 0. Since the energy density is normalised by Q⁴, this difference amounts to about a factor of 2 in the dimensionless quantity ε/Q_s⁴. Since the effect of m is in the infrared, it does not get washed out by the expansion of the system, in contrast to the discretisation effects. Hence a careful understanding of how to fix this parameter is important. For example, one might wonder whether this inverse length scale should not also be anisotropic in the initial geometry. In what follows we will either use m = 0, as in the initial MV model, or the physically motivated choice m/Q = 0.1.

UV cutoff Λ

As discussed in Sect. 2.2, one can apply a UV cutoff Λ while solving Poisson's equation (20a), in addition to the existing lattice UV cutoff. This is an additional model parameter limiting the initial mode population to an infrared sector determined by Λ. Figure 5 (bottom) shows the influence of this parameter on the energy density, which gets reduced because of the missing higher modes in the Poisson equation. This is similar to the observation we made on the IR cutoff m, but with the important difference that the ratio Q/Q_s is independent of Λ [55].
We are not aware of a unique argument or procedure to set this parameter, for the sake of comparison with the literature we choose ฮ›/Q = 1.7 [19] in some of our later investigations. As a welcome side effect, with the emphasis of the infrared modes strengthened, the dependence of the total energy density on the lattice spacing is reduced and the expanding system saturates even faster towards a โŠฅindependent values, cf. Fig. 6 and the previous Fig. 3 (top). The energy density mode spectrum The occupation number of field modes in Fourier space is the most direct and often applied criterion to judge the validity of the classical approximation during the time evolution of the system. It is well-established that, starting from CGC with an additional UV cutoff of ฮ› = 1.7 Q initial conditions, simulations in a static box quickly populate higher modes, implying a breakdown of the classical description beyond some time. In the expanding system this process is considerably slowed down [19,38,56,57]. We confirm these earlier findings by plotting our generalised occupation number as a function of the momentum modes defined via (35). full range of the additional UV cutoff, we deliberately chose ฮ› = Q as its smallest value, cf. (36). One clearly sees that the additional UV cutoff causes a strong suppression of higher modes, thus strengthening the validity of the classical approximation. This is also consistent with the observation from Sect. 3.4.3, that the additional cutoff can be used to weaken discretisation effects. Another observation is that the distribution is rather independent of the IR cutoff value. In Fig. 7 (bottom) we present the evolution of the same initial configuration in the static and expanding framework. While without an additional UV cutoff the distributions nearly reach a plateau in the static box, the occupation of the higher modes in the expanding system stays considerably lower, thus extending the validity of the classical approximation. One can now try to get a quantitative measure of the supposed dominance of infrared modes. By integrating the Fourier modes of the energy density up to some momentum scale, one can infer the energy fraction of the system contained in the modes below that scale, thus assessing the classicality of the mix (see for example [29]). For example, without applying any cutoffs, integrating modes up to 2Q โ‰ˆ 4 GeV contains 65% of the total energy of the system at initial time. At Qt|Qฯ„ = 150, this changes to 60% or 77% in the static and expanding cases, respectively. Hence, the quality of the classical approximation deteriorates only slowly or not at all. Nevertheless, a significant systematic error should be expected when several 10% of the energy is in the UV sector, where a running coupling and other quantum effects should be taken into account. This must certainly be the case when modes 5Q โ‰ˆ 10 GeV get significantly populated, as in Fig. 7. At this stage of the evolution a better description might be obtained by an effective kinetic theory [26][27][28], where quantum effects are already included. Finally, we remark that the Fourier mode distribution of energy density, like occupation number in a free field theory, is also sensitive to the homogeneity of the system in coordinate space: a plane wave with only one momentum mode occupied corresponds to a (finite) delta peak in occupation number, whereas wave packets have broader distributions. 
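The bookkeeping behind statements like "65% of the total energy is contained in modes below 2Q" can be sketched as follows: Fourier transform the energy density, average over shells of equal |p|, divide by ω(p) ≈ |p| for the occupation measure, and integrate the spectral weight up to a chosen scale. The binning, normalisation and placeholder field below are ours and only mimic the definition in (35):

```python
import numpy as np

def binned_spectrum(H, a, nbins=40):
    """Fourier transform a 3D energy-density field H(x) on a cubic lattice,
    average |H(p)| over shells of equal |p| and divide by omega(p) ~ |p|."""
    N = H.shape[0]
    Hk = np.abs(np.fft.fftn(H)) / N**3
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=a)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    p = np.sqrt(kx**2 + ky**2 + kz**2)
    edges = np.linspace(0, p.max(), nbins + 1)
    idx = np.digitize(p.ravel(), edges) - 1
    eps_p = np.bincount(idx, weights=Hk.ravel(), minlength=nbins)[:nbins]
    counts = np.maximum(np.bincount(idx, minlength=nbins)[:nbins], 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # zero-momentum bin carries the volume average; it is divided out by +inf below
    n_p = (eps_p / counts) / np.where(centers > 0, centers, np.inf)
    return centers, n_p, eps_p

def ir_energy_fraction(centers, eps_p, p_max):
    """Fraction of the (binned) spectral weight carried by modes with |p| <= p_max."""
    return eps_p[centers <= p_max].sum() / eps_p.sum()

rng = np.random.default_rng(2)
H = rng.random((32, 32, 32))     # placeholder field; in practice H comes from the simulation
centers, n_p, eps_p = binned_spectrum(H, a=0.03)
print(ir_energy_fraction(centers, eps_p, p_max=centers[len(centers) // 4]))
```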
Isotropisation In this section we add small quantum fluctuations to the initial conditions, as described by Eq. (25). These initial fluctuations lead to an eventual isotropisation of the system, which can be studied via the evolution of the ratio of the pressure components P_L/P_T. To include their effects, we have to extend our two-dimensional analysis by an additional longitudinal direction with N_{z|η} points, increasing the computation time linearly with N_{z|η}. Within our computational budget, this forces us to use smaller lattices (200^3) for this section, thus inevitably increasing the cutoff and finite volume effects we have discussed so far. However, as we shall see, the effects of the model parameters are larger by an order of magnitude. Static box We begin with the static box. The general behaviour of the pressure ratio P_L/P_T has been known for a while and is shown in Fig. 8. After a peak at around Qt ≈ 0.6, an oscillating stage follows until the system isotropises. This oscillating behaviour is well known [18,19] and precludes a hydrodynamical description. [Table 1 caption: The initial total energy density and its relative increase due to the fluctuations for different cutoff setups. First row: no additional cutoff; second row: m/Q = 0.1; third row: m/Q = 0.1 and Λ/Q = 1.7. The statistical errors are all below the 1%-level.] We see a strong finite size effect in N_z, Fig. 8 (top), which decreases for larger values and should vanish in the limit N_z → ∞. For very small values of N_z ≤ 10, the fluctuations cannot evolve and the system behaves as in the unperturbed Δ = 0 case. The dependence on the fluctuation amplitude Δ is studied in Fig. 8 (bottom). In accord with expectation, increasing the fluctuation amplitude Δ reduces the isotropisation time. Note the interesting dynamics associated with this: while for larger initial amplitudes the onset towards isotropisation occurs earlier, the eventual growth of the longitudinal pressure appears to be faster for the smaller amplitudes. The initial fluctuation amplitude Δ also significantly affects the early behaviour of the system, causing a strong change of the pressure ratio and a significant increase of the energy density (∼ Δ^2), as shown in Table 1. Also the frequencies of the plasma oscillations are affected. Of course, increasing the quantum fluctuation amplitude weakens the classicality of the initial condition: for Δ ≥ 0.1 the fluctuations already make up ≥ 20% of the initial energy density. On the other hand, for Δ ≲ 10^-2 there is no visible effect on the pressure ratio at early times (Qt ≲ 20), and also the energy remains the same within numerical fluctuations. The hydrodynamisation time of a heavy ion collision is the time after which hydrodynamics is applicable to describe the dynamics of the system. This is commonly believed to be the case once the pressure ratio P_L/P_T ≥ 0.7. For an initial amplitude of Δ = 10^-2 and without further model cutoffs, this happens at t ≈ 770/Q ≈ 76 fm in our simulations. This value is considerably larger than experimentally expected ones, but it is in line with earlier numerical results in a static box, e.g. [19]. The pressure ratio is highly sensitive both to the additional IR and to the UV cutoff introduced in the initial condition, cf. Fig. 9 (top). Especially the UV cutoff changes the qualitative shape of the curve at early times significantly. Furthermore, both cutoffs considerably slow down the process of isotropisation, as shown in Table 2.
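For orientation, converting the quoted dimensionless time into physical units only requires ħc ≈ 0.197 GeV fm together with Q ≈ 2 GeV (the scale implied by 2Q ≈ 4 GeV above); the helper below is our own one-line illustration of that conversion.

```python
HBARC_GEV_FM = 0.1973  # hbar*c in GeV*fm

def qt_to_fm(Qt, Q_in_GeV=2.0):
    # convert a dimensionless time Q*t into fm/c for a given saturation scale Q
    return Qt * HBARC_GEV_FM / Q_in_GeV

print(qt_to_fm(770.0))  # about 76 fm/c, matching the value quoted in the text
```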
The hydrodynamisation time grows by factors of 2-6 for cutoff values as chosen before. Hence, a better understanding and fixing of those model parameters is mandatory for any quantitative investigation. Note that our tabulated hydrodynamisation times have been obtained by extrapolation. In principle it would be possible to simulate the late stage of the P_L/P_T evolution and compare its details to the predicted power law behaviour observed in other studies [18,41]. However, our discussion of the mode distribution in Sect. 3.5 suggests that at such late times a purely classical evolution might no longer be self-consistent, so we refrained from this computationally very expensive investigation. In accord with Sect. 3.1, we see no significant change in the isotropisation time when using N_c = 3 colours instead of 2, cf. Fig. 9 (bottom). By contrast, the details of the oscillatory behaviour at early times differ. This implies that for the investigation of the properties of collective excitations as in [58], the correct gauge group will eventually be important for quantitative results. Chromo-Weibel instabilities It has been suggested that the apparent rapid thermalisation during heavy ion collisions might be caused by chromo-Weibel instabilities [7,8]. Indeed, the final increase of the pressure ratio towards isotropisation, as observed in Fig. 8, may be attributed to such an instability, as we now show. Firstly, our anisotropic initial conditions imply a fluctuating current, which is a necessary ingredient for the occurrence of a Weibel instability. Secondly, an instability causes a rapid population of harder longitudinal modes, which during the evolution in time spreads to others, as suggested by the mode spectrum in Fig. 7. The most striking illustration of the presence of a chromo-Weibel instability is obtained by observing the chromo-electric and chromo-magnetic energy densities in position space, where filaments caused by the instability are clearly visible, cf. Fig. 10. The pattern at Qt = 0.3 (first row of Fig. 10) for Δ = 10^-2 and Δ = 10^-3 represents the initial fluctuations, which are independent of the longitudinal direction z. At a later time Qt = 90 the chromo-Weibel instability is visible, with filaments that are more pronounced for higher fluctuation seeds. At very late times Qt = 300 the filaments dissolve again. Note how the detailed timing of the growth and decay of the filaments crucially depends on the value of Δ. It is interesting to compare these plots with Fig. 8 (bottom): apparently the dynamical instabilities arise late, after the oscillatory period around the onset to isotropisation. For consistency, we checked that indeed no filamentation arises in the transverse plane, as expected. This holds for all components of both the chromo-magnetic and the chromo-electric energy density. Instead, the average values of the energy densities are random with large fluctuations at early stages, which get smoothed during the time evolution. Expanding system By contrast, in an expanding system, as realised in heavy ion collisions, the pressure ratio does not appear to isotropise after the oscillatory stage but settles at a small or zero value, as shown in Fig. 11. [Figure caption: The red solid lines indicate the constant energy densities corresponding to Λ ∈ {Q, 2Q, 3Q, 4Q} at QL = 120. The yellow solid line refers to the constant energy density contour obtained for QL = 120 without an additional UV cutoff, which is the choice of the majority of previous studies. The gray horizontal line represents the lattice UV cutoff, above which the additional UV cutoff no longer affects the system.]
This is in accord with the findings in [14,15,41] and robust under variation of all model parameters. In particular, it also holds for the largest fluctuation seed considered, cf. Fig. 11 (bottom). Correspondingly, in the expanding system no dynamic filamentation takes place either. Only for fluctuation amplitudes ≳ 10^-1 are filaments forced right from the beginning, since the initial configuration is equivalent to the one we have shown for the static box scenario. The conclusion is that an expanding gluonic system dominated by classical fields according to the CGC does not appear to isotropise and thermalise. For future work it would now be interesting to check whether adding light quark degrees of freedom helps towards thermalisation, as one might expect. Note that the non-thermalisation of the expanding classical system is in marked contrast to simulations of an effective kinetic theory, which predict hydrodynamisation times consistent with phenomenological expectations [28,59] (see, however, [23]). Further studies of the systematics of both approaches are necessary to see whether quantum effects and/or the role of the UV sector are the reason for this discrepancy. Initial condition at fixed energy density Altogether the numerical results of classical simulations show a large dependence on the various model parameters of the CGC initial condition. This creates a difficult situation, because the initial condition and the early stages of the evolution until freeze-out are so far not accessible experimentally. We now propose a different analysis of the simulation data which should be useful in constraining model parameters such as Λ, m and Δ. In a physical heavy ion collision the initial state is characterised by a colour charge density, an energy density and some effective values of Λ, m and Δ. However, these cannot all be independent; rather, we must have ε = ε(Q, Λ, . . .), where the detailed relation is fixed by the type of nuclei and their collision energy. We should thus analyse computations with fixed initial energy density ε L^4, while varying the model parameters. The outcome of such an investigation for Λ and Q are the contour plots shown in Fig. 12. We consider Qτ = 0.3 as well, since then even without an additional UV cutoff the discretisation effects are negligible for N_⊥ = 400, cf. Fig. 3. In the same figure we also compare the situation with an additional IR cutoff as discussed earlier. Thus, to the extent that the energy density as a function of time can be determined experimentally, it should be possible to establish relations between the parameters Q, Λ and m to further constrain the initial state. The same consideration can be applied to study the fluctuation amplitude. Figure 13 shows contours of fixed energy density ε/Q^4 in the (Δ, Λ/Q) plane, where Δ = 0 represents the classical MV initial conditions, i.e., the tree-level CGC description without any quantum fluctuations, and we have chosen QL = 120. Clearly, similar studies can be made for any pairing of the model parameters at any desired time during the evolution and should help in establishing relations between them in order to constrain the initial conditions.
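A minimal sketch of how such a fixed-energy-density constraint could be evaluated in practice is shown below: tabulated values of ε L^4 against Λ/Q at fixed Q and QL (the numbers are invented placeholders, not results from this work) are inverted by interpolation to find the cutoff that reproduces a target initial energy density.

```python
import numpy as np

# invented placeholder table of eps*L^4 versus Lambda/Q at fixed Q and QL;
# in practice these values would come from the classical Yang-Mills runs
lambda_over_Q = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 4.0])
eps_L4 = np.array([2.1e8, 3.0e8, 3.6e8, 4.0e8, 4.2e8, 4.4e8])

target = 3.3e8                        # desired fixed initial energy density
# eps_L4 grows monotonically with Lambda/Q here, so 1D interpolation suffices
lam_star = np.interp(target, eps_L4, lambda_over_Q)
print(f"Lambda/Q ~ {lam_star:.2f} reproduces eps*L^4 = {target:.1e}")
```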
Conclusions We presented a systematic investigation of the dependence of the energy density and the pressure on the parameters entering the lattice description of classical Yang-Mills theory, starting from the CGC initial conditions. This was done in a static box framework as well as in an expanding geometry, and both for N_c = 2 and N_c = 3 colours. After the leading N_c-dependence is factored out, deviations between the SU(2) and the SU(3) formulation are small and only visible in the details of the evolution during the early turbulent stage. This is not surprising in a classical treatment, since in the language of Feynman diagrams most of the subleading N_c-behaviour is contained in loop, i.e. quantum, corrections. Finite volume effects are related to the treatment of the boundary of the colliding nuclei and their embedding on the lattice. Given sizes of ∼ 10 fm, such effects are at a mild 5% level. Note, however, that this effect is larger than the finite size effects of the same box on the vacuum hadron spectrum, as expected for a many-particle problem. The choice of the lattice spacing affects the number of modes available in the field theory and thus significantly influences the relation between the initial colour distribution and the total energy of the system. In the static box, all further evolution is naturally affected by this. Since the classical theory has no continuum limit, the lattice spacing would need to be fixed by some matching condition at the initial stage. By contrast, in the expanding system the energy density quickly diminishes and the effect of the lattice spacing is washed out. A quantitatively much larger and significant role is played by the model parameters of the initial conditions, specifically the additional IR and UV cutoffs affecting the distribution of modes and the amplitude of initial quantum fluctuations, whose presence is a necessary condition for isotropisation. For the static box we presented direct evidence for isotropisation to proceed through the emergence of chromo-Weibel instabilities, which are clearly visible as filamentation of the energy density. However, the hydrodynamisation time is unphysically large and gets increased further by additional IR and UV cutoffs in the initial condition. Without quantitative knowledge of these parameters, the hydrodynamisation time varies within a factor of five. We suggested a method to study the parameters' influence on the system at constant initial energy densities. This allows one to establish relations between different parameter sets that should be useful to constrain their values. Rather strikingly, no combination of model parameters leads to isotropisation in the expanding classical gluonic system. International Center for FAIR within the framework of the LOEWE program launched by the State of Hesse. S.Z. acknowledges support by the DFG Collaborative Research Centre SFB 1225 (ISOQUANT). Data Availability Statement This manuscript has associated data in a data repository. [Author's comment: This is a theoretical work, no experimental data were used.] Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Funded by SCOAP3.
Using supply side evidence to inform oral artemisinin monotherapy replacement in Myanmar: a case study Background In 2012, alarmingly high rates of oral artemisinin monotherapy availability and use were detected along Eastern Myanmar, threatening efforts to halt the spread of artemisinin resistance in the Greater Mekong Subregion (GMS), and globally. The aim of this paper is to exemplify how the use of supply side evidence generated through the ACTwatch project shaped the artemisinin monotherapy replacement malaria (AMTR) projectโ€™s design and interventions to rapidly displace oral artemisinin monotherapy with subsidized, quality-assured ACT in the private sector. Methods The AMTR project was implemented as part of the Myanmar artemisinin resistance containment (MARC) framework along Eastern Myanmar. Guided by outlet survey and supply chain evidence, the project implemented a high-level subsidy, including negotiations with a main anti-malarial distributor, with the aim of squeezing oral artemisinin monotherapy out of the market through price competition and increased availability of quality-assured artemisinin-based combinations. This was complemented with a plethora of demand-creation activities targeting anti-malarial providers and consumers. Priority outlet types responsible for the distribution of oral artemisinin monotherapy were identified by the outlet survey, and this evidence was used to target the AMTR projectโ€™s supporting interventions. Conclusions The widespread availability and use of oral artemisinin monotherapy in Myanmar has been a serious threat to malaria control and elimination in the country and across the region. Practical anti-malarial market evidence was rapidly generated and used to inform private sector approaches to address these threats. The program design approach outlined in this paper is illustrative of the type of evidence generation and use that will be required to ensure effective containment of artemisinin drug resistance and progress toward regional and global malaria elimination goals. concern is that artemisinin-resistant Plasmodium falciparum has been identified on the eastern borders of Myanmar [6] and, most recently, along the border with India [7]. Myanmar is thus critical to global efforts to contain resistance, but the complexities are vast. There is extensive migration in high transmission areas, increasing the spread of resistance. The country has been faced with over 60 years of civil conflict in some border areas. Inadequate investment in the health system-overall and for malaria control in particular-has posed challenges to effective malaria case management [8]. In 2011, there was little information on the private sector in Myanmar, but it was estimated that most malaria treatments in Myanmar were accessed directly from this sector, similar to other markets in the region [9]. Low levels of government investment and conflict, particularly along the remote border areas where malaria is highest, meant that health services were of limited quality or availability. Thus, it was anticipated that local drug sellers and other private sector provider types were typically sought-after for treatment [8]. There was also concern that most malaria was not only self-diagnosed but treated inappropriately with incomplete courses of oral artemisinin monotherapy. The use of oral artemisinin monotherapy, incomplete courses and, in particular, substandard drugs was of concern given emerging drug resistance. 
To address this situation, in 2010, the Government of Myanmar, under the Myanmar artemisinin resistance containment (MARC) framework, developed a comprehensive set of interventions to address the spread of artemisinin resistance in Eastern Myanmar [3]. As part of the private sector initiative, in 2012, Population Services International (PSI), a large, US-based NGO, designed and implemented the evidence-based artemisinin monotherapy replacement (AMTR) malaria project with support from the UK Department For International Development (DFID), the Bill and Melinda Gates Foundation (BMGF) and Good Ventures. At a global level, the AMTR project aimed to ensure a firewall against the spread of artemisinin resistance through South East Asia, India and to Africa; and at a national level, it aimed to increase the availability and supply of high quality, affordable ACT, thus reducing the burden of P. falciparum in the endemic areas and moving the country from control towards elimination of malaria. In its simplest form, the AMTR project approach was to get subsidized, high-qualityassured, first-line ACT into the private sector through key suppliers and distributors, with the aim of squeezing oral artemisinin monotherapy out of the market due to price competition, increased availability of ACT, and other demand-creation activities. This paper documents how two critical research studies informed the development and design of the AMTR project across Eastern Myanmar and describes the program activities that were developed as a result of the malaria landscape in 2010-2012, using evidence that these research studies generated. The studies Prior to the program implementation, two supply side surveys, a supply chain study and an outlet survey were conducted in Eastern Myanmar to investigate the private sector anti-malarial landscape, with the aim of using the findings to inform programming decisions and implementation activities. The studies are described in this sub-section. The supply chain study In 2010, PSI/Myanmar conducted a supply chain study to document the distribution chain for anti-malarials in Eastern Myanmar. The aim of the supply chain study was to provide a snapshot of the current market for antimalarial drugs in Eastern Myanmar: how the supply chain functioned, how the anti-malarial medicines were bought and sold, which anti-malarials were dominating the market, and estimated prices to patients as well as mark-ups. The supply chain study was conducted primarily to provide input into the proposal development for the project and was used as formative research to help shape the project development and rationale. A short, open-ended interview guide was developed to ascertain information from key informants across different levels of the supply chain. Questions included: where the anti-malarials medicines were purchased from, the quantity/amount purchased, the quantity/amount typically distributed, and the type of anti-malarial medicine. Providers who sold to patients were asked the typical selling price. Snowball and purposive sampling methods were used. The assessment utilized a bottom-up approach to identify respondents. Initially, respondents (key informants) at the bottom end of the supply chain were interviewed, namely general retailers and mobile providers. This was the first wave of data collection. A second wave of interviews was implemented with pharmacists and wholesalers that had been identified in the first wave of data collection by the general retailers and mobile providers. 
Finally, a third wave of data collection was implemented with the higher-order wholesalers and distributors referred to by the key informants in the second wave of interviews. The assessment allowed for a rapid assessment of the supply chain. In total, 34 key informants were interviewed: 11 mobile providers, 8 general retailers, 9 pharmacies, 3 wholesalers and 3 importers and distributors. Data were analyzed and transcribed using standard qualitative thematic methods but focused specifically on identifying the main distributor(s) and a diagrammatic qualitative representation of the supply chain structure. The first study took place along the eastern border, the area with the highest risk of drug resistance. A second study was conducted close to the China border, in Kachin and Shan States, to investigate whether medicines from China played a significant part in the local market. Both studies confirmed a very similar pattern. A core finding was that 70 % of the artemisinin monotherapy in the anti-malarial market was supplied by one distributor: AA Medical Products Limited. The main product being supplied by this wholesaler was called ' AA Artesunat ยฎ , (Fig. 1), although other artemisinin monotherapy products were identified such as Artem ยฎ (Artemether injection and tablets), Arthesis ยฎ , AA-Artemether ยฎ and Artemodi ยฎ (all oral artemisinin tablets). This distributor not only had control of the artemisinin monotherapy market but also had a sophisticated distribution mechanism that penetrated outlets that were directly serving patients, such as general retailers and pharmacies. Many providers in towns and cities in the area were supplied directly by the AA Medical Products Limited distribution mechanism. It was also observed that half of the sales were funneled through wholesalers, and the artemisinin monotherapy distribution was therefore relatively centralized. Regarding information on sales from the supplier, almost all of the anti-malarials distributed by AA Medical Products Limited were oral artemisinin monotherapy. It was estimated that more than 100,000 strips (of 12 tablets) of artesunate monotherapy were distributed per month on average-and around 1.2 million doses were imported per annum. Other evidence gathered across the supply chain helped to set the eventual price of the subsidized, quality-assured ACT. Data showed that the average mark-up of anti-malarial drugs across the different levels of the supply chain varied. Among outlets serving patients, it was feasible to determine how many oral artemisinin tablets were typically sold to patients and at what price. It was estimated that the market price of a full adult course of artesunate, comprised of 12 tablets, was around 2400 (Myanmar) Kyat. However, providers reported that they typically only gave two or three tablets to the patient, preferring to sell individual pills rather than a complete dose (with two to three pills sold for around 500 Kyat or $0.40 in 2012). Interviews with providers suggested that each strip of artesunate monotherapy was typically used to treat two to three malaria-suspected patients. This evidence on price of sale to patients, as well as mark-ups along the distribution chain, helped the program to calculate in a backward fashion a recommended retail price for the quality-assured ACT. Other evidence from key informant interviews conducted with patients indicated that the private sector played a major role in malaria treatment in Myanmar. 
Most commonly, patients described purchasing anti-malarial medicines from local private sector outlets and were unlikely to receive a malaria diagnostic test. If high-quality ACT treatment drugs were available at these outlets, they were described as prohibitively expensive. Providers also reported that they gave clients 'what they wanted' , and this was typically artemisinin in tablet form. The root cause of the sale of incomplete treatment regimens was most often described as 'cost' , as patients could not afford a full course. The outlet survey A cross-sectional outlet survey based on the ACTwatch methodology was conducted across 26 townships in Eastern Myanmar between March and May, 2012 [10]. The objective of the ACTwatch outlet survey was to investigate the availability, price and market share of anti-malarial medicines across private sector outlets. The methods and study objectives have been described in detail elsewhere [11][12][13]. Briefly, a census of all outlets with the potential to sell anti-malarials was conducted in selected townships and wards across Eastern Myanmar. Among eligible outlets, that is, outlets that stocked anti-malarial medicines on the day of survey or in the previous 3 months, a full audit of anti-malarials was conducted. Information on the brand and generic name, strength, amount sold in the previous week as well as the price was collected for each anti-malarial medicine. A provider interview assessing provider knowledge and dispensing practices was also implemented. A total of 3658 private sector outlets with the potential to sell anti-malarials were screened. In the context of Myanmar, these included private facilities and community health workers, pharmacies, general retailers and mobile providers (or sometimes described as itinerant drug vendors who do not typically operate from a fixed location and may or may not have some medical background/training). Of the 3658 outlets screened, 32 % stocked at least one anti-malarial at the time of the study, including 82 % of private health facilities (N = 273), 73 % of community health workers (N = 348), 79 % of pharmacies (N = 454), 55 % of mobile providers (N = 290), and 15 % of general retailers (N = 2292). Availability was relatively low among general retailers (village stores and grocery stores) although they were the most numerous outlet type in the census. The availability of quality-assured ACT varied across different anti-malarial-stocking outlet types: private health facility (55 %), community health worker (76 %), pharmacy (7 %), mobile provider (8 %) and general retailer (3 %). Availability of oral artemisinin monotherapy was higher across all key outlet types: pharmacy (86 %), mobile provider (30 %), general retailer (81 %) (see Fig. 2). Priority outlets-pharmacies, general retailers, and mobile providers-were outlets that received additional supportive interventions, namely production promotion through PSI. Observing market share of different classes of antimalarials, oral artemisinin monotherapy accounted for 33 % of the total market share. Within outlets, oral artemisinin was most commonly sold by pharmacies (more than 45 % of the market share), mobile providers (38 % of the market share), and general retailers (32 % of the market share) (Fig. 3). Across these three outlet types, market share of quality-assured ACT was less than 5 % among mobile providers and general retailers, and was reportedly not sold or distributed in the previous week by pharmacies. 
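To make the indicator definitions concrete, the sketch below shows how availability and market share of the kind reported above could be computed from outlet-audit records; the field names, data layout and the omission of ACTwatch's adult-equivalent-treatment-dose weighting are our simplifications, not the study's actual analysis code.

```python
def availability(stocking_outlets, drug_class):
    # % of anti-malarial-stocking outlets with at least one product of the
    # given class in stock on the day of the survey (field names assumed)
    n_with = sum(1 for o in stocking_outlets
                 if any(p["class"] == drug_class for p in o["audited_products"]))
    return 100.0 * n_with / len(stocking_outlets)

def market_share(audit_rows, drug_class):
    # % of total anti-malarial volume reportedly sold/distributed in the
    # previous week that belongs to the given class (AETD weighting omitted)
    total = sum(r["amount_sold_last_week"] for r in audit_rows)
    part = sum(r["amount_sold_last_week"] for r in audit_rows
               if r["class"] == drug_class)
    return 100.0 * part / total

# toy example with invented numbers
rows = [{"class": "oral artemisinin monotherapy", "amount_sold_last_week": 30},
        {"class": "quality-assured ACT", "amount_sold_last_week": 10}]
print(market_share(rows, "oral artemisinin monotherapy"))  # 75.0 here
```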
Provider knowledge of the first-line treatment for malaria was exceptionally low among anti-malarialstocking pharmacies, mobile providers and general retailers (<10 %) compared to private facilities and community health workers (58.2 and 54.9 %, respectively). Another finding was that providers often report cutting blisters of pills, lending to cocktail formulations or other sub-optimal treatment regimens, and this was most common in pharmacies, general retailers and among mobile providers [10]. In summary, the supply side data indicate that there were a number of barriers to ensuring access to the firstline ACT treatment for malaria, including availability and price of ACT. The supply side data indicated wide spread use and distribution of other anti-malarials to treat malaria, in particular oral artemisinin monotherapy, which is acknowledged as a major bottleneck in containing resistance to artemisinin derivatives in Myanmar. The findings from the supply chain and the outlet survey were used to inform the AMTR project strategy for oral artemisinin monotherapy replacement in the private sector and are discussed in the next section. How evidence and policy shifts were used to develop and design the AMTR project activities Enabling environment In 2010, at the time of the supply chain rapid assessment, artemisinin monotherapy was legal and registered by the Government of Myanmar, although the national guidelines encouraged use of a combination therapy. Findings from the supply chain provided evidence regarding the distribution of oral artemisinin monotherapy in Eastern Myanmar. PSI, along with other organizations and partners within the MARC framework, shared the findings with the NMCP program managers as a means to help senior officials within the government advocate for tightening the importation and regulation of artemisinin monotherapy so that only intravenous, intramuscular and rectal suppository formulations of artemisinin for the treatment of severe malaria would be permitted. Subsequently, the importation of oral formulations of artesunate and artemether was banned in Myanmar in December 2011, and August 2012, respectively, by the Food and Drug Administration (FDA). However, oral artemisinin monotherapy was still legal to purchase in Myanmar while existing stocks were used up. Theory of change The AMTR project was also guided by a theory of change, pertaining to how the interventions and activities implemented by PSI would shape the market over time. While it is acknowledged at the time of writing that some of these assumptions have changed based on new epidemiological data, the framework helped to guide and visualize the project implementation and vision of success. The framework was developed using information from the supply chain as well as other key resources from the Ministry of Health and the World Health Organization. The following situation was described (see Fig. 
4): (1) Based on the number of malaria cases visited in public health facilities and the number of annual oral artemisinin monotherapy drugs imported into the country, it was roughly estimated that there were 5 million fever cases in the PSI target area-equivalent to a two-week fever prevalence of 1.8 % among the general population; enhancing the likelihood of selecting resistant strains, and; (6) Reversing this situation by replacing oral artemisinin monotherapy in the private sector with full treatment courses of quality-assured ACT through the AMTR project would then significantly contribute to the resistance-containment efforts. Stopping oral artemisinin monotherapy distribution Based on the findings of the supply chain survey, in 2012, PSI engaged the major private sector supplier of artemisinin, AA Medical Products Limited, in an agreement to purchase highly-subsidized, pre-packaged, qualityassured ACT from PSI. This was expected to rapidly replace oral artemisinin monotherapy in at least 70 % of all private sector malaria treatment providers in Eastern Myanmar, especially given the extensive distribution networks that were already in place. Developing the right products According to national treatment guidelines, the ACT artemether-lumefrantrine (AL) was indicated for the treatment of P. falciparum malaria. WHO prequalified AL tablets were procured through a competitive bid and overbranded with the brand names Supa Arte ยฎ and Artel ยฎ . Both products were first-line treatment for uncomplicated malaria and packaged according to four child/adolescent and adult weight ranges. In addition to this, an unbranded generic ACT was also distributed in the private sector directly to PSI-franchised Sun Quality Health Clinics. Having two different brands helped to ensure that providers further down the supply chain had a choice between different malaria treatments. This facilitated the progressive replacement of oral artemisinin monotherapy by ACT in the private sector and also created healthy competition. Both brands were marked with a quality seal-the Padonmar (lotus leaf ), designed to identify all qualityassured ACT imported and distributed in Myanmar, allowing for the program to promote the anti-malarial medicine rather than a particular formulation, manufacturer, distributor or/and brand. The NMCP and the FDA were also closely involved in this process, as the seal was meant to be used by all partners distributing qualityassured ACT in the country-regardless of the donor funding or project location. Thus, any WHO-approved ACT that was part of the national treatment guidelines was branded and identified using the lotus leaf seal. A key advantage of the quality seal was that it also enabled PSI and other partners to potentially respond to changing national guidelines of first-line ACT without eroding brand equity, should sentinel surveillance suggest that partner drug efficacy was deteriorating. It was also created to help both providers and consumers identify the best possible treatment for P. falciparum malaria. In addition, the 30 % of the market that was not being reached by the key drug distributors would still be able to know that they were accessing a quality, first-line ACT. The quality seal was developed with participation from over 14 stakeholder organizations, including the NMCP, and the content and design were field-tested with the target audience. 
How the quality-assured ACT were manufactured and distributed The quality-assured ACT were procured directly by PSI through a competitive bid. The imported ACT were over-packaged (additional packaging with consumer instructions) and over-branded in Myanmar in PSI's own warehouse (Fig. 5). The over-packaging was done with the aim of deterring health care providers from cutting the blister packets and using ineffective single doses, as evidence from the supply surveys suggested this was a common practice. After over-packaging, AA Medical Products Limited collected the quality-assured ACT branded products from the PSI warehouse and distributed them through their existing networks. Ensuring affordable anti-malarials Both of the quality-assured ACT brands were highly subsidized so that patients were motivated to purchase them over other anti-malarial medicines. The subsidy was provided to the extent that the price of a complete dose of quality-assured ACT was equivalent to the same price of a partial dose (2-3 tablets) of oral artemisinin monotherapy, which was typically taken by febrile patients with suspected malaria. The recommended retail selling price was 500 Kyat, reflecting a subsidy of 80 % of the regular non-subsidized price. The price was determined based on the information from the supply chain survey, including average mark-ups across different levels and the number of tablets typically sold to consumers by lower-level suppliers. Acknowledging that people with suspected malaria can typically afford around 500 Kyat, the program calculated the price in a backward fashion, from the lower-level suppliers to the wholesale/distributor level, to determine a price that AA Medical Products Limited would use to sell to its wholesalers. The recommended retail price for quality-assured ACT was then monitored using the annual outlet surveys, reporting on the percentage of providers that sold a full course qualityassured ACT at or below 500 Kyat. Data from subsequent outlet surveys revealed that this assumption was correct, with the majority of priority outlets (~80 %) selling a full course of quality-assured ACT at or below 500 Kyat. Creating consumer demand At a national level, mass media communication campaigns were implemented to encourage the public to demand quality ACT from drug sellers. These campaigns raised awareness of the need to seek prompt treatment and to take a full course of an effective and affordable ACT and highlighted the problems associated with consumption of counterfeit drugs. At the time of implementation, the majority of febrile patients had been using monotherapies for over a decade. The main purpose of the communication campaign was to create demand for a full course of pre-packaged quality-assured ACT rather than loose tablets (and incomplete doses) of a monotherapy. A key aim was to ensure that patients could easily identify a 'quality-assured' ACT from other products already in the market and being sold by providers, including other ACT that may not be quality-assured. Thus, a core component of the communication campaign was to create demand and awareness of the 'lotus leaf ' quality seal. All communication and campaign messages were around WHO/NMCP-recommended qualityassured ACT, which could be recognized by the lotus leaf quality seal. 
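The backward price calculation described above can be illustrated in a few lines; the mark-up fractions used below are invented placeholders, not the actual figures from the supply chain study.

```python
def distributor_price(target_retail_kyat=500, markups=(0.25, 0.20, 0.15)):
    # Strip one assumed mark-up per level of the chain (wholesaler,
    # mid-wholesaler, retailer) to find the price at which the distributor
    # would need to sell so that the patient pays the target retail price.
    price = float(target_retail_kyat)
    for m in markups:
        price /= (1.0 + m)
    return price

print(round(distributor_price()))  # ~290 Kyat with these assumed mark-ups
```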
To ensure messaging resonated with a variety of target audiences, the communication campaigns included a variety of people, such as migrant workers and rubber tappers, all of whom spoke to the benefits of the quality seal and of adherence to the full course. Targeting the key areas and promoting provider behavior change The AMTR project was national in scope, as the supply chain intervention reached all states and regions with quality-assured ACT. However, in the eastern part of the country, including the MARC high-risk zones (zones identified as most at risk of artemisinin resistance in Myanmar), PSI designated a target area for an intensified communications campaign. The target area covered a population of a little over ten million people and included all of the MARC-defined high-risk zones and surrounding and linking townships. In this target area, ACT sales and mass media were reinforced by pharmaceutical detailing operations through PSI product promoters. Product promoters implemented provider behavior change and interpersonal communication as a means to amplify an increase in ACT uptake in the supply chain. At the project inception, PSI recruited and trained over 75 product promoters to specifically target the different priority outlets [pharmacies, general retailers and mobile providers (see below for further discussion)] and to promote awareness and knowledge of quality-assured, effective ACT among providers from these outlets. [Fig. 5 caption: Over-packaging ACT treatments in the PSI warehouse.] The product promoters spoke local languages, which helped to ensure ethnic minority communities were reached. The product promoters provided information on general malaria transmission and prevention, malaria drug resistance, and stocking and dispensing decisions and also provided key messages around the need to use ACT and the harm associated with incorrect drug regimes. Product promoters also shared information regarding the risks associated with use of oral artemisinin monotherapy and the benefits of prescribing a full course of quality-assured ACT. They specifically discouraged the stocking of oral artemisinin monotherapy and promoted the replacement of existing stocks with quality-assured ACT. Product promoters were also responsible for provision of information, education and communication (IEC) materials pertaining to malaria treatment, as well as job aid materials. They also collected routine sales data of quality-assured ACT and oral artemisinin monotherapy. Product promoters did not sell ACT directly to the priority outlets but did provide free samples of quality-assured ACT on their first visit and then established a link between the outlets and the supply points by providing providers with a list of the wholesalers and mid-wholesalers from the nearest towns or areas where quality-assured ACT could be purchased. They also provided providers with a recommended retail selling price. Product promoters visited all of the AMTR priority outlets in their assigned township every quarter. As well as targeting the priority outlets, product promoters also visited the wholesalers in the surrounding towns to ensure the availability of quality-assured ACT and to identify and report back on any potential stock-out issues. Targeting the 'right' outlets The outlet survey showed that there were five types of non-governmental outlets that sold or distributed anti-malarial medicines in Eastern Myanmar, including oral artemisinin monotherapy.
The census approach used by the ACTwatch study identified the relevance of community health workers, private health workers, pharmacies, general retailers and mobile providers as anti-malarial-stocking outlets. The last three categories were recognized as being largely unregulated, meaning that these outlets did not receive any access to formal training or commodities from the government or other non-governmental organizations. This was also reflected in the outlet survey findings, which demonstrated relatively high availability of oral artemisinin monotherapy and low ACT availability among the anti-malarial-stocking pharmacies, general retailers and mobile providers, high volumes of oral artemisinin monotherapy being distributed by these outlet types, and low levels of knowledge about recommended treatment for malaria. Thus, rather than reaching all outlet types, and given the urgency to implement rapid changes in the supply side across the Eastern areas, PSI product promoters targeted key outlet types: pharmacies, general retailers and mobile providers, with the aforementioned provider supervision and regulation. To engage with mobile providers, details of their fixed location were obtained by the product promoters, and follow-up meetings were planned during the product promoter's visit. As telephone coverage increased, the mobile providers were increasingly contacted by phone to secure follow-up visits. Nationwide sales While a targeted behavior change approach was implemented in Eastern Myanmar, sales of ACT were nationwide (Fig. 6). The rationale for this was both practical and epidemiological. Practically, containing private sector ACT sales within a particular region of Myanmar would have been extremely difficult; experience has shown that market forces draw the product where there is demand, and malaria is widespread in Myanmar, with 70 % of the population living in endemic areas. Epidemiologically, national sales of quality-assured ACT also made the most sense. [Fig. 6 caption: A map of the PSI target and sales area.] Although there was no evidence of resistance outside of the eastern half of the country at the time of the implementation, it was recognized that continued availability and use of oral artemisinin monotherapy in western Myanmar could surely promote both the spread of, and potentially the de novo emergence of, artemisinin resistance. See Fig. 5 for an overview of the distribution system. Discussion The AMTR project has been implemented since 2012, following supply chain study findings that showed high levels of oral artemisinin monotherapy sale and distribution in Myanmar. Since project inception, substantial improvements in the anti-malarial market landscape have been observed and demonstrate the value of a high-level subsidy combined with supportive interventions. Recent findings from three survey rounds (2012, 2013 and 2014) illustrate the success of the project, with notable increases in quality-assured ACT availability and market share and declines in oral artemisinin monotherapy availability and distribution, mainly among priority outlets in the intervention area and, to a lesser extent, among priority outlets in the comparison area [14]. In addition to the outlet survey findings, there are several other lessons from the AMTR project that may be helpful for projects in other malaria-endemic settings struggling with private sector readiness and performance for appropriate malaria case management.
First, while the AMTR project included measuring change in supply side indicators over time, an important component was to influence and monitor the demand side, including patient fever treatment-seeking behavior, ACT treatment for suspected malaria and patient adherence to ACT treatment. However, due to low and declining rates of malaria prevalence in Myanmar, standard methods using population-based surveys to measure these indicators have not been useful in this context [15]. Despite screening several thousand households, sufficient numbers of patients with recent fever and patients who reportedly received anti-malarial treatment have been unobtainable through standard survey sampling methods. To address this issue, project indicators were revised, and demand-side evaluation methodologies were updated. The project now uses research methods to follow up with patients who received an ACT from priority and non-priority outlets to measure adherence. Another challenge relates to drug policy for oral artemisinin monotherapy. Although banned by the FDA, there was no ban on oral artemisinin monotherapy sales or distribution and no mechanism for drug recall. Related to this, companies who were awarded 5-year distribution licenses just prior to the ban could legally continue to distribute for the duration of the license. While the AMTR project sought to remove all oral artemisinin monotherapy from the market, the lack of an enabling environment was a threat to the success of the intervention. Furthermore, the long shelf-life of oral artemisinin monotherapy means that this product can remain on the shelves for a long period. Conversely, the PSI-distributed quality-assured ACT has a shorter shelf life. Given the drop in Myanmar's malaria caseload, providers may be incentivized to stock a malaria medicine with a longer shelf life and for which they can sell at a similar price to quality-assured ACT. In fact, recent outlet survey data from 2015 shows a rise in the availability of oral artemisinin monotherapy from 2014. Indeed, among priority outlets in the intervention area, availability of oral artemisinin monotherapy increased from 10 % in 2014 to 30 % in 2015. Similarly, market share of oral artemisinin monotherapy increased from around 21 % in 2014 to 26 % in 2015 [16]. This shows that, while substantial gains have been made over recent years, oral artemisinin monotherapy still stubbornly persists in Myanmar's antimalarial market, calling for the need for a complete ban on the importation, licensing, sale and distribution of oral artemisinin monotherapy as well as measures to enforce the ban. Rapid diagnostic test (RDT) scale-up has only recently been introduced into the project and has been a challenging project component to implement. Changes in the political leadership and policies on the sale and distribution of malaria RDTs meant that, rather than using the private sector ACT supply chain for RDTs, RDTs had to be distributed free-of-charge in the private sector through product promoters. The AMTR project focused on launching a training program for private health providers to ensure correct RDT use, as well as mass media and interpersonal communication campaigns. The success of this arm of the intervention is yet to be evaluated, but the project's need for flexibility and response to changing policies at the national level is important. Finally, Myanmar's anti-malarial market is constantly changing, making the application of appropriate market development strategies challenging and dynamic. 
Projecting commodities and supply chain management is a constant challenge. With declining malaria prevalence rates and increased coverage of malaria RDTs, the project strives to maintain a balance between the risk of drug expiry versus the risk of ACT stock-out. This is further complicated given challenges with estimating sales and outlet coverage between two distributors and geographical challenges in reaching certain areas of the country that are less accessible due to conflict and instability. There is a need to continually review available evidence and adjust strategies, inputs and targeting.
Effect of taxifolin on cyclophosphamide-induced oxidative and inflammatory bladder injury in rats The role of oxidative stress and inflammation in the pathogenesis of cyclophosphamide-related side effects has been demonstrated in previous studies. This study aimed to investigate the effect of taxifolin, due to its antioxidant and anti-inflammatory properties, on cyclophosphamide-induced oxidative and inflammatory bladder injury in albino Wistar rats. The taxifolin+cyclophosphamide (TCYC) group was given 50 mg/kg of taxifolin orally by gavage. Normal saline was used as a solvent for the cyclophosphamide (CYC) group and the healthy control (HC) group. One hour after taxifolin administration, 75 mg/kg of cyclophosphamide was intraperitoneally injected in the TCYC and CYC groups. This procedure was repeated once a day for 30 days. At the end of this period, biochemical markers were studied in the excised bladder tissues and histopathological evaluations were conducted. In the histopathological evaluation of the CYC group, severe epithelial irregularity, dilatation, congestion, and polymorphonuclear leukocyte accumulation in the vascular structures were observed. Additionally, the malondialdehyde (MDA), tumor necrosis factor-α (TNF-α), interleukin-1β (IL-1β), and interleukin-6 (IL-6) levels, the total oxidant status (TOS), and the oxidative stress index (OSI) values were significantly higher, and the total glutathione (tGSH) levels and total antioxidant status (TAS) were significantly lower in the CYC group in comparison to the HC group (P<0.001). Taxifolin reduced the cyclophosphamide-induced increases in the MDA, TNF-α, IL-1β, and IL-6 levels and the TOS and OSI values; it attenuated the decreases in the tGSH levels and TAS and reduced histopathological damage (P<0.001). Taxifolin may be useful in the treatment of cyclophosphamide-induced bladder damage. Introduction Cyclophosphamide is a nitrogen mustard derivative anticancer drug that is mainly used in the treatment of malignancies, such as Hodgkin lymphoma and non-Hodgkin lymphoma, lymphocytic lymphoma, small lymphocytic lymphoma, Burkitt lymphoma, and multiple myeloma [1]. Breast cancer, diffuse neuroblastomas, retinoblastoma, and ovarian adenocarcinomas are also among the additional FDA-approved indications for its use [1]. As an effective immunosuppressive agent, cyclophosphamide is prescribed for the treatment of autoimmune diseases, such as multiple sclerosis, as well as before transplantation to prevent transplant rejection and graft-vs-host complications [2,3]. However, nausea, bladder cancer, bone marrow suppression, increased opportunistic infections, and hematological and other organ toxicities have been observed during the use of cyclophosphamide [4]. Urotoxicity, such as hemorrhagic cystitis, has also been reported [1,5]. One of cyclophosphamide's main metabolites, acrolein, is thought to be responsible for the drug's bladder toxicity, and Mesna is commonly used to counteract it [6,7]. It is well known that, in some patients, Mesna does not properly protect the bladder from the harmful effects of cyclophosphamide. The increase in the level of malondialdehyde (MDA), a result of lipid peroxidation (LPO), and the decrease in the level of the endogenous antioxidant glutathione (GSH) have been demonstrated to cause the toxic effects of cyclophosphamide on the bladder.
It has also been suggested that cyclophosphamide-associated cystitis is associated with the expression of proinflammatory cytokines, such as nuclear factor-κB (NF-κB), tumor necrosis factor-alpha (TNF-α), interleukin 1 beta (IL-1β), and IL-6 [1,8]. According to evidence reported in the literature, antioxidant and anti-inflammatory medications may be beneficial in the treatment of cyclophosphamide-induced bladder injury. Taxifolin (3,5,7,3′,4′-pentahydroxyflavanone), whose effects against cyclophosphamide-induced oxidative and inflammatory bladder injury are investigated in our study, is an antioxidant flavonoid [9]. Taxifolin is found in plants such as milk thistle, onions, French maritime pine, and tamarind [10]. It has been proven that taxifolin has anti-inflammatory, antitumor, antimicrobial, and antioxidant properties [11]. It has been reported that the antioxidant effect of taxifolin is due to its inhibition of reactive oxygen species (ROS) production [12]. According to our extensive literature search, no previous studies have investigated the preventive effects of taxifolin against cyclophosphamide-induced oxidative and inflammatory bladder injury. Therefore, our study aimed to biochemically and histopathologically investigate the effect of taxifolin on cyclophosphamide-induced oxidative and inflammatory bladder injury in rats. Materials and Methods Animals A total of 18 male albino Wistar rats weighing between 250 and 265 grams were used in the experiment. The animals were obtained from the Erzincan Binali Yıldırım University Medical Experiments Application and Research Center. The animals were housed and fed in a suitable laboratory environment at normal room temperature (22°C) under appropriate conditions (12 h of light and 12 h of darkness). Animal experiments were performed in accordance with the National Guidelines for the Use and Care of Laboratory Animals and were approved by the local animal ethics committee of Erzincan Binali Yıldırım University, Erzincan, Turkey (Ethics Committee No.: 2021/05, dated Dec 30, 2021). Chemicals The cyclophosphamide (a vial containing 1 g of powder for solution for infusion) used in the experiments was supplied by EİP Eczacıbaşı (İstanbul, Turkey), thiopental sodium was supplied by I.E. Ulagay (İstanbul, Turkey), and taxifolin was obtained from Evalar (Moscow, Russia). Experimental groups The rats used in the experiment were divided into three groups: a cyclophosphamide (CYC) group, a taxifolin + cyclophosphamide (TCYC) group, and a healthy control (HC) group. Experiment procedure To implement this experiment, the TCYC group (n=6) was given 50 mg/kg of taxifolin orally by gavage. This dose of taxifolin has been used in animals before and has been found to be effective against oxidative liver damage [13]. Normal saline (0.9% NaCl) was used as the solvent for the CYC (n=6) and HC (n=6) groups. One hour after taxifolin and solvent administration, 75 mg/kg of cyclophosphamide was injected intraperitoneally in the animals in the TCYC and CYC groups. This procedure was repeated once a day for 30 days. At the end of this period, the animals in all three groups were killed with high-dose anesthesia (thiopental sodium, 50 mg/kg) and the bladder tissues were removed.
The MDA, total glutathione (tGSH), TNF-α, IL-1β, and IL-6 levels and the total oxidant status (TOS) and total antioxidant status (TAS) were determined in the removed bladder tissues. The bladder tissues were also examined histopathologically. The biochemical and histopathological examination results were compared between the three study groups and evaluated. Tissue MDA and tGSH determination After measuring the weights of the tissues, samples were homogenized immediately for use in MDA measurements. MDA measurements are based on the method used by Ohkawa et al., which includes the spectrophotometric measurement of the absorbance of the pink-colored complex formed by thiobarbituric acid and MDA [14]. Briefly, 25 µl of tissue homogenate were mixed with 25 µl of 80 g/l sodium dodecyl sulfate and 1 ml of combination solution (20 g/l acetic acid + 1.06 g 2-thiobarbiturate + 180 ml distilled water). The mixture was incubated for 1 h at 95°C. The mixture was centrifuged for 10 min at 4,000 rpm after cooling. The supernatant's absorbance was measured at 532 nm. The standard curve was obtained by using 1,1,3,3-tetramethoxypropane. tGSH measurement was made according to the method described in [15]. Tissue TOS and TAS determination The TOS and TAS results of tissue homogenates were determined using a novel automated measurement method and commercially available kits (Rel Assay Diagnostics, Gaziantep, Turkey), both developed by Erel [16,17]. The separate measurement of different oxidant and antioxidant molecules is not practical. Therefore, measuring the total oxidant or antioxidant capacity of a biological sample and relating the two in a ratio, called an index, is preferred. One of the indexes used for this purpose is the oxidative stress index (OSI). The OSI value is calculated according to the formula OSI (arbitrary units) = TOS (nmol H2O2 eq/mg protein) / TAS (nmol Trolox eq/mg protein) × 100. TNF-α, IL-1β, and IL-6 analysis The weight of the tissue samples was measured; then the samples were rapidly frozen with liquid nitrogen and homogenized using a pestle and mortar. The samples were maintained at 2-8°C after thawing. We added phosphate-buffered saline (pH 7.4) at 1/10 (w/v). Then, the samples were vortexed for 10 s, centrifuged for 20 min at 10,000 × g, and the supernatants were carefully collected. The levels of TNF-α (ng/l), IL-1β (pg/l), and IL-6 (ng/l) were measured using a commercial ELISA kit supplied by Eastbiopharm Co., Ltd. (Hangzhou, China). Histopathological analyses The tissue samples were fixed with 10% formaldehyde for 72 h. Following the fixation process, the tissue samples were washed under tap water in cassettes for 24 h. The samples were then treated with conventional grade alcohol (70%, 80%, 90%, and 100%) to remove the water within the tissues. The tissues were then passed through xylol and embedded in paraffin. Four-to-five micron sections were cut from the paraffin blocks and hematoxylin-eosin staining was administered. The sections were photographed and assessed using the DP2-SAL firmware program and a light microscope (Olympus Inc., Tokyo, Japan). From the serial sections taken, one central and five peripheral areas were selected for semiquantitative scoring, and degeneration criteria were scored in the selected areas for each subject. Urinary bladder tissue damage was defined as the presence of epithelial necrosis, dilatation/congestion, polymorphonuclear leukocyte (PMNL) infiltration, and edema.
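As a quick worked example of the oxidative stress index defined above, using invented TOS and TAS values rather than measurements from this study:

```python
def oxidative_stress_index(tos, tas):
    # OSI (arbitrary units) = TOS (nmol H2O2 eq/mg protein)
    #                         / TAS (nmol Trolox eq/mg protein) * 100
    return tos / tas * 100.0

# invented example values: TOS = 8.0, TAS = 4.0  ->  OSI = 200.0
print(oxidative_stress_index(8.0, 4.0))
```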
TNF-α, IL-1β, and IL-6 analysis
The tissue samples were weighed, rapidly frozen with liquid nitrogen, and homogenized using a pestle and mortar. After thawing, the samples were kept at 2-8 °C. Phosphate-buffered saline (pH 7.4) was added at 1/10 (w/v). The samples were then vortexed for 10 s and centrifuged for 20 min at 10,000 × g, and the supernatants were carefully collected. The levels of TNF-α (ng/l), IL-1β (pg/l), and IL-6 (ng/l) were measured using commercial ELISA kits supplied by Eastbiopharm Co., Ltd. (Hangzhou, China).
Histopathological analyses
The tissue samples were fixed in 10% formaldehyde for 72 h. Following fixation, the tissue samples were washed under tap water in cassettes for 24 h. The samples were then treated with graded alcohol (70%, 80%, 90%, and 100%) to remove the water within the tissues, passed through xylol, and embedded in paraffin. Four-to-five-micron sections were cut from the paraffin blocks and stained with hematoxylin-eosin. The sections were photographed and assessed using the DP2-SAL firmware program and a light microscope (Olympus Inc., Tokyo, Japan). From the serial sections, one central and five peripheral areas were selected for semiquantitative scoring, and the degeneration criteria were scored in the selected areas for each subject. Urinary bladder tissue damage was defined as the presence of epithelial necrosis, dilatation/congestion, polymorphonuclear leukocyte (PMNL) infiltration, and edema. Each sample was scored for each criterion as follows: 0, no damage; 1, mild damage; 2, moderate damage; 3, severe damage. Histopathological assessment and scoring were performed by a pathologist who was blinded to the experimental groups.
Statistical analysis
IBM SPSS Statistics for Windows, Version 22.0, released 2013 (IBM Corp., Armonk, NY, USA) was used for the statistical analysis. The results are presented as mean ± SD. The significance of differences between the groups was determined using one-way analysis of variance (ANOVA) followed by Tukey's test. The level of statistical significance for all tests was set at 0.05.
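The analysis described above (one-way ANOVA followed by Tukey's test at alpha = 0.05) was run in SPSS; the sketch below reproduces the same workflow in Python with hypothetical group values, purely to illustrate the procedure.

```python
# Minimal sketch of the statistical workflow described above (one-way ANOVA followed
# by Tukey's test, alpha = 0.05), using hypothetical MDA-like values rather than the
# study's data. SciPy and statsmodels are assumed to be installed.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical measurements (n = 6 per group)
hc   = np.array([1.9, 2.1, 2.0, 1.8, 2.2, 2.0])
cyc  = np.array([4.8, 5.1, 4.6, 5.3, 4.9, 5.0])
tcyc = np.array([2.3, 2.5, 2.2, 2.6, 2.4, 2.3])

f_stat, p_value = f_oneway(hc, cyc, tcyc)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([hc, cyc, tcyc])
groups = ["HC"] * 6 + ["CYC"] * 6 + ["TCYC"] * 6
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```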
Biochemical findings
As can be seen from Table 1, the amount of MDA in the bladder tissue of the rats in the CYC group was significantly increased in comparison to the HC and TCYC groups, while the amount of tGSH was decreased (P<0.01). There was no statistically significant difference between the HC and TCYC groups in terms of the MDA and tGSH levels (Table 1).
TOS, TAS, and oxidative stress index (OSI) analysis results
In the CYC group, cyclophosphamide increased the TOS value of the bladder tissue and decreased the TAS value. The TOS level was lower in the HC and TCYC groups than in the CYC group (P<0.001), and the TAS value was higher (P<0.001, Table 1). Furthermore, the difference in the TOS values between the HC and TCYC groups was insignificant, whereas there was a significant difference in TAS (P=0.046, Table 1). The OSI value was significantly higher in the CYC group than in the HC and TCYC groups, while the difference in the OSI value between the HC and TCYC groups was insignificant (P>0.05).
TNF-α, IL-1β, and IL-6 analysis results
The TNF-α, IL-1β, and IL-6 levels in the bladder tissue of the animals in the CYC group increased significantly in comparison to the HC and TCYC groups (P<0.001, Table 1). Taxifolin significantly inhibited the increase in the TNF-α, IL-1β, and IL-6 levels induced by cyclophosphamide. The difference in the IL-1β levels between the TCYC and HC groups was insignificant, whereas there were significant differences in the TNF-α and IL-6 levels (P=0.002 and P=0.015, respectively, Table 1).
Histopathological findings
In the bladder tissues of the animals in the HC group, the stratified transitional bladder epithelium, the connective and muscle tissue underneath the epithelium, and the distribution and architecture of the blood vessels in the connective and muscle tissue were all normal (Fig. 1). In the bladder tissues of the animals in the CYC group, at low magnification, epithelial dysregulation, grade-3 dilatation and congestion of the blood vessels in the underlying connective and muscle tissue, grade-3 polymorphonuclear leukocyte (PMNL) accumulation in the blood vessels, and grade-2 PMNL infiltration around the blood vessels were observed (Fig. 2). Furthermore, in the high-magnification images of the CYC group, the stratified transitional epithelium had lost its continuity in some areas, and grade-3 necrotic cells appeared in the areas where it was continuous. In the connective tissue underneath the epithelium, grade-3 edema and fibril separations were identified (Fig. 3). Conversely, in the samples from the TCYC group, the epithelium was continuous, the underlying connective tissue was free of edema, and the fibrils were in a normal arrangement. Mild-to-moderate (grade-1 and grade-2) dilatation and congestion of the blood vessels, as well as grade-1 PMNL infiltration, were observed in the connective tissue (Fig. 4). Table 2 presents the statistical analysis of the histopathological findings in the HC, CYC, and TCYC groups.
Discussion
In this study, the effects of taxifolin on the oxidative and inflammatory bladder damage caused by cyclophosphamide in rats were investigated biochemically and histopathologically. The balance between the oxidant and antioxidant systems in cyclophosphamide-induced oxidative bladder injury has been studied previously [8]. The level of MDA was increased and the level of tGSH was decreased in animals with cyclophosphamide-induced bladder tissue injury, according to Abraham et al. [18]. As previously stated, ROS initiate the lipid peroxidation (LPO) reaction in the cell membrane, which results in the creation of toxic products, such as MDA, from lipids [19]. These findings suggest that cyclophosphamide may have increased the production of ROS in the bladder tissue. In a supporting study, it was reported that cyclophosphamide increased the production of ROS in the bladder tissue [20]. The fact that the MDA levels in the bladder tissue were higher in the CYC group than in the HC group indicates that our experimental results are consistent with the data reported in the literature [21]. It is known that cyclophosphamide, which causes oxidative stress in bladder tissue, reduces endogenous antioxidant levels [22]. The decrease in the amount of tGSH in the bladder tissue to which we applied cyclophosphamide shows that our experimental results are in agreement with previous studies. Tasdemir et al. reported that cyclophosphamide decreased the amount of tGSH and increased the oxidant parameters in the bladder tissue [23]. In the literature, an increase in the oxidant parameters and a decrease in the antioxidant parameters have been associated with ROS overproduction [24]. Superoxide, singlet oxygen, the hydroxyl radical, and hydrogen peroxide are known types of ROS [25]. tGSH is an endogenous antioxidant with ROS-scavenging ability. Disruption of the ROS/GSH balance results in the oxidation of biomacromolecules; this leads to cell cycle arrest and even cell death [26]. To evaluate the total enzymatic and non-enzymatic oxidant and antioxidant parameters, TOS and TAS in the bladder tissue were analyzed and the OSI values were calculated. The TOS and TAS results obtained from the CYC and HC groups confirm that the level of MDA in the bladder tissue was increased and the level of tGSH was decreased. As is known, TOS and TAS reflect the total effects of all antioxidants and oxidants in tissues [16,17]. Our results and this information from the literature reveal that the oxidant/antioxidant balance in the bladder tissue of the animals in the CYC group was shifted toward the oxidants. In our study, it was found that the levels of proinflammatory cytokines, such as TNF-α, IL-1β, and IL-6, significantly increased in the bladder tissue of the rats in the CYC group in comparison to the HC group. In recent studies, it has been reported that the levels of TNF-α, IL-1β, and IL-6 in the bladder tissues of animals with cyclophosphamide-induced hemorrhagic cystitis are increased in comparison to healthy controls; it has also been confirmed histopathologically that these proinflammatory cytokines cause inflammation in the bladder tissue [27]. A recent study by Wróbel et al.
revealed that TNF-α, IL-1β, and IL-6 are important factors in the pathogenesis of cyclophosphamide-associated cystitis [28]. There also appears to be a correlation between the increase in proinflammatory cytokines and the oxidant and antioxidant parameters in cyclophosphamide-induced bladder damage [27]. In our study, significant histopathological damage developed in the bladder tissue of the CYC group, in which the oxidant and proinflammatory parameters increased and the antioxidants decreased. The animals treated with cyclophosphamide experienced severe dilatation and congestion of the bladder blood vessels. It has been reported that cyclophosphamide causes congestion in animal bladder tissue [29]. However, no previous study has reported, on the basis of histopathological examination, that cyclophosphamide causes vasodilation in animal bladder tissue. In a patient treated with cyclophosphamide, capillary dilatation of the bladder mucosa was found on cystoscopy [30]. Many studies support our findings showing PMNL accumulation in the bladder tissue of the cyclophosphamide group. Abd-Allah et al. and Moraes et al. reported that cyclophosphamide causes intense PMNL infiltration in rat bladder tissue [29,31]. As is known, the accumulation of PMNLs in tissues is an indicator of an inflammatory reaction. In our study, edema, which is one of the main signs of inflammation, developed in the connective tissue of the bladder in the CYC group. Other studies have also reported this finding [22,29,32]. Apart from the inflammatory markers, necrosis was determined in the epithelial cells of the bladder tissue of the CYC group. Recent studies contain no information indicating that cyclophosphamide causes necrosis in bladder epithelial cells. However, a previous study stated that cystitis caused by cyclophosphamide is characterized by detachment of the urothelium, edema, microvascular damage, and focal muscle necrosis [33]. In our study, no histopathological damage was observed in the TCYC group except mild and moderate dilatation, congestion, and PMNL infiltration in the connective tissue. The histopathological findings were consistent with the TCYC group's MDA, tGSH, TNF-α, IL-1β, and IL-6 levels and its TOS and TAS values. It is known that the effect of taxifolin against oxidative stress is due to its inhibitory effect on ROS production [12]. It has also been proven that taxifolin has anti-inflammatory, antitumor, antimicrobial, and antioxidant properties [11]. No previous studies have investigated the antioxidant and anti-inflammatory effects of taxifolin on the bladder. It has been documented that taxifolin protects liver tissue from oxidative damage by inhibiting the increase in MDA and TOS and the decrease in tGSH levels and TAS values caused by pazopanib, an anticancer drug [13]. Another study reported that taxifolin protected optic nerve tissue against the oxidative and inflammatory damage of a platinum-derived anticancer drug [34]. It has been argued that the protective effect of taxifolin is due to the inhibition of MDA, TNF-α, and IL-1β overproduction [35]. It has also been shown experimentally that taxifolin does not have a significant effect on the normal oxidant-antioxidant balance in healthy tissue [21]. Taxifolin has also been shown to have cytoprotective properties via activation of nuclear factor erythroid 2-related factor 2 (Nrf2).
In response to oxidative stress, Nrf2 stimulates the transcription of cytoprotective genes, a process regulated by the adaptor protein Kelch-like ECH-associated protein 1 (Keap-1). Physiologically, Nrf2 is present at a low level and is maintained through Keap-1-mediated ubiquitination. Through the Nrf2-dependent pathway, taxifolin increases the production of phase II antioxidant and detoxifying enzymes, which have a critical protective effect against oxidative DNA damage. Importantly, taxifolin enhances heme oxygenase-1 expression by increasing Nrf2 cytoplasmic expression and nuclear translocation. Taxifolin may increase the expression of the Nrf2 gene and its nuclear translocation through epigenetic changes [36,37]. Quercetin is also a flavonoid that exhibits anti-inflammatory, anti-apoptotic, anti-thrombotic, and antitumor properties. The administration of quercetin effectively reduced cyclophosphamide-induced toxicity in multiple organs, including the heart, bladder, gonads, and lungs, in experimental mice. Two prior investigations in the literature found quercetin to be protective against cyclophosphamide-induced hemorrhagic cystitis. These studies indicated that quercetin could help protect the bladder against cyclophosphamide urotoxicity by lowering MDA levels and raising the decreased GSH levels in the bladder, indicating quercetin's antioxidant activity [38,39]. As a result, cyclophosphamide caused oxidative and inflammatory damage in the bladder tissue of rats. Taxifolin attenuated cyclophosphamide-related oxidative and inflammatory bladder injury by inhibiting the increase in oxidants and proinflammatory cytokines and the decrease in antioxidants. The protective effect of taxifolin against cyclophosphamide-induced bladder damage was also demonstrated histopathologically. Our experimental results indicate that taxifolin may be useful in the treatment of bladder damage due to the use of cyclophosphamide. However, histopathological studies at the molecular level are required to clarify the mechanism of the protective effect of taxifolin against cyclophosphamide-induced bladder toxicity. Furthermore, it is important to investigate the role of anti-inflammatory cytokines in the protective action mechanism of taxifolin.
Nutritional Quality and Oxidative Stability during Thermal Processing of Cold-Pressed Oil Blends with 5:1 Ratio of ω6/ω3 Fatty Acids
Growing consumer awareness creates demand for new products that, apart from meeting the basic requirement for macronutrients and energy, have a positive impact on health. This article reports on the characteristics of new oil blends with a nutritious ω6/ω3 fatty acid ratio (5:1), as well as the effect of heat treatment on the nutritional value and stability of the oils. The prepared oil blends were heated at 170 and 200 °C. The fatty acid composition and the changes in tocochromanol content during heating were analyzed, as well as the formation of polar compounds and triacylglycerol polymers. During heating, the highest loss of tocochromanols was characteristic of α-tocopherol and α-tocotrienol. The total content of tocopherols after heating was reduced to 1-6% of the original content in the unheated oil blends. The exception was the blend containing wheat germ oil, in which a high content of all tocopherols was observed in both unheated and heated samples. The content of the polar fraction during heating increased on average 1.9 and 3.1 times in the samples heated at 170 and 200 °C, respectively, compared to the unheated oils. The level of the polar fraction was related to the high content of tocopherols or to the simultaneous presence of tocopherols and tocotrienols in the heated sample. The polymerization of triacylglycerols led mainly to the formation of triacylglycerol dimers; trimers were observed in a small number of heated samples, especially those heated at 200 °C. Regardless of the changes in the heated oils, none of the prepared blends exceeded the limit for polar fraction content, while maintaining the programmed ratio of ω6 to ω3 acids. The principal component analysis (PCA) used to define the clusters showed a large variety among the unheated and heated samples. An outlier in all clusters was the blend containing wheat germ oil; in these samples, the degradation of tocopherol molecules and the increase in triacylglycerol polymers and polar fraction content were the slowest.
Introduction
Oils and fats are important components of the diet, providing not only a significant portion of energy but also lipophilic vitamins (A, D, E, and K) and bioactive compounds [1,2]. The vast majority of the fats we consume are of plant origin [3] and are characterized by a high content of monounsaturated (MUFA) and polyunsaturated fatty acids (PUFA) [4]. The growing interest of consumers in low-processed foods with high nutritional value puts cold-pressed oils in particular demand. In this process, oil is pressed from pre-cleaned seeds under controlled temperature conditions, keeping the screw press temperature below 40 °C. The low temperature reduces the loss of MUFA and PUFA and also limits oxidation, isomerization, and polymerization [35]. Many methods have been described and proposed for assessing the quality of frying fats during frying. Total polar compounds and polymerized triacylglycerols are proposed as the most reliable indicators for monitoring fat changes [36]. To date, there is little literature data on oil blends with a programmed ω6/ω3 acid ratio, especially blends obtained from cold-pressed oils. There is also a lack of data on the changes occurring during the heat treatments commonly used in food production.
Considering the above, the aim of this study was to design and characterize oil blends with a 5:1 ratio of ω6/ω3 fatty acids and to evaluate their stability and the changes occurring during heating. The content of tocochromanols, polar compounds, and triacylglycerol polymers was also assessed.
Materials
The basic materials for the research were seven cold-pressed oils and one refined oil, all commercially purchased. The cold-pressed oils were rapeseed oil, evening primrose oil, camelina oil, black cumin oil, hemp oil, linseed oil, and wheat germ oil; the rice bran oil was refined. All oils were packed in dark glass bottles and stored in a refrigerator at +3 °C until the oil blends were prepared. From the basic oils, 8 blends were prepared, each characterized by a 5:1 ratio of ω6/ω3 fatty acids. The blends were prepared by mixing the appropriate oils in specific proportions in accordance with Polish patent applications [37,38]. The composition of the prepared oil blends is shown in Table 1.
Heating Procedure
Each blend of oils was heated in a thin layer using a steel pan with a diameter of 20 cm. 50 mL of oil was transferred to the pan and heated at 170 °C and 200 °C ± 5 °C. Two different temperatures were chosen to determine the dynamics of oil degradation. The temperature of 170 °C is often chosen as a compromise between a fast frying process, appropriate sensory features, low fat content, and limited fat degradation. Heating in a thin layer promotes uncontrolled overheating of the oil, and a slight increase in temperature can lead to a sharp change in the rate of oil degradation. The heating process was carried out in two independent replications. Magnetic stirrers (IKA RET basic, MS-H-Pro, IKA Works, Inc., Wilmington, NC, USA) were used for the heating process, and the temperature was controlled throughout heating using an electronic thermometer. The heating cycle consisted of two stages: heating to the set temperature and then maintaining that temperature for 10 min. When the heating temperature was 170 °C, the first step lasted 7 min; when it was 200 °C, the first step lasted 9 min. After heating, the oil samples were sealed under nitrogen in dark glass bottles and kept frozen at −24 °C until analysis.
Fatty Acid Composition Analysis
The fatty acid composition was determined according to the AOCS Official Method Ce 1h-05 [39] using an Agilent 7820A GC equipped with a flame ionization detector (FID) (Agilent Technologies, Santa Clara, CA, USA). The oil samples were first dissolved in hexane and transesterified with sodium methylate. After transesterification, the fatty acid methyl esters (FAME) were separated on an SLB-IL111 capillary column (Supelco, Bellefonte, PA, USA) (100 m, 0.25 mm, 0.20 mm). The conditions during the analysis were as follows: the initial oven temperature was 150 °C and was increased to 200 °C at 1.5 °C/min; the injector and detector temperatures were 250 °C; split 1:10; the carrier gas was helium at 1 mL/min. The FAME were identified by comparison with the retention times of a commercially available grain fatty acid methyl ester mix standard (Supelco, Bellefonte, PA, USA). The results were expressed as a percentage of total fatty acids.
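To illustrate how a target ω6/ω3 ratio of 5:1 constrains the blending proportions once FAME profiles are known, the sketch below solves the two-oil mixing equation. The fatty acid percentages are assumptions chosen for illustration and are not the patented blend recipes.

```python
# Minimal sketch, under assumed (hypothetical) fatty acid compositions, of how a
# two-oil blend can be proportioned to hit a target omega-6/omega-3 ratio of 5:1.
# The percentages below are illustrative only, not the patented blend compositions.

def blend_fraction(a_n6, a_n3, b_n6, b_n3, target_ratio=5.0):
    """Mass fraction of oil A so that the blend's n-6/n-3 ratio equals target_ratio."""
    return (target_ratio * b_n3 - b_n6) / ((a_n6 - b_n6) - target_ratio * (a_n3 - b_n3))

# Hypothetical oil A (n-6 rich): 55% linoleic acid, 2% alpha-linolenic acid
# Hypothetical oil B (n-3 rich): 18% linoleic acid, 38% alpha-linolenic acid
x = blend_fraction(55.0, 2.0, 18.0, 38.0, target_ratio=5.0)
n6 = x * 55.0 + (1 - x) * 18.0
n3 = x * 2.0 + (1 - x) * 38.0
print(f"oil A fraction = {x:.2f}, blend n6/n3 = {n6 / n3:.2f}")  # ~0.79, ratio 5.00
```

The same closed-form expression can be reused for any pair of base oils once their linoleic and α-linolenic acid contents have been measured.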
Tocochromanols Analysis
The tocochromanols (tocopherols, tocotrienols, and plastochromanol-8 (PC-8)) content was determined according to Siger et al. [40]. The tocochromanols were analyzed using a Waters HPLC system (Waters, Milford, MA, USA) equipped with a LiChrosorb Si 60 column (250 × 4.6 mm, 5 µm, Merck, Darmstadt, Germany), a fluorimetric detector (Waters 474), and a photodiode array detector (Waters 2998 PDA). The conditions during the analysis were as follows: the mobile phase was a mixture of n-hexane with 1,4-dioxane (96:4, v:v); the flow rate was 1.0 mL/min; the injection sample volume was 10 mL; the excitation wavelength was set at λ = 295 nm and the emission wavelength at λ = 330 nm. The tocochromanols were identified by comparison with the retention times of standards purchased from Merck (>95% purity).
Total Polar Compounds (TPC) Analysis
Total polar compounds in the oil were analyzed according to the AOCS Official Method 982.27 [41]. The oil sample was dissolved in toluene and applied to a glass column packed with silica gel containing 5% water (silica gel 60, 63-200 µm, Sigma-Aldrich, Poznań, Poland). The nonpolar fraction was eluted with a mixture of hexane and diisopropyl ether (82:18, v:v) and collected. After evaporation of the solvent, the nonpolar fraction was weighed, and the polar fraction was calculated from the weight difference between the oil sample and the nonpolar fraction. The results were expressed as % of the oil content.
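The gravimetric TPC determination reduces to a simple weight difference; the following minimal sketch shows the arithmetic with hypothetical masses.

```python
# Minimal sketch of the gravimetric TPC calculation described above: the polar
# fraction is taken as the difference between the oil sample mass and the eluted
# nonpolar fraction mass. The masses used here are hypothetical.

def total_polar_compounds_pct(oil_mass_g: float, nonpolar_mass_g: float) -> float:
    """Polar fraction as % of the oil sample, by weight difference."""
    return (oil_mass_g - nonpolar_mass_g) / oil_mass_g * 100.0

print(f"{total_polar_compounds_pct(oil_mass_g=1.000, nonpolar_mass_g=0.941):.1f} % polar")
```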
Polymerized Triacylglycerols (PTG) Analysis
The triacylglycerol (TAG) polymer composition of the oil samples was determined according to AOCS Official Method 993.25 [42]. The polymer composition was analyzed using an Infinity 1290 HPLC (Agilent Technologies, Santa Clara, CA, USA) equipped with an evaporative light scattering detector (ELSD) and two connected Phenogel columns (100 Å and 500 Å, 300 × 7.8 mm) (Phenomenex, Torrance, CA, USA). The conditions during the analysis were as follows: column temperature 30 °C, detector temperature 30 °C, detector pressure 2.5 bar, and injection sample volume 1 mL. The mobile phase was dichloromethane (DCM) at a flow rate of 1 mL/min.
Calculated Iodine Value (CIV)
The iodine value was determined according to the AOCS Official Method Cd 1c-85 [43] and calculated (CIV) from the fatty acid composition. The calculation is based on the percentages of hexadecenoic acid, octadecenoic acid, octadecadienoic acid, octadecatrienoic acid, eicosenoic acid, and docosenoic acid.
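A CIV computed from a FAME profile is a weighted sum over the unsaturated acids listed above. The sketch below illustrates this with coefficients commonly quoted for AOCS Cd 1c-85 and a hypothetical composition; both the coefficients and the example composition are assumptions and should be checked against the official method before use.

```python
# Minimal sketch of a calculated iodine value (CIV) from a fatty acid profile.
# The per-acid coefficients below are values commonly quoted for AOCS Cd 1c-85;
# they and the example composition are assumptions for illustration only.

CIV_COEFFICIENTS = {
    "C16:1": 0.950,  # hexadecenoic (palmitoleic) acid
    "C18:1": 0.860,  # octadecenoic (oleic) acid
    "C18:2": 1.732,  # octadecadienoic (linoleic) acid
    "C18:3": 2.616,  # octadecatrienoic (alpha-linolenic) acid
    "C20:1": 0.785,  # eicosenoic acid
    "C22:1": 0.723,  # docosenoic (erucic) acid
}

def calculated_iodine_value(fatty_acid_pct: dict) -> float:
    """Sum of (percentage x coefficient) over the unsaturated acids listed above."""
    return sum(CIV_COEFFICIENTS[fa] * pct
               for fa, pct in fatty_acid_pct.items() if fa in CIV_COEFFICIENTS)

# Hypothetical blend composition (% of total fatty acids)
example = {"C16:0": 6.0, "C16:1": 0.2, "C18:1": 45.0, "C18:2": 32.0,
           "C18:3": 6.5, "C20:1": 1.0, "C22:1": 0.3}
print(f"CIV = {calculated_iodine_value(example):.1f}")
```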
Statistical Analysis
All assays were replicated four times. Mean values and standard deviations were calculated with Microsoft Office Excel 2019 (Microsoft Corporation, Redmond, WA, USA). STATISTICA PL 13.3 (Dell Software Inc., Round Rock, TX, USA) was used to calculate standard errors and significant differences between means (p < 0.05, analysis of variance (ANOVA) followed by Tukey's multiple range test). R software (v4.1, with the packages FactoMineR v2.4 and factoextra v1.0.7) was used for the principal component analysis (PCA).
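The study performed the PCA in R (FactoMineR/factoextra). As a schematic illustration of the same workflow, the sketch below standardizes a few hypothetical quality indices and extracts the first two components with scikit-learn; the feature values are invented and only the procedure is meant to correspond.

```python
# Schematic Python equivalent of the PCA step (the study itself used R with
# FactoMineR/factoextra). Feature values are hypothetical; the point is the
# workflow: standardize the quality indices, then inspect the first two components.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

features = ["total_tocopherols", "gamma_tocopherol", "TPC", "dimers", "total_polymers"]
X = np.array([
    [61.2, 40.9,  4.1, 0.0, 0.0],  # hypothetical unheated blend
    [12.4,  9.8,  9.9, 3.1, 3.1],  # hypothetical blend heated at 170 C
    [ 2.1,  1.9, 15.7, 8.4, 9.0],  # hypothetical blend heated at 200 C
    [90.5, 35.0,  5.2, 0.0, 0.0],  # hypothetical tocopherol-rich blend
])

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
print("explained variance ratios:", pca.explained_variance_ratio_.round(3))
print("sample scores (Dim1, Dim2):")
print(scores.round(2))
print("loadings (Dim1, Dim2):")
for name, load in zip(features, pca.components_.T):
    print(f"  {name}: {load[0]:+.2f}, {load[1]:+.2f}")
```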
Characteristics of Oil Blends
Chromatographic analysis of the fatty acid composition confirmed that the prepared oil blends were characterized by an appropriate ratio of omega-6 to omega-3 fatty acids; the ω6/ω3 ratio ranged from 4.86 to 5.13 (Table 1). The dominant fatty acids were oleic acid (C18:1, n-9) and linoleic acid (LA, C18:2, n-6). Among the n-3 fatty acids, α-linolenic acid (ALA) constituted the highest percentage of the fatty acid profile. The contents of C18:1, C18:2, and C18:3 fatty acids, as well as SFA, MUFA, and PUFA, in the oil blends are presented in Figure 1. Long-chain n-3 polyunsaturated fatty acids, especially eicosapentaenoic acid (EPA, 20:5) and docosahexaenoic acid (DHA, 22:6), have several health benefits reported in the literature, especially in relation to cardiovascular and inflammatory diseases [13,16]. There is evidence of a reduction in cardiovascular disease when consuming high doses of EPA [44], as well as a significant effect of DHA on mental health and cognitive functions [45]. Their main sources in the human diet are fish and supplements, e.g., fish oil. In plant-based diets, the supply of DHA and EPA is negligible, which may cause negative health effects [12,17,18]. However, it is possible to synthesize them de novo from a plant-derived precursor (ALA) in the n-3 fatty acid metabolic pathway [46,47]. A sufficiently high intake of ALA in the diet can be a viable substitute for marine sources and protect against deficiencies [48]. The degenerative changes in unsaturated fatty acids can be prevented by antioxidants naturally occurring in oils, in particular tocochromanols, which include tocopherols, tocotrienols, and plastochromanol-8 (PC-8) [49]. The total content of tocochromanols in the analyzed oil blends was in the following order: RBWg > EpCR > CRb > REp > BcHRb > RRb > CRbB > LBcRb (Table 2). The highest content of tocopherols was recorded for RBWg, amounting to 90.5 mg/100 g of oil, while the lowest, amounting to 14.34 mg/100 g, was recorded for LBcRb. In the case of tocotrienols, the highest content was recorded for CRb (24.12 mg/100 g), while no tocotrienols were detected in EpCR and REp. In vitro studies of antioxidant activity indicate that δ-tocopherol is the most active and α-tocopherol the least active; it is worth noting that in in vivo studies the order of activity of the tocopherols is reversed [50]. PC-8 is characterized by much higher antioxidant activity [51], as much as 1.5 times greater than that of α-tocopherol. The highest PC-8 content was found in REp and EpCR, while BcHRb and LBcRb did not contain PC-8. The oxidative stability of oils largely depends on the degree of unsaturation of the fatty acids.
The degree of unsaturation can be assessed with the calculated iodine value (CIV): the greater the CIV, the greater the unsaturation and the greater the susceptibility to oxidation [52]. RRb was characterized by the lowest degree of unsaturation, while the highest CIV was characteristic of the BcHRb sample (Table 2).
Tocochromanols
α-, β-, γ-, and δ-tocopherols and α-, β-, γ-, and δ-tocotrienols are fat-soluble compounds that make up the vitamin E group [53]. Detailed characteristics of the content of tocopherols and tocotrienols, as well as the changes in their content after thermal treatment at 170 and 200 °C, are presented in Table 3. The main tocopherol homologs found in the unheated oils were γ- and α-tocopherol. Their content ranged from 9.45 to 40.89 mg/100 g of oil and from 1.7 to 51.19 mg/100 g of oil, respectively. They constituted 75 to 98.9% of all tocochromanols in the EpCR, REp, RRb, BcHRb, and RBWg oils. In the three remaining samples (LBcRb, CRb, RbB), tocopherols constituted only 40 to 50%; the remaining 50-60% consisted primarily of β- and γ-tocotrienols, whose content ranged from 3.04 to 15.59 mg/100 g of oil and from 2.13 to 19.98 mg/100 g of oil, respectively. The high content of tocotrienols results from the presence of wheat germ oil and black cumin oil in these blends, which are characterized by a high content of tocotrienols [54,55]. Rapid losses of all tocopherols and tocotrienols were observed during heating, both at 170 °C and at 200 °C. Only in the case of RBWg was a high content of all tocopherols observed in both unheated and heated samples. Excluding the RBWg oil sample, the highest loss of tocochromanols was characteristic of α-tocopherol and α-tocotrienol. Heating the samples at 170 °C resulted in a complete loss of these homologs; the only exceptions were the EpCR and CRbB samples, and even there the content of these compounds in the heated samples was only 1 to 6% of the original content in the unheated oils. A similar phenomenon was observed for β-tocopherol, although its original content was very low, ranging from 0.04 to 0.58 mg/100 g of unheated oil. The other tocochromanols (γ- and δ-tocopherol as well as β- and γ-tocotrienols) were still present in samples heated at both 170 and 200 °C, but the content of the individual homologs was drastically reduced. Barrera-Arellano et al. [56] also described the rapid degradation of tocopherols as a result of high temperature. In their research, during heating at 180 °C for 2, 4, 6, 8, and 10 h, they analyzed changes in the content of tocopherols and showed that the higher the loss of these compounds, the greater the content of polymeric triacylglycerols in the heated oil. As in the case of tocopherols, the content of tocotrienols also decreased significantly after the heating process. The tocochromanol content is also important from a nutritional point of view. Each isoform is biologically active, and α-tocopherol is characterized by the highest bioavailability and activity in the body. In addition to the antioxidant activity of all vitamin E isoforms, anti-proliferative, pro-apoptotic, anti-angiogenic, and anti-inflammatory effects have also been described [57]. On the one hand, a high content of tocochromanols can protect the oils against degenerative changes; on the other hand, they are nutrients important for health.
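The retention figures quoted above (e.g., 1-6% of the original content) follow from a simple ratio of heated to unheated content; the sketch below shows the calculation with hypothetical α-tocopherol values.

```python
# Minimal sketch of the retention arithmetic used above: the tocochromanol content
# after heating expressed as a percentage of the unheated content. Values are hypothetical.

def retention_pct(heated_mg_per_100g: float, unheated_mg_per_100g: float) -> float:
    """Remaining share of a tocochromanol homolog after heating, in %."""
    return heated_mg_per_100g / unheated_mg_per_100g * 100.0

# Hypothetical alpha-tocopherol contents (mg/100 g oil) before and after heating
print(f"{retention_pct(0.9, 20.6):.1f} % of the original content retained")
```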
Formation of Polar Compounds during Heating
Heat treatment of oil, e.g., during the frying of food products, causes the formation of many polar compounds. Their type and number depend not only on the properties of a given oil but also on the fried product, its weight and size, the time and temperature of frying, and the presence of antioxidants. The content of total polar compounds in oils is a good indicator of degenerative fat changes [35,58]. The highest content of TPC in the oils before the heating process was found for RRb (5.85%) and the lowest for CRb (1.98%) (Table 4). All samples showed a significant increase in TPC during heating, and the increase was more intense the higher the temperature used. However, in none of the heated samples did the TPC content exceed the limit, which in many countries is set at 24-25%. During heating of the oil samples at 170 °C, the TPC content ranged from 4.75% to 9.85%, in the CRb and REp samples, respectively. In the oil samples heated at 200 °C, the TPC content ranged from 8.45% (CRb) to 15.74% (REp). In most of the heated oil blends, the TPC content was related to the original TPC content of the unheated oil: a higher initial TPC content led to a higher level of this index after the heating process. The average increase in the content of the polar fraction in the heated samples was 1.9 times and 3.1 times for the samples heated at 170 °C and 200 °C, respectively. The greatest increase in TPC level at 170 °C was observed in the REp and CRb samples, where the TPC content was 2.3 times higher than that of the unheated sample. During heating at 200 °C, the highest increase, 4.3 times, was characteristic of the mixture of camelina oil and rice bran oil (CRb). However, it must be remembered that despite the highest increase in TPC, the CRb sample was characterized by the lowest content of polar compounds, both in the unheated mixture and after heating at either temperature. The level of TPC in unheated oils may depend both on the raw material used and on the method and conditions of oil pressing. The increase in TPC content during heating is the result of the transformation of fatty acids through oxidation, hydrolysis, and thermal polymerization, which can be observed during frying or heating of oils [32,59]. However, the final content of polar compounds depends not only on the fatty acid profile of the oil but also on the content of antioxidants (tocochromanols, polyphenols) and other protective substances such as plant sterols. The stability of oils depends not only on the presence of these compounds in the oil but also on their concentration, structure, and synergistic interactions [60,61]. The results shown in Table 4 may confirm this thesis. The samples with the highest content of tocotrienols and an average tocopherol content (LBcRb, CRb, and CRbB) had a lower content of TPC after the end of heating. When the sample contained a very high content of tocopherols (4 times higher) and a small amount of tocotrienols (the RBWg sample), a similar result was obtained. When the oil samples contained only an average content of tocopherols and no tocotrienols (EpCR and REp), they were characterized by the highest level of TPC. According to the data published by Seppanen et al. [60], tocotrienols are characterized by a higher antioxidant activity than tocopherols, and their different isoforms also show different activity; e.g., γ-tocotrienol has a higher antioxidant activity than α-tocotrienol [62].
The high content of tocopherols in the analyzed oil blends was as effective in protecting the oils against degenerative changes as the smaller amounts of tocotrienols. Moreover, synergistic interactions between tocochromanols are also possible, enhancing the protective properties of these compounds [63]. Nogala-Kałucka et al. [64] indicate that the antioxidant activity of tocochromanols may be increased through synergistic interactions with phenolic compounds, which are also present in cold-pressed oils [7]. (Table 4 note: composition of the blends as shown in Table 1; values are means of four determinations ± SD; means in the same row followed by different lowercase letters differ significantly (p < 0.05) between heating temperatures for the same sample, and means in the same column followed by different capital letters differ significantly (p < 0.05) between samples at the same heating temperature.)
Polymerized Triacylglycerols (PTG)
Polymerization of triacylglycerols is one of the undesirable reactions that occur in oils during thermal treatment. As a result of this reaction, compounds with a high molecular weight (ranging from 692 to 1600 Da) are formed through -C-C-, -C-O-C-, and -C-O-O-C- bonds [65]; these remain in the oil and are absorbed into the fried food, becoming part of the human diet. Dimerization and polymerization during the frying process are radical reactions. The radicals formed in these processes have a negative effect on oxidative stress in the intestine, and their quantity may be associated with many metabolic diseases [66]. In addition, the polymers formed during frying are oxygen-rich, and oxidized polymer compounds accelerate further oxidation of the oil [67]. A high content of polar compounds also increases the viscosity of the oils, extends the frying process, and increases the fat content of fried food [68]. PTG were not observed in the unheated oils. The formation of polymers is significantly influenced by, among other factors, the temperature of the thermal treatment, but also by the heating time and the fatty acid composition [69]. A high PUFA content in oils significantly increases the rate of PTG formation [70]. An increase in polymer content was characteristic of samples heated at 170 and 200 °C, and the content of individual polymers and their total content depended on the type of sample and the heating temperature (Figure 2). When the blends were heated at 170 °C, dimers were observed in 7 of the 8 oil samples; no dimers were found in the mixture of rapeseed, black cumin, and wheat germ oils (RBWg). In the remaining oils, dimers constituted from 1.38% to 4.36%. The lowest share of TAG dimers was characteristic of the LBcRb oil sample and the highest of EpCR. When the oil blends were heated at 200 °C, TAG dimers were present in all samples. Increasing the process temperature also increased the content of this polymer fraction in the oils. The share of TAG dimers in these samples ranged from 2.33% (RBWg) to 10.23% (CRbB). Compared to the samples heated at 170 °C, the increase was 1.53 to 7.4 times. The lowest increases were observed in the EpCR and REp samples (1.53 and 1.62 times, respectively), and the highest increase was characteristic of the CRbB sample. TAG trimers were observed only in a small number of samples. In the samples heated at 170 °C, trimers were found in only two oil blends, EpCR and REp.
At a temperature of 200 °C, trimers were observed in four samples: EpCR, REp, CRb, and CRbB. The higher temperature also resulted in a higher proportion of trimers in the total polymers. The highest trimer level was observed in the REp oil sample heated at 200 °C. The lowest content of PTG, found in the blend containing rapeseed, black cumin, and wheat germ oils (RBWg), may result from this blend having the highest tocochromanol content among the unheated oil samples. Due to the very high content of tocopherols (90.5 mg/100 g of oil) and PC-8 (1.03 mg/100 g of oil), this sample contained 1.7 to 2.6 times more tocochromanols than the other samples; the tocochromanol content of the RBWg sample was 94.24 mg/100 g of oil. In their research, Lampi and Kamal-Eldin [71] analyzed the influence of α- and γ-tocopherols on the inhibition of the polymerization process and showed that γ-tocopherol is characterized by much higher effectiveness as a polymerization inhibitor. It is also argued that γ-tocotrienol has a higher antioxidant activity than α-tocotrienol, and that tocotrienols are better antioxidants than the corresponding tocopherols [62]. Therefore, the process of PTG formation is influenced not only by the PUFA contained in the oil blends but also by the antioxidants that inhibit the polymerization process.
Principal Components Analysis
Principal component analysis (PCA) was applied to identify possible clusters among the oil blends heated at 170 °C and 200 °C. The first two principal factors accounted for 65.9% (Dim1 = 44.1% and Dim2 = 21.8%) of the total variation. The PCA results showed differences between the individual unheated samples and those heated at 170 °C and 200 °C (Figure 3). Factor 1 was mainly correlated with the total tocopherol content (r = 0.877) and the γ-tocopherol content (r = 0.744); it was also negatively correlated with the total polymer content (r = −0.879), the dimer content (r = −0.878), and the total polar compounds content (r = −0.841). Factor 2 was mainly correlated with the β-tocopherol content (r = 0.709). The data shown in the score plot are divided into three groups. The first group (red) includes the samples of unheated oil blends; the two remaining groups contain the samples heated at 170 °C (blue) and 200 °C (yellow). For the unheated samples, two subgroups and one outlier were observed. The first subgroup contained 3 samples (REp, EpCR, and RRb) and was located close to the x axis.
These samples differed from the others in their high content of α-tocopherol (16.61-20.62 mg/100 g of oil) and total tocopherols (43.84-61.25 mg/100 g of oil). The second subgroup, located under the x axis and to the right of the y axis, comprised 4 samples (BcHRb, LBcRb, CRb, and CRbB), in which the α-tocopherol content ranged from 1.70 to 9.71 mg/100 g of oil. The outlier sample, located high above the x axis and to the right of the y axis, was the blend containing wheat germ oil. This blend was characterized by the highest content of α-tocopherol (51.19 mg/100 g of oil) and the highest total content of tocopherols (106.50 mg/100 g of oil). In the two remaining groups (heated oils), much smaller distances between individual samples were observed. The oil samples heated at 170 °C are located close to the center of the plot on the x axis, while the samples heated at 200 °C are located above the x axis and to the left of the y axis. Within each group, the oil samples were differentiated mostly by the content of TAG dimers and the content of γ-tocopherol. As before, in each group an outlier was observed, namely the oil blend with wheat germ oil. In these samples, the degradation of tocopherols and the increase in TAG polymers and polar fraction content were slower. However, the trend for the outliers was comparable to that of the other oils: on the score plot, a movement of the samples from the left to the right side of the graph was observed.
Conclusions
Enriching the human diet with vegetable oils is one way to counteract the consumption of excessive amounts of animal fats with high concentrations of saturated fatty acids. Vegetable oils provide many essential substances such as unsaturated fatty acids, especially PUFA, but also fat-soluble vitamins, antioxidants, and plant sterols. The ratio of omega-6 to omega-3 fatty acids is also very important from a health point of view. However, the high PUFA content makes oils susceptible to oxidation, especially when they are exposed to high temperatures. Heating the oils with a programmed ratio of ω6/ω3 fatty acids led to their thermal degradation, and the degree of degradation was higher the higher the process temperature. In addition, the degradation rate was related to the fatty acid profile and the tocochromanol content of the individual blends. During the heating process, a sharp decrease in tocochromanols (tocopherols and tocotrienols) and an increase in TPC content were observed. The high heating temperature also increased the level of triacylglycerol polymerization products, in particular TAG dimers. These changes were related to the content of tocopherols and tocotrienols: a high level of tocopherols limited the transformations in a similar way as a much lower combined share of tocopherols and tocotrienols in the oil.
This may suggest the possibility of obtaining suitable oils also with a lower content of antioxidants, provided that the antioxidants come from different groups and the synergistic effect is exploited. Regardless of the observed changes, none of the prepared oil blends exceeded the limit value for TPC content, while maintaining the programmed ratio of ω6 to ω3 acids. This confirms the possibility of creating and using oils with a nutritious ω6/ω3 fatty acid ratio in nutrition and food production technology. The developed oil blends can be used both to form plant-based meat analogs with high nutritional value and to improve the quality of conventional products. Further research on oil blends may deepen our knowledge about the mechanisms of MUFA and PUFA degradation during technological processes and help develop effective methods for their prevention.
Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: All data generated or analyzed during this study are included in this published article. Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of the data; in the writing of the manuscript; or in the decision to publish the results.
Pleistocene climate change promoted rapid diversification of aquatic invertebrates in Southeast Australia
Background: The Pleistocene Ice Ages were the most recent geohistorical event of major global impact, but their consequences for most parts of the Southern Hemisphere remain poorly known. We investigate a radiation of ten species of Sternopriscus, the most species-rich genus of epigean Australian diving beetles. These species are distinct based on genital morphology but cannot be distinguished readily by mtDNA and nDNA because of genotype sharing caused by incomplete lineage sorting. Their genetic similarity suggests a Pleistocene origin.
Results: We use a dataset of 3858 bp of mitochondrial and nuclear DNA to reconstruct a phylogeny of Sternopriscus using gene and species trees. Diversification analyses support the finding of a recent rapid speciation event with estimated speciation rates of up to 2.40 species per MY, which is considerably higher than the proposed average rate of 0.16 species per MY for insects. Additionally, we use ecological niche modeling and analyze data on habitat preferences to test for niche divergence between species of the recent Sternopriscus radiation. These analyses show that the species can be characterized by a set of ecological variables referring to habitat, climate, and altitude.
Conclusions: Our results suggest that the repeated isolation of populations in glacial refugia might have led to divergent ecological adaptations and the fixation of morphological traits supporting reproductive isolation and therefore may have promoted speciation. The recent Sternopriscus radiation fulfills many characteristics of a species flock and would be the first described example of an aquatic insect species flock. We argue that the species of this group may represent a stage in speciation past the species flock condition because of their mostly broad and often non-overlapping ranges and preferences for different habitat types.
Background
Global biodiversity is shaped by the processes of speciation and extinction, whose rates vary depending on region, environment, taxonomic group, and geohistorical events [1-3]. Evidence for shifts in the rates of speciation and extinction has been inferred from the fossil record since early paleontology [4], and advances in molecular biology have greatly improved our capabilities to study these processes, particularly for taxa with sparse or inconsistent fossil evidence [5,6]. The most recent geohistorical event of major global impact on biodiversity was the Pleistocene glaciations, or Ice Ages, which represent the largest expansion of cold climates since the Permian period 250 million years (MY) earlier.
Until 10,000 years ago, temperatures repeatedly oscillated between warm and cold phases. The effects on the environment varied depending on the geographical region, but were always accompanied by major biotic shifts. Boreal regions, particularly in the Northern Hemisphere, were mostly glaciated, driving species into refugia [7]. In the tropics and subtropics, where glaciations were mostly restricted to high altitudes, a similar effect was attributed to the aridification of formerly humid forest habitats [8]. It has been a matter of discussion whether these cycles of environmental change promoted speciation [9] or whether species responded solely by shifting their ranges toward ecologically suitable areas [10]. In Australia, glaciations occurred only at the highest elevations, but biota faced an ongoing process of aridification that was initiated in the Miocene c. 15 million years ago (MYA) when Australia drifted northward [11]. During the Ice Ages, the relatively rapid shifts between warm and wet versus cold and dry conditions had severe consequences, particularly for the fauna [12,13]. Aquatic environments were strongly affected by oscillations between arid and humid conditions [14]. The genesis of the Australian arid zone promoted radiations in various organism groups, e.g., hypogean faunas in the ground waters underneath the spreading deserts, which most likely began with the onset of aridification c. 15 MYA [14]. However, many rapid radiations of insects dating back only 2 MY or less have been described from all around the world. Coyne & Orr [15] proposed an average speciation rate of 0.16 species per MY, which is exceeded by an order of magnitude by the fastest known radiations [16-18]. Phylogenies of such young radiations based on mitochondrial gene trees are often poorly resolved, and species may appear para- or polyphyletic because of alleles shared with other species, which may be the result of incomplete lineage sorting or hybridization [19]. Species trees may cope with these problems: in a method based on a coalescent model and Bayesian inference, all gene trees are co-estimated and embedded in a single species tree whose tips represent species rather than single samples [20,21]. Aside from morphological and molecular characters, ecological factors can be useful to distinguish and even delimit species. Many studies have shown that a variety of climate factors often have a profound effect on the distributions of species, and these factors can be combined to project potential distributions of species in an ecological niche modeling (ENM) approach [22,23]. The predictive power of this method has been demonstrated [24], and it has been successfully applied in species delimitation [22,25]. Naturally, distinguishing species based on differences in their responses to ecological factors is sensible only if there are actual response differences. Evidence of niche conservatism in closely related species, promoting allopatric speciation, is abundant [26]. However, in many examples of rapid radiations in limited geographic areas, niche divergence appears to be the more common condition, and closely related species show different responses to ecological factors [27]. The focus of our study is the genus Sternopriscus (Coleoptera: Dytiscidae: Hydroporini), which, with 28 species, is the most species-rich epigean genus of Australian diving beetles [27,28]. Sternopriscus species inhabit a wide variety of lentic and lotic habitats from sea level to high altitudes.
18 species are found in southeastern Australia, of which four species are endemic to Tasmania. The corresponding freshwater ecoregions according to Abell et al. [29] are Eastern Coastal Australia, Bass Strait Drainages, Southern Tasmania, and small parts of the Murray-Darling region. Unlike many other aquatic invertebrates, such as crustaceans and gastropods, most species of epigean aquatic beetles use flight to colonize new habitats. Therefore, the presence of suitable habitats most likely has a higher impact on aquatic beetle distribution than the drainage systems defining the biogeographic regions of Abell et al. [29]. Nevertheless, only 2 of these 18 species have a wider distribution over mainland Australia (S. multimaculatus and S. clavatus). 6 species, including some taxonomically and geographically isolated species, are endemic to peaty habitats in the southwest, in an area with cold and humid climate during winter, and 5 species are distributed over the tropical north, including one endemic species in the deep gorges of the Pilbara. None, or only one, species is shared by 2 or more of these areas of endemism. This distribution reflects the restriction of all but the widespread pioneer species S. multimaculatus to the more humid coastal areas of Australia. The high level of endemism in the southeast and southwest suggests that the arid barrier between these two regions is long-standing. Another strong pattern is the virtual absence of S. tarsalis group members from the north and southwest regions of the continent, whereas members of the S. hansardii group, with highly modified male antennae and median lobes, are more widespread [27,28]. Based on male morphological characters, the genus has been divided into 3 groups: the S. hansardii group (11 species), the S. tarsalis group (13 species), and 4 'phylogenetically isolated' species. The species in the S. tarsalis group have been assigned to 3 species complexes: the S. tarsalis complex (2 species), the S. meadfootii complex (5), and the S. tasmanicus complex (3). 3 species have not been assigned to any complex. The 10 species belonging to the S. tarsalis, S. meadfootii and S. tasmanicus complexes in the S. tarsalis group are genetically similar and centered in mesic southeastern Australia. Below, we refer to this group of species as the S. tarsalis radiation (STR). The STR is supposedly the result of recent diversification; some of these morphologically well-defined species occur in sympatry, and some in syntopy [27,28,30]. Previous genetic studies [30] suggest that species belonging to the STR are not easily delimited using mtDNA and nDNA. In this study, we attempt to test the following hypotheses: (1) the delimitation of species in the STR, based on morphological characters, can be supported by genetic or ecological data; (2) the STR species originated in a rapid and recent diversification event, most likely in the Pleistocene; and (3) the radiation of the STR was promoted by the Pleistocene climate oscillations. We use a molecular phylogeny with gene and species trees and diversification rate analyses to investigate how environmental change has affected speciation and extinction rates in the genus Sternopriscus. We then discuss which factors might have promoted lineage diversification in the STR and whether the molecular similarities are caused by hybridization or incomplete lineage sorting. 
Aside from the results of our molecular phylogeny, we use phylogeographic network analyses and ENM paired with empirical ecological data in an attempt to reveal how this diversification was promoted. Sampling and laboratory procedures Specimens were collected by sweeping aquatic dip nets and metal kitchen strainers in shallow water or operating black-light traps [27] and preserved in 96% ethanol. DNA was extracted non-destructively using Qiagen blood and tissue kits (Qiagen, Hilden). Primers are listed in Additional file 1: Table S1. New sequences were submitted to GenBank under accession numbers [EMBL: HE818935] to [EMBL:HE819178]; cox1 data are [EMBL: FR732513] to [EMBL:FR733591]. The individual beetles from which we extracted and sequenced DNA each bear a green cardboard label that indicates our DNA extraction number (e.g., "DNA 1780 M. Balke"). This number links the DNA sample, the dry mounted voucher specimen and the GenBank entries. Phylogenetic analyses The aligned 3858 bp dataset contains three mitochondrial (16 S rRNA, cytochrome oxidase b (cob), and cytochrome c oxidase subunit I (cox1)) and four nuclear gene fragments (18 S rRNA, arginine kinase (ARK), histone 3 (h3), and histone 4 (h4)) for 54 specimens of 25 Sternopriscus species and 2 Hydroporini outgroups, Barretthydrus stepheni and Carabhydrus niger. Among the known species of Sternopriscus, only S. mouchampsi and S. pilbaraensis were not available for sequencing. S. emmae was excluded from the phylogenetic analyses because we only had DNA from museum specimens and only obtained a short cox1 sequence. DNA alignment was performed in MUSCLE 3.7 [31]. We then used jModelTest 0.1.1 [32] to identify appropriate substitution models for each gene separately, assessing lnL, AIC and BIC results and giving preference to BIC. To evaluate different partition schemes, we performed a Bayes factor test with MrBayes 3.1 [33] and Tracer v1.5 [34]. The eleven schemes tested were mitochondrial versus nuclear, protein-coding versus ribosomal, and according to codon positions (1 + 2 versus 3 or one partition for each codon position). We used raxmlGUI 0.93 [35] for maximum likelihood analyses with 1000 fast bootstrap repeats. MrBayes 3.1 [33] was used for Bayesian analyses, with two runs and four chains with 30,000,000 generations (samplefreq = 1,000 and 25% burnin). Runs were checked for convergence and normal distribution in Tracer v1.5 [34]. We then used parsimony analysis to infer phylogenetic relations as implemented in the program TNT v1.1, which we also used to run 500 jackknife replications (removal 36%) to assess node stability [36] (hit the best tree 5 times, keep 10,000 trees in memory). Finally, we used coalescent-based species tree inference models in *BEAST v1.6.1 [21] for comparison with the results of the phylogenetic gene tree. *BEAST requires a-priori designation of species, which we performed based on morphological data [27,28]. We conducted two runs over 100,000,000 generations (sample freq = 1,000 and 20% burnin) and checked for convergence and normal distribution in Tracer v1.5 [34]. Additionally, as proposed in Pepper et al. [13], we repeated this analysis using simpler substitution models (HKY + G). All analyses in MUSCLE and MrBayes were run on the CIPRES Portal 2.2 [37]. Pairwise distances were calculated in MEGA 5.0 [38]. Lineage diversification and radiation Analyses were conducted in R with the packages APE [39] and Laser [40]. 
Based on the phylogenetic tree created in MrBayes, we used the 'chronopl' function of APE to create an ultrametric tree in R and cropped all representatives but one of each species. We then constructed Lineage-Through-Time (LTT) plots [41] and calculated γ-statistics [42]. Because new species continue to be discovered in Australia and incomplete taxon sampling might influence γ-statistics, we conducted a Monte Carlo constant rates (mccr) test with 10,000 replicates, assuming 10% missing species. We then tested the fit of two rate-constant [41] and four rate-variable diversification models [43] to our dataset. Finally, we calculated p-values by simulating 10,000 trees with original numbers of present and missing species for a pure-birth scenario and for various birth-death rates (b = 0.5 and d = 0.0, 0.25, 0.5, 0.75 and 0.9). To be able to understand the effect of the near-tip radiation in the STR, we also tested γ for a tree in which this group was treated as a single taxon. Because of a lack of reliable calibration points, we cannot rely on molecular clock analyses to estimate node ages in the Sternopriscus phylogeny. However, we attempt to approximate the age of the rapid radiation in the STR using the standard mutation rates of the cox1 gene [44,45]. We apply the equation presented in Mendelson & Shaw [16] to estimate the relative speed of this radiation for comparison with other known rapid radiations in insects. For young and monophyletic radiations, such as the STR, the equation is r = ln N / t, where r is the rate of diversification, N is the number of extant species, and t is the divergence time. Phylogeographic structure analysis We assembled a matrix of 710 bp of only cox1 for 79 specimens of STR species to investigate the phylogeographic structure of this group. Additional sequences were obtained from Hendrich et al. [30]. The standard population genetic statistics Fu's Fs [46] and Tajima's D [47] were calculated, and mismatch distribution analyses to untangle demographic histories were performed using DnaSP 5.10 [48]. The multiple sequences were collapsed into haplotypes, also using DnaSP 5.10. A minimum-spanning network was then inferred in Arlequin 3.5.1.3 [49] and used to create a minimum-spanning tree (MST) using Hapstar 0.5 [50]. The scalable vector graphics editor Inkscape 0.48 was further used to map geographic and taxonomic information onto the MST. Distinguishing incomplete lineage sorting from hybridization We used an approach developed by Joly et al. [51], and employed in Joyce et al. [52] and Genner & Turner [53], to test whether the haplotype sharing between STR species was mainly the result of incomplete lineage sorting or influenced by hybridization. In this approach, mtDNA evolution is simulated using a species tree topology that assumes hybridization is absent. If low genetic distances between species pairs are due to incomplete lineage sorting, these similarly low genetic distances should be observed in the simulations. If low genetic distances between species pairs are due to hybridization, then the observed distances should be significantly lower than those in the simulations. First, we ran another *BEAST [21] analysis of a subset of the entire multilocus dataset containing only the STR species, using the HKY + G model for 11,000,000 generations (samplefreq = 1,000 and 10% burnin). Second, we used MrModeltest [54] to estimate the parameters of the substitution model for the cox1 dataset from Hendrich et al.
[30], which was previously used in the phylogeographic structure analysis. Third, we conducted a run of the JML software [55] using the same cox1 dataset, the locus rate of cox1 as yielded by *BEAST, a heredity scalar of 0.5, and the parameters yielded by MrModeltest. Ecological niche modeling and analyses In an attempt to detect possible divergence in response to climatic variables in their ranges, we created ecological niche models (ENMs) for the species of the STR. We excluded S. montanus and S. williamsi from the ENM analyses because of an insufficient number of localities. Our models were based on a total of 215 distribution points [27,28] (Additional file 2: Table S2) and unpublished data by L. Hendrich. With the exception of three records of S. wehnckei, all STR species occur in broad sympatry in southeastern Australia including Tasmania. We preliminarily selected climate variables according to ecological requirements considered critical for the species. Bioclimatic variables [56] represent either annual means or maxima and minima in temperature and precipitation, or variables correlating temperature and precipitation, e.g., "mean temperature of wettest quarter" (BIO8). Such variables are useful for representing the seasonality of habitats [25]. After the preliminary selection, we used the ENMtools software [57] to calculate correlations between the selected climate layers in the area of interest. In our final selection, we removed layers until no two layers had correlation coefficients (r²) higher than 0.75. ENMs for each species were created in Maxent 3.3.2 [58] (our procedure: Hawlitschek et al. [25]). Suitable background areas that were reachable by the species were defined by drawing minimum convex polygons around the species records, as suggested by Phillips et al. [59]. We conducted runs with 25% test percentage, 100 bootstrap repeats, jackknifing to measure variable importance and logistic output format. Model validation was performed by calculating the area under the curve (AUC) [60]. To compare ENMs of different Sternopriscus species, we measured niche overlap [57] in ENMtools. We also used ENMtools' niche identity test [61] with 500 repeats because the niche overlap values alone do not allow any statement as to whether the ENMs generated for the two species are identical or exhibit statistically significant differences. In each repeat of this test, pairwise comparisons of species distributions are conducted and their localities pooled; their identities are then randomized and two new random samples are extracted to generate a set of pseudoreplicates. The results are compared with the true calculated niche overlap (see above). The lower the true niche overlap is in comparison to the scores created by the pseudoreplicates of the pooled samples, the more significant the niche difference between the two compared species. Finally, we classified species by altitudinal and habitat preference and compared all data. Molecular phylogenetics Bayes factor analyses favored separate partitioning of genes and codon positions (17 partitions in total). This was the most complex partition strategy tested. Substitution models were applied according to jModelTest: the GTR + I + G model (16 S rRNA, mitochondrial non-protein-coding), the GTR + G model (cox1, cob, mitochondrial protein-coding), the HKY + I + G model (18 S rRNA, nuclear non-protein-coding), and the HKY + G model (ARK, h3, h4, nuclear protein-coding).
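The climate-layer screening (pairwise r² no higher than 0.75) and the overlap statistics D and I described in the ENM methods above were computed with ENMtools; the following minimal R sketch shows the underlying calculations only, under assumed inputs (a hypothetical background-value data frame bg and per-cell suitability vectors sx, sy), and is not the published workflow.

```r
# (1) Greedy screening of climate layers until no pair has r^2 > 0.75.
drop_correlated <- function(vals, r2_max = 0.75) {
  repeat {
    r2 <- cor(vals, use = "pairwise.complete.obs")^2
    diag(r2) <- 0
    if (max(r2) <= r2_max) return(colnames(vals))
    worst <- which(r2 == max(r2), arr.ind = TRUE)[1, 1]  # a layer in the strongest remaining pair
    vals  <- vals[, -worst, drop = FALSE]
  }
}
# kept <- drop_correlated(bg)   # in the study, BIO13 and BIO14 were the layers dropped

# (2) Niche overlap statistics compared in the identity test.
niche_overlap <- function(sx, sy) {
  px <- sx / sum(sx)                                  # normalise to probability surfaces
  py <- sy / sum(sy)
  c(D = 1 - 0.5 * sum(abs(px - py)),                  # Schoener's D
    I = 1 - 0.5 * sum((sqrt(px) - sqrt(py))^2))       # Warren et al.'s I (Hellinger-based)
}
```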
Bayesian, maximum likelihood, and maximum parsimony analyses revealed compatible topologies (Figure 1) that were largely congruent with the previously recognized classifications based on morphology. Here, we assign the four species previously supposed to be 'phylogenetically isolated' to either the S. tarsalis (S. browni and S. wattsi) or the S. hansardii (S. eikei and S. marginatus) group. Within the S. tarsalis group, all S. tarsalis complex species form a strongly supported clade (Figure 1). The *BEAST species tree is largely congruent with the gene trees. The main difference is that in the gene trees, S. multimaculatus is the sister taxon to the STR, whereas in the *BEAST tree S. minimus is the sister taxon to the STR and S. multimaculatus is the sister taxon to all other members of the S. tarsalis group. Almost all species tree nodes within the STR are poorly supported. Notably, the analysis of the *BEAST run log file showed near-critically low posterior and prior effective sample sizes (< 120). This problem could neither be solved by repeating runs with higher sample frequencies nor by applying simpler substitution models, as proposed in Pepper et al. [13], and it indicates that the species tree results must be treated with caution. The largest calculated cox1 p-distance between species in the STR was only 3.4% (S. tarsalis/S. barbarae), but interspecific distances may be as low as 0.3% (e.g., between S. alpinus, S. mundanus and S. weckwerthi, all belonging to different S. tarsalis complexes) or 0.2% (S. alpinus/S. wehnckei). Thus, no genetic distinction between the three complexes was possible because specimens often cluster with those belonging to other morphologically well-characterized species. This problem could not be solved by inspecting trees based on single or combined nuclear loci; the species S. mundanus and S. weckwerthi were polyphyletic in single-gene trees of cob, cox1, and ARK. The STR species shared identical haplotypes in all other nuclear genes studied. Figure 2 shows the LTT plot for Sternopriscus. APE yielded a positive γ value of 3.22 (p = 0.0013*). According to the mccr test, the critical value is 1.73 (p = 0.9 × 10^-3**) and is therefore met by the true value of γ. The test in Laser yielded a Yule-2-rate model as significantly better than the next best model, which was a constant rate birth-death model. The level of significance was highest (p = 0.0073*) for equal rates of b (birth) and d (death) (both 0.5), but all tested combinations of b and d yielded significant test results. In the test run in which the S. tarsalis group was treated as a single clade, γ was negative but not significant at a value of −0.01 (p = 0.4956). This means that for this dataset the null hypothesis that the diversification rates have not decreased over time cannot be rejected. Diversification analyses The STR appears to have a strong effect on the diversification analysis of the genus Sternopriscus. A high positive γ represents a rather unusual condition [6]. While many phylogenies are characterized by a decreasing rate of diversification (logistic growth or impact of extinctions [62]), a γ of 3.22 suggests a diversification rate that increases strongly over time. This pattern is hard to explain in general. In the case of Sternopriscus, it appears appropriate to attribute this pattern to the recent speciation burst of the STR, which comprises 10 of 28 known species.
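The γ-statistic and the mccr critical value reported here can be reproduced in outline with the ape and phytools packages; the following is a rough sketch under assumed inputs (a hypothetical tree file, illustrative species counts), not the authors' script.

```r
# Sketch of the gamma / mccr computations, assuming an ultrametric tree with one
# tip per species in "sternopriscus_ultrametric.tre" (hypothetical file name).
library(ape)
library(phytools)   # pbtree() for pure-birth simulations

utr <- read.tree("sternopriscus_ultrametric.tre")
ltt.plot(utr, log = "y")        # lineage-through-time plot (cf. Figure 2)
gammaStat(utr)                  # Pybus & Harvey gamma statistic

# mccr idea: simulate pure-birth trees with the full species number, prune the
# assumed missing species at random, and take the upper 95% quantile of gamma.
mccr_critical <- function(n_obs, n_missing, nsim = 1000) {
  g <- replicate(nsim, {
    full   <- pbtree(n = n_obs + n_missing)
    pruned <- drop.tip(full, sample(full$tip.label, n_missing))
    gammaStat(pruned)
  })
  quantile(g, 0.95)
}
# mccr_critical(n_obs = 25, n_missing = 3)   # roughly 10% missing species; counts illustrative
```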
This view is also supported by the test results that indicate a Yule-2-rate model as the most adequate, which is consistent with a sudden shift in diversification rates. Papadopoulou et al. [44] suggested a substitution rate of 3.54% cox1 divergence per MY, which implies an origin of the STR c. 0.96 MYA, and interspecific distances indicate divergence times as recent as 60,000 to 80,000 years ago. The slower substitution rate (2.3%) suggested by Brower [45] yields an approximate origin of the STR around 1.48 MYA and interspecific divergence times of 87,000 to 130,000 years ago (but see Papadopoulou et al. [44] for a discussion of these estimates). The equation by Mendelson & Shaw [16] was used to estimate speciation rates in the STR. Applying the rate proposed by Papadopoulou et al. [44], we estimate a speciation rate in the STR of 2.40 species per MY. Applying the rate proposed by Brower [45], we estimate a speciation rate in the STR of 1.56 species per MY. Phylogeographic structure The matrix of 79 cox1 sequences contained 69 polymorphic sites with a nucleotide diversity of π = 0.0121 and a haplotype diversity of H = 0.9815. We identified 61 distinct and mostly unique haplotypes within the STR, with only 8 haplotypes comprising more than one sequence. Neither geographic nor taxonomic (Figure 3) mapping on the star-like MST yielded a comprehensive pattern. More precisely, no geographic structuring could be noticed based on the zoning of Australia, and the haplotypes of individuals of identical species were not systematically gathered in groups. Interestingly, the MST appears to be composed of two central haplotypes of South Australian and Victorian S. mundanus from which the rest of the sequences appear to have derived. In addition, even if there is a lack of geographic or taxonomic structuring, one might notice that several haplotypes representing different species are separated from the central network by a deep break of multiple mutation steps. While Tajima's D value does not significantly support a scenario of demographic expansion (D = −1.27773, p-value = 0.06), Fu's Fs significantly supports such a demographic history (Fs = −35.731, p-value = 0.01) (see Tajima [47] and Fu [46] regarding the interpretation of Tajima's and Fu's statistics). However, the mismatch distribution analyses yield a multimodal distribution of the pairwise genetic distances, which favors a scenario of demographic equilibrium for the STR, although unimodal distributions are recovered only for recent and fast expansions [63]. Incomplete lineage sorting vs. hybridization *BEAST yielded a high relative locus rate of 2.332 for cox1, which was expected because many other markers included in our multilocus dataset, mainly nuclear markers, are known to evolve more slowly. The results of the JML run are given in Table 1. All species pairs exhibit genetic distances that are not significantly lower than expected. Thus, we cannot reject the hypothesis of incomplete lineage sorting in any case. Table 1. Minimum genetic distance (×1,000), as estimated by JML, of STR species pairs. Lower left: observed minimum genetic distance. Upper right: expected minimum genetic distance (median). Species pairs in which the observed genetic distance is 0 due to the sharing of haplotypes are indicated by #. Species pairs in which the observed minimum genetic distance is higher than the expected distance are indicated by +.
There is no case in which the probability that the minimum observed genetic distance is lower than expected is significant (p ≤ 0.05). Ecological niche modeling The ENMs for the STR species are given in Figure 6. AUC values for all models range from 0.981 to 0.997. Because all values are > 0.9, the ability to distinguish presence from random background points is considered "very good" for all models according to Swets [60]. We preliminarily selected the climate layers "maximum temperature of the warmest month" (BIO5), "minimum temperature of the coldest month" (BIO6), "mean temperature of the wettest quarter" (BIO8), "mean temperature of the driest quarter" (BIO9), "precipitation of the wettest month" (BIO13), "precipitation of the driest month" (BIO14), "precipitation of the warmest quarter" (BIO18) and "precipitation of the coldest quarter" (BIO19). In our final selection, we omitted BIO13 and BIO14 because of correlation coefficients with other variables of r² > 0.75. Thus, all models presented here are based on six climate variables. Jackknifing to measure the importance of variables showed that either "maximum temperature of the warmest month" (BIO5: S. barbarae, S. weckwerthi, S. wehnckei), "mean temperature of the wettest quarter" (BIO8: S. alpinus, S. mundanus), or "precipitation of the coldest quarter" (BIO19: S. meadfootii, S. tarsalis, S. tasmanicus) were the most important variables in creating ENMs. Niche overlap values (I and D) and identity test results are given in Table 2. The results of the identity test are highly significant (Bonferroni corrected) for I in all and for D in nearly all pairwise species comparisons. However, the null hypothesis of identity in the ENMs of two compared species can be rejected only if the true calculated niche overlap is below the 99.9% confidence interval of the values generated in the identity test. In a few cases, the true calculated niche overlap is above this interval, and the null hypothesis of niche identity cannot be rejected [61]. Ecological analyses All species of the STR were compared for their preferences in altitude and habitat and for the most important climate factor in their ENM, which resulted from the jackknifing test in the ENM runs. Discussion In the opening section of this article, we suggested three hypotheses: (1) species delimitation in the STR can be supported by genetic or ecological data; (2) the STR species originated in a rapid Pleistocene diversification event; and (3) Pleistocene climate oscillations promoted the radiation of the STR. In the following, we will discuss how our results support these hypotheses. Our data show that the molecular methods applied in our study do not serve to unambiguously distinguish and delimit the species of the STR. This is because of the widespread genotype sharing of mitochondrial genes and lack of diversification in nuclear genes between these species. However, the analysis of our ecological data shows that STR species appear to respond differently to ecological variables. Below, we initially discuss whether incomplete lineage sorting or hybridization may have caused the abundance of shared haplotypes in the STR. Then, we discuss the importance of the results of our ecological analyses in the context of the entire genus, and specifically for the STR. Genotype sharing between species may be explained by incomplete lineage sorting, by hybridization, or a combination of both.
Funk & Omland [19] also mention imperfect taxonomy, inadequate phylogenetic information and paralogs as causes for genotype sharing. However, the taxonomy of Sternopriscus based on morphological characters is well supported [27,28], and our multi-gene phylogeny is well supported by different analytical approaches. Paralogs can almost certainly be excluded because the patterns of species polyphyly are repeated by different mitochondrial and nuclear markers. Hybridization, as a reason for genotype sharing in closely related species, has been proposed for various animal groups [64,65], including groups with strong sexual selection (e.g., mating calls [66]), and has been shown to contribute to speciation [64]. However, in the case of Sternopriscus, the results of our analyses, the diversity in genital morphology, and the absence of specimens identifiable as hybrids do not support hybridization [67]. Incomplete lineage sorting, or the retention of ancestral polymorphism, is the more likely explanation for genotype sharing in the case of the STR. Incomplete lineage sorting has often been recognized as a problem in resolving phylogenies of young and closely related taxa [68]. This phenomenon affects nuclear loci more commonly than faster evolving mitochondrial loci, but mitochondrial genes can be equally affected, particularly in closely related taxa where hardly any diversification in nuclear genes is found [19]. Incomplete lineage sorting as an explanation for haplotype sharing in the STR supports the view that the STR is a recent radiation. A comparison of our ecological findings concerning the STR species with data on other Sternopriscus species shows that the STR occupies ecological ranges similar to those of other related species. The currently known altitudinal distribution and ecology of all Sternopriscus species in Australia is shown in Additional file 3: Table S3, modified after Hendrich & Watts [27,28]. 10 species of the genus are rheophilic and inhabit rivers and streams that are mainly of intermittent character. 11 species are acidophilic and live in seasonal or permanent swamps, ponds and pools of different types of peatlands. 7 species appear to be more or less eurytopic and occur in various water bodies in open or forested country. The highest species diversity is in lowland or coastal areas and hilly or low mountain ranges from 0 to 500 m. Only 6 species were collected at 1000 m or above (S. alpinus, S. meadfootii, S. montanus, S. mundanus, S. williamsi and S. weckwerthi). Within the STR, all species inhabit broadly overlapping areas in mesic southeast Australia, except for a few localities of S. wehnckei in the northeast (the Eastern Coastal Australia region and small parts of the Murray-Darling region of Abell et al. [29]). Many species also inhabit Tasmania, including two endemics (Bass Strait Drainages and Southern Tasmania). ENMs indicate niche diversification within this group of closely related and broadly sympatric species. Aside from the high levels of significance in the identity test, the degree of niche diversification is hard to measure. Therefore, we rely on the importance of the various climate variables used to characterize the species ENMs. (Figure 5 shows the climate variables used for ENM creation; the variables were selected to represent the effects of temperature, precipitation and seasonality.) The variables of highest importance are
"maximum temperature of the warmest month" (BIO5), "mean temperature of the wettest quarter" (BIO8), or "precipitation of the coldest quarter" (BIO19). Figure 5 shows that all the species studied inhabit areas with relatively low maximum temperatures, with the lowest on Tasmania. The two species most characterized by this factor are the two Tasmanian endemics, S. barbarae and S. weckwerthi. A distinction between the two remaining factors is more Niche overlap values (D and I), calculated with ENMtools, are given for species pairs and are mostly lower than the randomized overlap levels generated in the identity test at significant (*, p โ‰ค 0.05, Bonferroni corrected) or highly significant (**, p โ‰ค 0.001, Bonferroni corrected) level. This means that niches are more divergent than expected at random. In some cases, results are not significant, or significantly higher than the randomized overlap (indicated by #). In these cases, niches are not more divergent than expected by random. Note that results yielded by D and I do not accord in all cases. difficult. Considering Figure 5, "mean temperature of the wettest quarter" is lowest in areas where winters (the wettest quarter in our region of interest) are cold, whereas "precipitation of the coldest quarter" is highest where winters are wet. Some species (e.g., the high-altitude S. alpinus) may be tolerant of winter temperatures that are too low for other species, whereas other species are more dependent on sufficient precipitation. Species that require the latter are eurytopic species that also inhabit ephemeral waters, such as ponds at the edge of rivers and creeks, which are only filled after heavy rainfall. The acidophilic species, which inhabit more permanent water bodies with dense vegetation, are often "cold winter" species. The low divergences between haplotypes in the STR species suggest that these species originated in a recent and rapid radiation. Unfortunately, we could not rely on any calibration points to support our molecular clock approach. Instead, we attempted to estimate the origin of the STR based on standard cox1 mutation rates [44,45]. We estimated an origin of c. 0.96 to 1.48 MYA, which leads to an estimated speciation rate of 2.40 or 1.56 species per MY. Genetic distance might indicate the age of the ancestral species, however divergence time estimates for the extant species should not be considered reliable beyond assumption of a comparably recent origin of the STR. This fact alone, however, suggests that the STR is an exceptional event for what is known of aquatic beetles. For other insect groups, little evidence exists for similarly fast diversification events. The fastest rate (4.17 species per MY) was estimated for a clade of 6 species of Hawaiian crickets over 0.43 MY [16]. However, in the same study, for a related clade comprising 11 species, the estimated rate was much lower at 1.26 species per MY over 1.9 MY. Additional estimates are available for Galagete moths in the Galapagos [17] of 0.8 species per MY (n = 12, t = 1.8 MY) and for Japanese Ohomopertus ground beetles [18] of 1.92 (n = 15, t = 1.4 MY) to 2.37 species per MY (n = 6, t = 0.76 MY). The average speciation rate in insects was proposed to be 0.16 species per MY [15]. This comparison shows that rapid radiation events, as exemplified in the STR, appear to be exceptional among insects and particularly in continental faunas because all other examples recorded were island radiations. 
Species groups that originated from rapid radiation events have been detected in almost all organismic groups and habitats [69]. An overview of many recent and past events suggests three major promoters of rapid radiations: the appearance of a key innovation that allows the exploitation of previously unexploited resources or habitats [70], the availability of new resources [71], and the availability of new habitats, e.g., because of a rare colonization event or drastic environmental changes [72,73]. In the case of the STR, we find no evident key innovation distinguishing this group from other Sternopriscus species. We have no data concerning internal morphology or physiology. Additionally, our data show that STR species have ecological requirements similar to those of other Sternopriscus species, which does not indicate the presence of any key innovations. There is also no indication of any new resource that could be specifically exploited by the STR species. Therefore, we explore the possibility that drastic environmental changes during the Pleistocene climate oscillations mediated the radiation of the STR species. During most of the Cenozoic, the climate of Australia was hot and humid, and it currently remains so in the northern rainforest areas [11]. Aridification began in the Miocene (c. 15 MYA) and gradually led to the disappearance of forests and to the spread of deserts over much of the present continent. Most of today's sand deserts, however, are geologically younger and appeared only after the final boost of aridification that accompanied the Ice Ages, particularly since the later Pleistocene (c. 0.9 MYA). The climate was subjected to large oscillations in temperature and rainfall, which drove many groups of organisms into refugia and also promoted speciation [12,13]. Our results also document a strong and abrupt increase in speciation in the genus Sternopriscus about 1 to 1.5 MYA, represented by the STR. This age estimate is congruent with the Pleistocene oscillations. Byrne et al. [12] present cases of organisms restricted to mesic habitats that were formerly most likely more widespread, but today occupy relictual areas with suitable climates. However, some of the young species of the STR occupy rather large areas in southwestern Australia. This distribution indicates good dispersal abilities, which are necessary for organisms that inhabit habitats of relatively low persistence [74]. Ribera & Vogler [75] argue that for this reason, beetle species that inhabit lentic aquatic habitats often have better dispersal abilities than those inhabiting lotic habitats. However, it is possible that the STR species of lotic habitats derived only recently from an ancestor adapted to lentic habitats, whose good dispersal abilities are maintained in the newly derived species. Speciation in Pleistocene refugia was previously described for dytiscid beetles on the Iberian Peninsula [9]. During the Pleistocene climate oscillations, the ancestral species of the STR might have been forced into ongoing cycles of retreat into, and re-expansion from, refugia. Under the recurrent, extremely unsuitable climate conditions, the isolation of small populations over many generations might have promoted speciation and the fixation of morphological traits. This scenario might also explain the lack of clear geographic or taxonomic structuring in the striking haplotypic diversity presented by the STR species.
This diversity might be attributed to the cycles of expansion and retreat that repeatedly isolated haplotypes in various geographic locations before newly allowing the expansion and colonization of other areas. The phenomenon of groups of young and closely related species within a defined distributional range is most familiar in ichthyology, in which it was termed "species flock". Among the most prominent species flocks are the cichlids of the African Great Lakes and other lake ecosystems around the world, the Sailfin Silversides of Sulawesi, and the Notothenioid Antarctic Ice Fishes (see review in Schön & Martens [76]). Schön & Martens [76] summarize the criteria for naming a group of species a species flock as "speciosity [= species-richness], monophyly and endemicity". Compared with the large fish species flocks, the STR is poor in species. Nevertheless, the number of species is "disproportionally high" [77] in relation to the surrounding areas, as no other region in Australia is inhabited by a comparable assemblage of closely related species. In the last decade, an increasing number of less species-rich radiations, with as few as 3 or 4 species, have been termed species flocks [76,78]. Most other species flocks inhabit lakes, islands or archipelagoes. These are areas more "narrowly circumscribed" [77] than the area of endemism of the STR, which can be broadly termed "the southeast Australian region". Most STR species have relatively large ranges that do not share a common limit and sometimes do not even overlap. Our results show that STR species often occupy different habitat types. Additionally, the clade is not strictly endemic to southeastern Australia, as shown by the northeastern records of S. wehnckei. Based on this criterion, other rapid radiations among insects [16,17] are much more adequate examples of species flocks. Conclusions Our results provide evidence that STR species are the result of an extremely recent, most likely Pleistocene, radiation. The STR species cannot be distinguished with the molecular methods used in this study; however, the species show clear divergences in their responses to ecological factors of habitat type and climate. We proposed a scenario in which the Pleistocene climate oscillations led to the repeated restriction and expansion of the ranges of the ancestral species of the STR, which may have promoted fixation of ecological adaptations and morphological traits in small and isolated populations restricted to refugia. This suggests that Sternopriscus is an example supporting the hypothesis that Pleistocene refugia promoted speciation. Taking this scenario into account, the STR does not appear as an evolving or fully evolved species flock but as a radiation based on a species flock. While possibly confined to a narrowly circumscribed area during the Pleistocene, the STR species were able to break the boundaries of their refugia with the end of the Ice Ages and increase their ranges. Today, because the species are no longer confined to a common limited area, the term "species flock" may best fit a stage in speciation that the STR has already passed.
Topological dynamics of the Weil-Petersson geodesic flow We prove topological transitivity for the Weil-Petersson geodesic flow for two-dimensional moduli spaces of hyperbolic structures. Our proof follows a new approach that exploits the density of singular unit tangent vectors, the geometry of cusps and convexity properties of negative curvature. We also show that the Weil-Petersson geodesic flow has horseshoes and invariant sets with positive topological entropy, and that there are infinitely many hyperbolic closed geodesics, whose number grows exponentially in length. Furthermore, we note that the volume entropy is infinite. Introduction The moduli space M_{g,n} is the space of hyperbolic metrics for a surface of genus g with n punctures. There are several interesting metrics on M_{g,n}, and during the past few years there has been intense activity on studying the geometry and relationships between these metrics [LSY05]. There has also been significant activity on investigating the fine dynamics of the geodesic flow for the Teichmüller metric [AGY06,AF07,ABEM15]. The Teichmüller metric is a complete Finsler metric that describes a straight space: geodesics connecting points are unique and maximal geodesics are bi-infinite. Minsky has provided a simple model for the Teichmüller metric. For a Riemann surface with a collection of non-peripheral, disjoint, distinct free homotopy classes F, parameterize Riemann surfaces by: the (product) Teichmüller space for the F complement and the Fenchel-Nielsen parameters for F. Introduce a comparison metric: the supremum of the Teichmüller metrics for the components of the complement, and formal hyperbolic metrics for the Fenchel-Nielsen length-angle planes. The comparison metric approximates within an additive constant in the region of Teichmüller space where F represents the short hyperbolic geodesics [Min96]. We undertake an investigation of the topological dynamics of the geodesic flow for the Weil-Petersson (WP) metric. Our results are for real two-dimensional moduli spaces, i.e., the moduli space M_{1,1} for once punctured tori and the moduli space M_{0,4} for spheres with four punctures. These spaces are completed by adding points, cusps, for the degenerate hyperbolic structures, with the property that a dense set of singular unit tangent vectors are initial to geodesics that end at a cusp in finite time [Bro05]. Although non-compact, these moduli spaces have finite area and finite diameter. Curvature is negative, bounded away from zero and is unbounded in each cusp [Wol08b]. It follows with these properties that the geodesic flow (GF) is an incomplete uniformly hyperbolic flow, and thus needs to be studied as a non-uniformly hyperbolic dynamical system with singularities. Our main result is topological transitivity for the WP GF for two-dimensional moduli spaces. Our proof is based on a new approach. Earlier proofs of topological transitivity for geodesic flows or systems with singularities use some form of coding, by introducing an ideal boundary or a Markov partition. Given the density of singular tangent vectors, there appears to be no complete notion of an ideal boundary [Bro05], and the construction of a Markov partition would be quite delicate. The geometry of a cusp combined with the CAT(−ε), ε > 0, geometry provides a substitute for coding. We develop a shadowing lemma, Proposition 3.1, to approximate piecewise geodesics by geodesic chords.
We find that the WP GF on M_{1,1} and M_{0,4} has horseshoes and subsets with positive topological entropy, and that there are infinitely many hyperbolic closed geodesics, whose number grows exponentially in length. Furthermore, we find that the volume entropy is infinite. We start considerations with the density in the unit tangent bundle of M_{g,n} of geodesics connecting cusps of T_{g,n}. We next use that in dimension two a pair of geodesic segments emanating from a cusp (modulo Dehn twists) is approximated by geodesic segments connecting a point and Dehn twist translates of a point. A piecewise geodesic C of controlled small exterior angle is then constructed from geodesics connecting cusps and approximating geodesic segments. A limit of chords of C is a bi-infinite geodesic asymptotic to C. We use the shadowing lemma to control the chords and to establish the asymptotic behavior. Brock, Masur and Minsky establish topological transitivity for all moduli spaces M_{g,n} [BMM07]. They start by defining an ending lamination for an infinite WP geodesic ray. They find, for the full-measure set of bi-infinite WP geodesics that bi-recur to some compact set of M_{g,n}, that the ending laminations characterize the ending asymptote classes. Given a geodesic connecting cusps, they approximate by bi-recurrent geodesics. For a bi-recurrent geodesic they use the resulting ending laminations as data for specification of a pseudo-Anosov mapping class with axes approximating the geodesic. They further use the dynamics of pseudo-Anosov elements and compositions to approximate a sequence of geodesics. The main result follows. They also show that axes of pseudo-Anosov elements are dense in the unit tangent bundle of M_{g,n}. The present and Brock, Masur and Minsky approaches begin with the density of geodesics connecting cusps. The approaches differ in that the present approach is based on approximating piecewise geodesics and a shadowing argument, whereas the latter approach develops a partial boundary theory to construct pseudo-Anosov elements with controlled axes. Our approach also applies to complete constant negatively curved surfaces with cusps. We believe that the approach should also have applications to certain systems with singularities, including billiards. Preliminaries We present a unified discussion of the basic metric geometry of the Weil-Petersson metric. The treatment is based on earlier and recent works of a collection of authors including [Abi80,Ber74,Bro05,DW03,Hua07,MW02,Wol03,Wol08a,Wol07]. Once punctured tori and four punctured spheres are related in the tower of coverings. The Teichmüller spaces are isomorphic and isometric. We study once punctured tori. Let T be the Teichmüller space for marked once punctured tori. Points of T are equivalence classes {(R, ds², f)} of marked complete hyperbolic structures R with reference homeomorphisms f : F → R from a base surface F. A self-homeomorphism of F induces a mapping of T and the mapping class group MCG of F acts properly discontinuously on T with quotient the moduli space M of Riemann surfaces. The Teichmüller space T is a complex one-dimensional manifold with cotangent space at R being Q(R), the space of holomorphic quadratic differentials with at most simple poles at punctures. The Teichmüller-Kobayashi co-metric for T is defined by the L¹ norm on Q(R). For complex one-dimensional Teichmüller spaces the Teichmüller metric coincides with the hyperbolic metric. The Weil-Petersson (WP) co-metric is defined by the L² pairing of quadratic differentials against ds², the hyperbolic metric of R.
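In the notation above, with q ∈ Q(R) and ds² the hyperbolic metric of R, the standard defining formulas for these two co-metrics (restated here for reference) are

\[
\|q\|_{T} \;=\; \int_{R} |q|, \qquad\qquad \|q\|_{WP}^{2} \;=\; \int_{R} |q|^{2}\,(ds^{2})^{-1}.
\]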
The Teichmüller and WP metrics are MCG invariant. A non-trivial, non-peripheral, simple free homotopy class on F, now called an admissible free homotopy class, determines a geodesic γ on R and a hyperbolic pants decomposition. The length ℓ_γ for the joined pants boundaries and the offset, or twist τ_γ, for adjoining the pants boundaries combine to provide the Fenchel-Nielsen (FN) parameters (ℓ_γ, τ_γ) valued in R_+ × R. The FN parameters provide global real analytic coordinates, with each admissible free homotopy class and choice of origin for the twist determining a global coordinate. The Dehn twist T about γ, an element of MCG, acts by T : (ℓ_γ, τ_γ) → (ℓ_γ, τ_γ + ℓ_γ). A bordification (a partial compactification) T̄, the augmented Teichmüller space, is introduced by extending the range of parameters. For an ℓ_γ equal to zero, the twist τ_γ is not defined and in place of the geodesic γ on R there appears a further pair of cusps (a formal node). The extended FN parameters describe marked (possibly) noded Riemann surfaces. The points of T̄ − T are the cusps of Teichmüller space and are in one-one correspondence with the admissible free homotopy classes. The neighborhoods of {ℓ_γ = 0} in T̄ are given as {ℓ_γ < ε}, ε > 0; T̄ is not locally compact, since neighborhoods of {ℓ_γ = 0} are stabilized by the Dehn twist about γ; T ⊂ T̄ is a convex subset. Each pair of distinct cusps is connected by a unique finite-length unit-speed WP geodesic, now called a singular geodesic. There is a positive lower bound for the length of singular geodesics. Tangents to singular geodesics at points of T are called singular tangents. The augmented Teichmüller space T̄ is CAT(0) and in particular is a unique geodesic length space [BH99]. Triangles in T̄ are approximated by triangles in T, and for once punctured tori the WP curvature is bounded above by a negative constant [Wol08b]. In particular for once punctured tori, the curvature of the augmented Teichmüller space is bounded away from zero, and thus the space is CAT(−ε) for some ε > 0. Theorem 2.1 The augmented Teichmüller space T̄ for once punctured tori is the WP completion of T and is a CAT(−ε) space for some ε > 0. The WP curvature is bounded above by a negative constant and tends to negative infinity at the cusps of Teichmüller space. The singular geodesics have tangents dense in the unit tangent bundle of Teichmüller space. For once punctured tori the upper half plane H and the elliptic modular group PSL(2; Z) are the models for T and MCG. The augmentation T̄ is H ∪ Q provided with the horocycle topology. The rational numbers are the cusps of the augmentation. The density for singular WP geodesics [Wol03, Coro. 18] is a counterpart of the density for the hyperbolic metric of geodesics with rational endpoints. We are interested in the geometry of the cusps of Teichmüller space. The WP metric has a cusp expansion, involving the almost complex structure J, valid for bounded length ℓ_γ of the curve corresponding to the cusp. We are most interested in geodesics that approach a cusp. A unit tangent at a point of T is special for a cusp provided it is tangent to the geodesic ending at the cusp (a tangent v is singular provided v and −v are special). For a given cusp, at each point of T there is a unique associated special unit tangent, the initial tangent for the geodesic connecting to the cusp. We consider the spiraling of geodesics to a cusp as follows.
Proposition 2.2 For a given admissible free homotopy class, let T be the Dehn twist and v_1, v_2 special tangents with base points p_1, p_2. Given ε > 0 there exists a positive integer n_0 such that for n ≥ n_0 the geodesic p_1 T^n p_2 connecting p_1 to T^n p_2 has initial and terminal tangents respectively within ε of v_1 and −T^n v_2. Proof. The behavior of ℓ_γ on the geodesic p_1 T^n p_2 is basic. The geodesic-length function ℓ_γ is T-invariant and convex on p_1 T^n p_2. The function ℓ_γ is bounded by its values at p_1 and p_2. The Dehn twist T acts in Fenchel-Nielsen coordinates by (ℓ_γ, τ_γ) → (ℓ_γ, τ_γ + ℓ_γ). We let c_1 = inf ℓ_γ(p_1 T^n p_2) (c_1 could be zero) and c_2 = max{ℓ_γ(p_1), ℓ_γ(p_2)}. We further let L_0 be the infimum of WP length for geodesics connecting points q_1, q_2 satisfying c_1 ≤ ℓ_γ(q_1), ℓ_γ(q_2) ≤ c_2 and τ_γ(q_2) = τ_γ(q_1) + 1. By compactness of the set of connecting geodesics, L_0 is positive provided c_1 is positive. A segment of p_1 T^n p_2 with a ≤ τ_γ ≤ a + 1 satisfies the stated conditions and so has length at least L_0. It follows that the length satisfies L(p_1 T^n p_2) ≥ (n − n_0)L_0, for a suitable n_0. At the same time, if we write p for the cusp {ℓ_γ = 0}, then L(p_1 T^n p_2) ≤ d(p_1, p) + d(p, T^n p_2) = d(p_1, p) + d(p, p_2), since T^n fixes p and acts by WP isometries. It now follows for n tending to infinity that min ℓ_γ on p_1 T^n p_2 tends to zero. We write q_n for the minimum point for ℓ_γ. By definition of the topology of T̄, the points q_n and T^{−n} q_n limit to the cusp p. For a CAT(0) space the distance between geodesic segments is convex and achieves its maximum at endpoints. It follows that p_1 q_n tends uniformly to p_1 p and T^{−n} q_n p_2 tends uniformly to p p_2. On T local convergence of geodesics implies C¹-convergence and the desired convergence of tangents. The proof is complete. The counterpart result for the upper half plane is valid. For the cusp at infinity, consider a translation T and vertical vectors at the basepoints p_1, p_2. The vectors are approximated by the initial and terminal tangents of the geodesics p_1 T^n p_2. Generalizing the proposition for higher-dimensional Teichmüller spaces is an open question. Approximating a pair of special tangents by the initial and terminal tangents of connecting geodesics is only possible for a proper subset of tangent pairs. A special geodesic has a non-zero terminal tangent in the Alexandrov tangent cone of the terminal point in T̄ [Wol08a]. In Example 4.19 of [Wol08a], a necessary condition is provided for a pair of terminal tangents to arise from a twisting limit of connecting geodesics. The expectation is that a necessary and sufficient condition for approximating a pair of special tangents is for the terminal tangents to satisfy the condition of Example 4.19. Twisting of geodesics. We describe a piecewise-geodesic modification of a pair of geodesics γ_1, γ_2 emanating from a cusp of Teichmüller space. Given a distance parameter δ > 0, we will replace the segments of γ_1, γ_2 within δ of the cusp to obtain a concatenation of three geodesic segments with small exterior angles at the two concatenation points. Let p_j be the point of γ_j at distance δ from the cusp. From Proposition 2.2, for the twisting exponent n sufficiently large, the exterior angles at p_1 (between γ_1 and p_1 T^n p_2) and at T^n p_2 (between T^n γ_2 and p_1 T^n p_2) are sufficiently small.
Conclusion: for n sufficiently large, the concatenation of three segments (the segment of γ_1 outside the δ-ball, then p_1 T^n p_2, then the segment of T^n γ_2 outside the δ-ball) is piecewise-geodesic with sufficiently small exterior angles. Furthermore, given a smaller neighborhood of the cusp, for n sufficiently large, the small exterior angles provide that outside the smaller neighborhood the segments of p_1 T^n p_2 are C¹ sufficiently close to their corresponding segments of γ_1, γ_2. In particular for n sufficiently large, the concatenation is sufficiently close to γ_1 ∪ T^n γ_2. Piecewise-geodesics and chordal limits We construct infinite WP geodesics dense in the unit tangent bundle of the moduli space. The prescription begins with a sequence of WP singular geodesics on T with projection dense in the unit tangent bundle. We then apply twisting to construct a sequence of piecewise geodesics with small exterior angles, and consider a limit of chords for points tending to infinity. We preclude degeneration of the chords and, using a shadowing lemma, we establish the existence of a suitable limit, provided the sequence of exterior angles has small norm in the sequence space ℓ¹. The result is presented in Theorem 3.2. We begin consideration with observations about piecewise-geodesics with small exterior angles. We begin with the discussion of chords for concatenations of geodesics. The first consideration is the distance to a closed geodesic segment. The Teichmüller space T is a convex subset of a CAT(−ε), ε > 0, space and is a Riemannian surface. Accordingly, for a geodesic segment γ, closed in T, we introduce the nearest point projection Π to γ [BH99, Chap. II.2]. The fiber of Π for an interior point of γ is the complete geodesic orthogonal to γ at the basepoint of the projection. The fiber of Π for an endpoint is the domain bounded by the geodesic orthogonal to the endpoint. We recall the formula for the derivative of the distance to γ along a smooth curve σ. At a point p of σ, consider the angle θ between σ and the away-pointing fiber of the projection. The derivative of the distance d_γ to γ along the unit-speed parameterized curve σ is cos θ. We consider a (possibly infinite) concatenation C of geodesic segments with vertices in T, and write ea(C) for the finite total variation of exterior angles at vertices. We first observe, for γ a geodesic chord between points of C, the bound max |d′_γ| ≤ ea(C) for the derivative of distance to γ. The argument is as follows. The distance d_γ is convex along the segments of C with jump discontinuities for d′_γ at vertices. In particular the graph of d′_γ consists of increasing intervals and jump discontinuities. Since the cosine is 1-Lipschitz, the total negative jump discontinuity of d′_γ is at most ea(C). Consider a segment pq of γ intersecting the concatenation C only at endpoints. The function d_γ is positive on the subarc of C between p and q (it vanishes at p and q) and so has a non-negative initial derivative and non-positive final derivative. It follows for pq that the absolute total jump discontinuity of d′_γ bounds the sum of the initial value of d′_γ and the total increase in d′_γ. Conclusion: the total variation of exterior angle ea(C) bounds the first derivative of distance d_γ to a chord: max |d′_γ| ≤ ea(C). Combined with the formula for the derivative of distance, it follows for ea(C) small that the angles between C and the fibers of the projection to γ are close to π/2.
We further consider the segment pq of a chord, intersecting C only at p, q, with r a distinct third point on C between p, q, and s the projection of r to pq. We now observe, for ea(C) small, that the projection of r is an interior point of pq and that the angle between pr and rs is close to π/2. The argument is as follows. The union of pr and the subarc C_r of C between p, r forms a polygon P with n + 2 vertices, for n the number of vertices of C_r. The sum sa of interior angles of P at the vertices of C_r satisfies |sa − nπ| ≤ ea(C_r), and by Gauss-Bonnet the sum of all interior angles of P is bounded by nπ. It follows that the interior angle of P at r is bounded by ea(C_r). From the paragraph above, the angle θ̃_r between the segment of C_r containing r and rs satisfies |cos θ̃_r| ≤ ea(C_r). Combining estimates, the angle θ_r between pr and rs is bounded as |θ_r − π/2| ≤ ea(C_r) + arcsin ea(C_r). Conclusion: for ea(C_r) ≤ 1/2, we have |θ_r − π/2| < π/2 and consequently the chords pr and rq are distinct from rs. In particular r projects to an interior point of pq and rs is orthogonal to pq. The geodesic triangle ∆prs has a right angle at s and angle θ_r at r close to π/2, with the difference bounded in terms of ea(C_r). The comparison triangle in the constant curvature −ε plane has corresponding side lengths and angles at least as large [BH99, Chap. II.1]. Since the sum of triangle angles is at most π, it follows for ea(C_r) small that the comparison triangle has two angles close to π/2, with differences bounded in terms of ea(C_r). We now recall that for triangles in constant negative curvature with two angles close to π/2 the length of the included side is correspondingly bounded. In particular, for a triangle ∆abc in hyperbolic geometry with angles α, β close to π/2, the angle γ is close to zero and the length of the opposite side satisfies cosh c = (cos α cos β + cos γ)/(sin α sin β). The angle bounds combine to bound cosh c and c. Conclusion: for ea(C_r) ≤ 1/2, there is a universal bound for the distance between C and any chord, with the bound small for ea(C_r) small. In general, for a chord of a concatenation, we consider minimal segments connecting points of the concatenation and apply the above considerations to establish the following. Proposition 3.1 Let C be a concatenation of geodesics with vertices in T and total exterior angle variation ea(C) ≤ 1/2. There is a uniform bound in terms of ea(C) for the distance between a chord of C and the corresponding subarc of C. The bound for distance is small for ea(C) small. The counterpart result for concatenations with small exterior angle in the upper half plane is also valid. Main considerations We are ready to combine considerations. We begin from Theorem 2.1 with a bi-infinite sequence {γ_n} of singular (cusp-to-cusp) unit-speed geodesics of T with dense image in the unit tangent bundle of T/MCG. We will apply elements of MCG and perform twisting of geodesics to obtain a concatenation C with total exterior angle variation ea(C) appropriately small. We find that suitable chords of C limit to a bi-infinite geodesic C_∞, strongly asymptotic to C. It follows that the geodesic C_∞ is dense in the unit tangent bundle. The result is presented in Theorem 3.2. We consider the sequence {γ_n} and define the concatenation C. Select δ_1 > 0 smaller than half the length of any singular geodesic and δ_2 with δ_1 > 2δ_2 > 0. The cusps of T lie in a single MCG orbit.
Begin with the geodesic γ_1 and MCG-translate the geodesic γ_2 to arrange that the terminal point of γ_1 coincides with the initial point of the translated γ_2. Perform a suitable twisting of γ_2 with distance parameter δ_2 (see twisting of geodesics) to arrange for a concatenation with small exterior angles. Proceed inductively, using the distance parameter δ_2, to define an infinite concatenation C with: i) the sequence of exterior angles absolutely summable, and ii) all geodesic chords containing corresponding subarcs of C within δ_2 neighborhoods. The second property is realized by Proposition 3.1, for sufficiently small total exterior angle variation. From properties of twisting of geodesics, since the twisting exponents tend to infinity along C, the distance from C to the MCG-translates of {γ_n} tends to zero and the pair have coinciding accumulation sets in T/MCG. The pair also have coinciding accumulation sets in the unit tangent bundle, a consequence of the C^1-approximation for twisting of geodesics. Conclusion: the concatenation C has dense image in the unit tangent bundle of T/MCG.

We note that geodesic chords of C do not degenerate. Write B(δ) for the δ neighborhood of a particular cusp. We first show that the intersection C_0 = C ∩ B(δ_1) is a single connected segment. By construction, C_0 is within δ_2 of a segment of the concatenation of MCG-translates of {γ_n}, and by construction, C_0 is also within δ_2 of the chord connecting its endpoints. By convexity the entire chord is contained within B(δ_1). The estimates provide that the segment of the MCG-translates of {γ_n} is within 2δ_2 of the chord and within δ_1 + 2δ_2 of the cusp. The closest distinct cusps are at distance strictly greater than δ_1 + 2δ_2. It follows that the (maximal) segment within B(δ_1) of the MCG-translates of {γ_n} consists of only two geodesics. Since C and the concatenation of MCG-translates of {γ_n} coincide outside of a δ_2 neighborhood of any cusp, it follows that the concatenations agree on B(δ_1) − B(δ_2). The description of C ∩ B(δ_1) follows. The intersection of any chord of C and ∂B(δ_1/2) is contained in a δ_2 neighborhood of C_0. The intersection of a δ_2 neighborhood of C_0 and ∂B(δ_1/2) is compact, since δ_1 > 2δ_2. Conclusion: chords of C do not degenerate on Teichmüller space.

We construct a limit of chords. Let {p_n} be a sequence of points along C with p_n on γ_n. By the compactness argument of [Wol03, Sec. 7] the sequence {p_{-n} p_n} of chords has a subsequence which converges in a generalized sense (a priori the limit may be degenerate). From the above, the subsequence does not degenerate. Conclusion: a limit C_∞ of chords is a bi-infinite geodesic on Teichmüller space (in general a limit is described by a sequence of singular geodesics).

We show that the distance between C and C_∞ tends to zero at infinity. The geodesic C_∞ is the base of a nearest point projection. Consider the distance F = d(C_∞, ·) to C_∞ as a function of arc length along C. The function is piecewise convex with total variation of discontinuities of F' bounded by ea(C). Consider first that F has an infinite number of zeros. The geodesic C_∞ is approximated by chords, and so F is approximated by the distance between C and a sequence of chords. Thus between a pair of zeros of F, the function is bounded by Proposition 3.1. Between zeros, F is bounded by the sum of the absolute discontinuities of F'.
The absolute sum of discontinuities of F' between zeros of F tends to zero along C, and thus F limits to zero at infinity. Consider second that F has only finitely many zeros. Since F is bounded, 0 < F < δ_2, it follows that the total variation of F' is bounded. It follows that the terminal total variation of F' tends to zero. Equivalently stated, F' tends to a limit, which in fact is zero since F is bounded. By construction C consists of segments inside δ_2 neighborhoods of cusps, connected by geodesic segments (outside of cusp neighborhoods) of length at least 2δ_1 − 2δ_2 > 0. By the second variation of distance formula and strict negativity of curvature, there is a positive lower bound in terms of the magnitude of F for the convexity of F on the geodesic segments. Since F' has a limit along C, it follows that the convexity of F tends to zero along C. It follows that F limits to zero along C. The curves C and C_∞ are strongly asymptotic, and in fact have coinciding accumulation sets in the unit tangent bundle, since close geodesic segments have tangent fields close for proper subsegments. We have the following.

Theorem 3.2 A limit of chords between points tending respectively to positive and negative infinity along the concatenation C is an infinite geodesic, dense in the unit tangent bundle of M = T/MCG.

The corresponding considerations for the upper half plane are also valid. There are two basic matters to address to apply the approach for higher dimensional moduli spaces. The first is that the strict convexity of CAT(−ε) geometry is replaced by the convexity of CAT(0) geometry. For the latter geometry, there is no bound for distance only in terms of exterior angle; an alternative to Proposition 3.1 is needed. The second is the non-trivial condition for geodesics emanating from a cusp to be approximated by twisting geodesics; see the comments above preceding the twisting of geodesics. Addressing these matters is an open question.

Recurrence, volume growth, horseshoes, and closed geodesics

The main result of the section is that the moduli space M of once punctured tori contains infinitely many WP closed geodesics, whose number grows exponentially in length. Although by a comparison of metrics the result follows from the corresponding result for the Teichmüller metric, we present an argument not involving the Teichmüller geometry. We begin by studying the recurrence of the geodesic flow (GF) φ_T : SM → SM on the unit tangent bundle. A vector v ∈ SM is recurrent if for every neighborhood U of v there exist arbitrarily large T > 0 such that φ_T v ∈ U.

Lemma 4.1 Almost every vector in SM is recurrent.

Proof. Wolpert [Wol83] showed that WP Area(M) = π²/12 < ∞ and thus Vol(SM) < ∞. (Mirzakhani's general recursion [Mir07] determines all values Vol(M_{g,n}).) The GF preserves the Liouville measure and from the Poincaré recurrence theorem it follows that for any open set U ⊂ SM we can find a T > 0 with φ_T U ∩ U ≠ ∅. The result follows.

The topological entropy h of a complete geodesic flow on a compact surface is a measure of the exponential rate of divergence of nearby geodesics. The entropy h provides one measure of the complexity of the global geodesic structure. Another measure of complexity is the exponential growth rate of closed geodesics h_CG, defined as the exponent coefficient for the exponential growth rate in T of the number of closed geodesics with length ≤ T.
A third measure is the volume entropy h_{V,x}, defined as the limit lim_{r→∞} (1/r) log Area(B(x, r)), where B(x, r) denotes the ball of radius r around the point x in T. For a compact surface with everywhere negative curvature, the geodesic flow is Anosov [Ano69] and the three entropies coincide, i.e. h = h_CG = h_{V,x} for every x [Bow72, Man79]. We will see that the topological entropy for the GF is positive while the volume entropy is infinite. For a compact surface, a geodesic flow with positive topological entropy contains a horseshoe [Kat80]. The same statement holds for the GF restricted to a compact invariant subset. Although a horseshoe in SM could have zero volume, the dynamics of the GF on the horseshoe would be chaotic. In particular the horseshoe would contain infinitely many closed geodesics, whose number would grow exponentially in length. We show the following.

Proposition 4.2 The moduli space M contains infinitely many closed geodesics, whose number grows exponentially in length.

Since the flow φ_T : SM → SM is not complete, there is no natural definition of topological entropy. We will describe a compact, invariant subset of SM on which the GF has positive topological entropy. This will imply h_CG > 0 and the desired conclusion.

Proof. For ε > 0, let SM(ε) be the collection of unit tangent vectors whose corresponding geodesics never enter the ε-neighborhood of the cusp. The set SM(ε) is compact and φ_T : SM(ε) → SM(ε) is complete. Since the tangent vectors in SM(ε) only visit regions of uniformly bounded negative curvature, SM(ε) is a non-maximal hyperbolic set for the GF [BS02]. In general a Markov partition provides a symbolic model for a smooth flow as a special flow over a subshift of finite type [BS02]. Bowen showed that every locally maximal, hyperbolic set has a Markov partition [Bow73]. More recently, Fisher [Fis06] showed that every hyperbolic set has arbitrarily small enlargements on which the flow has a Markov partition.¹ For the WP this implies that for each sufficiently small ε > 0, there is a compact hyperbolic enlargement V(ε) ⊃ SM(ε), which stays away from the cusp, and for which φ_T : V(ε) → V(ε) has a Markov partition. The flow φ_T : V(ε) → V(ε) has positive topological entropy provided that the associated subshift of finite type has positive topological entropy [Abr59]. The subshift of finite type is defined by an adjacency matrix: a square matrix having non-negative entries. The structure theorem for adjacency matrices [LM95] yields that either the subshift has only finitely many periodic points or has positive topological entropy (and thus infinitely many periodic points, whose number grows exponentially in the period). It follows that in V(ε), either the GF has only finitely many closed geodesics or contains infinitely many closed geodesics, whose number grows exponentially in length. The following geometric lemma provides that some V(ε) contains infinitely many simple closed geodesics. The lemma completes the proof of Proposition 4.2.

Proof. Choose a torsion-free subgroup Γ of MCG with T/Γ having a non-trivial, non-peripheral simple closed curve. From the classification of surfaces, the surface T/Γ has infinitely many distinct non-peripheral free homotopy classes of simple closed curves. From the classification of elements of MCG and the WP axis theorem [Wol03, Theorem 25], each non-trivial, non-peripheral free homotopy class contains a closed geodesic.
From the minimal intersection property for negative curvature, each representing geodesic is simple. We observe that simple WP geodesics are uniformly bounded away from cusps and so are contained in a compact subset of T/Γ. The lift of a simple geodesic to T is disjoint from its Dehn twist translates. In FN coordinates (ℓ, τ) and for n_0 the smallest exponent with T^{n_0} ∈ Γ, a simple geodesic has a lift intersecting a neighborhood of the cusp {ℓ = 0} and contained entirely in the sector {0 < τ < 3n_0 ℓ}. The lift intersects {ℓ = 1} in the compact set {ℓ = 1, 0 < τ < 3n_0}. For a CAT(−ε) geometry, geodesic segments depend continuously on endpoints. The geodesics intersecting the compact set are bounded away from {ℓ = 0}, as desired. The proof is complete.

Finally, we note that the volume entropy h_{V,x} is infinite for every point. This holds since the area of a sufficiently large WP ball in T is infinite. A sufficiently large ball contains a cusp point and the WP area of a neighborhood in T of a cusp point is infinite. In particular the WP area form is ω = (1/2) dℓ ∧ dτ and a small metric neighborhood of a cusp point is parameterized in FN coordinates by 0 < ℓ < ε, −∞ < τ < ∞.
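The final observation amounts to a one-line computation in the Fenchel-Nielsen coordinates just used; writing it out (with ε the coordinate bound of the cusp neighborhood):

```latex
% WP area of the cusp neighborhood 0 < \ell < \epsilon, -\infty < \tau < \infty:
\[
  \int_{-\infty}^{\infty}\!\!\int_{0}^{\epsilon} \tfrac12\, d\ell\, d\tau \;=\; \infty ,
\]
% so a sufficiently large ball B(x,r) has infinite area and the volume entropy satisfies
\[
  h_{V,x} \;=\; \lim_{r\to\infty}\frac1r \log \operatorname{Area}\bigl(B(x,r)\bigr)
  \;=\; \infty \qquad\text{for every } x .
\]
```

Separately, the counting step in the proof of Proposition 4.2 rests on the standard periodic-orbit count for a subshift of finite type: the number of points fixed by the n-th shift is the trace of the n-th power of the adjacency matrix, and it grows exponentially once the spectral radius exceeds 1. A toy illustration follows; the matrix is illustrative only and is not derived from an actual Markov partition of V(ε).

```python
# Periodic-point counts for a subshift of finite type: trace(A^n) counts the points of
# period n, and grows like rho(A)^n once the spectral radius rho(A) exceeds 1.
import numpy as np

A = np.array([[1, 1],
              [1, 0]])                    # illustrative adjacency matrix (golden-mean shift)
rho = max(abs(np.linalg.eigvals(A)))      # spectral radius; log(rho) is the entropy
print(f"spectral radius {rho:.4f}, topological entropy {np.log(rho):.4f}")
for n in range(1, 9):
    fixed = int(np.trace(np.linalg.matrix_power(A, n)))   # periodic points of period n
    print(f"period {n}: {fixed} periodic points (rho^n ≈ {rho**n:.1f})")
```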
Clinical and translational research in Pneumocystis and Pneumocystis pneumonia*

Pneumocystis pneumonia (PcP) remains a significant cause of morbidity and mortality in immunocompromised persons, especially those with human immunodeficiency virus (HIV) infection. Pneumocystis colonization is described increasingly in a wide range of immunocompromised and immunocompetent populations, and associations between Pneumocystis colonization and significant pulmonary diseases such as chronic obstructive pulmonary disease (COPD) have emerged. This mini-review summarizes recent advances in our clinical understanding of Pneumocystis and PcP, describes ongoing areas of clinical and translational research, and offers recommendations for future clinical research from researchers participating in the "First centenary of the Pneumocystis discovery".

INTRODUCTION

The year 2009 marked the 100th anniversary of the first description of Pneumocystis by Carlos Chagas. Over the past 100 years, significant advances have been made in our understanding of both Pneumocystis and Pneumocystis pneumonia (PcP). The preeminence of PcP as a herald of the human immunodeficiency virus (HIV) / acquired immune deficiency syndrome (AIDS) epidemic and as a major cause of HIV-associated morbidity and mortality focused attention and resources on this previously uncommon opportunistic pneumonia. With the use of combination antiretroviral therapy to treat underlying HIV infection, the incidence of PcP has declined, but an appreciation of Pneumocystis colonization in both immunocompromised and immunocompetent populations, and associations between Pneumocystis colonization and significant pulmonary diseases such as chronic obstructive pulmonary disease (COPD), have emerged. This mini-review summarizes recent advances in our clinical understanding of Pneumocystis and PcP, describes ongoing areas of clinical and translational research, and offers recommendations for future clinical research from researchers participating in the "First centenary of the Pneumocystis discovery" held in Brussels, Belgium on November 5-6, 2009.

EPIDEMIOLOGY OF PCP

PcP is a frequent AIDS-defining diagnosis. At its peak, PcP was an AIDS-defining diagnosis for greater than 20,000 new AIDS cases per year in the US (Centers for Disease Control and Prevention, 1990-1993). Although the incidence of PcP has decreased in the current era of combination antiretroviral therapy, PcP remains a leading cause of AIDS in North American and European cohorts (Mocroft et al., 2009). In the multinational Antiretroviral Therapy Cohort Collaboration (ART-CC) established in 2000, PcP was the second most frequent AIDS-defining event after esophageal candidiasis (Mocroft et al., 2009). Therefore, continued efforts to improve our understanding of both Pneumocystis and PcP are warranted (Table I). PcP is an important cause of HIV-associated pneumonia but rates of PcP have decreased. At San Francisco General Hospital, the Division of Pulmonary and Critical Care Medicine has tracked confirmed cases of PcP - diagnosed by microscopic visualization of characteristic Pneumocystis cysts and/or trophic forms obtained from sputum induction or bronchoscopy - since 1990 (Fig. 1). At its peak in 1992, nearly 300 cases of HIV-associated PcP were diagnosed at this institution (Huang et al., 1995). Today, this number has decreased to 20-30 cases per year.
Most of these cases occurred in persons who were not receiving antiretroviral therapy or PcP prophylaxis and many were actually unaware of their HIV infection at the time of presentation (Fei et al., 2009). This experience is similar at other institutions, where 23-31 % of reported PcP cases occurred in persons who were newly diagnosed with HIV infection at the time of PcP (Fei et al., 2009a; Radhi et al., 2008; Walzer et al., 2008). Thus, increased efforts to test persons at risk of HIV, to engage HIV-infected persons in regular medical care, and to initiate and adhere to combination antiretroviral therapy and PcP prophylaxis are important strategies to decrease the incidence of the disease. Comprehensive reviews document that HIV-associated PcP is reported throughout the world, at varying rates (Davis et al., 2007; Fisk et al., 2003). However, data on the current rates of PcP in regions of the world that bear the greatest burden of HIV are limited. The scarcity of diagnostic and microbiologic tools to diagnose the disease is an important reason for the limited data on PcP rates. Thus, those institutions that possess these tools offer valuable windows into the epidemiology of PcP in low- and middle-income settings. At Mulago Hospital in Kampala, Uganda, the frequency of PcP among HIV-infected persons hospitalized with suspected pneumonia who have negative sputum acid fast bacilli (AFB) smears and undergo bronchoscopy has decreased from nearly 40 % of bronchoscopies to less than 10 % (Worodria et al., 2003; Worodria et al., 2010). Yet, the mortality associated with PcP remains high. Thus, efforts to improve both diagnostic and microbiologic capacity in low- and middle-income settings and to establish surveillance networks to track PcP cases are important clinical care and epidemiologic resources that should be developed. Although the populations of non-HIV immunosuppressed persons are rising with the increased use of immunosuppressive or immunomodulating therapies to treat a wide spectrum of medical illnesses, data on the frequency of PcP among these populations are limited. As such, consensus on optimal diagnostic, therapeutic, and preventative strategies lags behind that for HIV-infected populations. Similar to recommendations for tracking PcP in low- and middle-income settings, collaborative networks of institutions that care for substantial numbers of these non-HIV individuals (similar to those established for HIV-infected persons) should be created.

Epidemiology
1. What is the current incidence of PcP in HIV-infected populations?
   a. What is the incidence in high-income countries, including the US and Western Europe, where access to combination antiretroviral therapy is generally widely available?
      i. In these countries, which populations continue to develop PcP?
      ii. Strategies to identify persons at risk for HIV and PcP need to be refined and preventative measures need to be implemented.
   b. What is the incidence in low- and middle-income countries, where access to combination antiretroviral therapy is generally more limited?
      i. In these countries, which populations develop PcP?
      ii. Strategies to identify persons at risk for HIV and PcP need to be refined and preventative measures need to be implemented.
      iii. Efforts to improve diagnostic and microbiologic capacity are needed.
      iv. Surveillance networks to track PcP cases should be developed.
2. What is the current incidence of PcP in non-HIV, immunocompromised populations?
   a. What is the incidence in populations immunocompromised from "traditional" immunosuppressive agents (e.g., glucocorticoid medications) and disease therapies (e.g., therapies for hematologic malignancy and cancer and after hematopoietic stem cell transplantation)?
   b. What is the incidence in populations immunocompromised from "newer" biologic, immunomodulating agents (e.g., tumor necrosis factor-alpha inhibitors)?
   c. Surveillance networks to track PcP cases should be developed.

PCP DIAGNOSIS

Classically, PcP presents with fevers, non-productive cough, and progressive shortness of breath. Chest radiograph demonstrates bilateral, symmetric reticular (interstitial) or granular opacities. Traditionally, bronchoscopy with bronchoalveolar lavage (BAL) is regarded as the gold standard procedure to diagnose PcP in HIV-infected persons, with diagnostic sensitivity ≥ 98 % reported (Huang et al., 1995). An early study reported a lower number of Pneumocystis organisms and a lower sensitivity of BAL for PcP in non-HIV immunocompromised persons compared to HIV-infected persons (Limper et al., 1989). More recently, BAL combined with sensitive laboratory techniques (e.g., immunofluorescence testing, polymerase chain reaction, PCR) has been reported as sensitive to diagnose PcP in these non-HIV immunocompromised populations (Azoulay et al., 2009). Since Pneumocystis DNA can be detected by PCR assay in the absence of clinical or microbiological pneumonia (Davis et al., 2007), the increased sensitivity of PCR-based assays may be offset by a decreased specificity for PcP. Pneumocystis cannot be cultured. Historically, the diagnosis of PcP has involved an invasive pulmonary procedure (i.e., bronchoscopy) to obtain specimens combined with a basic laboratory test (i.e., microscopic examination of stained respiratory specimens) to visualize the cysts and/or trophic forms. However, bronchoscopy requires specialized personnel, rooms, and equipment, and it is also expensive and carries an associated risk of complications. Thus, bronchoscopy is limited in its availability throughout many areas of the world that are burdened with HIV/AIDS. Laboratory advances (e.g., PCR), however, have revolutionized the diagnosis of many infectious diseases, and PCR assays for P. jirovecii have been developed. These factors led researchers to examine whether the use of an advanced laboratory test (e.g., PCR) could be combined with a non-invasive pulmonary procedure to effectively diagnose PcP. Oral wash or oropharyngeal wash (OPW; i.e., gargle) specimens combined with PCR assays have been examined as non-invasive tests to diagnose PcP. Two studies from San Francisco General Hospital that tested three different PCR-based assays found a diagnostic sensitivity of OPW up to 88 % and a specificity up to 90 % (de Oliveira et al., 2007; Larsen et al., 2004). Procedural factors such as collecting the OPW specimen within one day of PcP treatment initiation and having the patient cough vigorously prior to specimen collection may increase the sensitivity of the procedure. Studies are being conducted to validate these findings in both high- and low-income settings. Blood-based assays have also been studied for diagnosis of PcP. A series of studies from New York indicate that plasma S-adenosylmethionine (SAM) levels could be used to distinguish between persons with PcP and those with non-PcP pneumonia and healthy controls (Skelly et al., 2003; Skelly et al., 2008). More recently, serum (1-3)-beta-D-glucan has shown promise as a test for PcP.
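The diagnostic performance figures quoted in this section (for OPW PCR above and for the blood-based assays below) come from 2x2 comparisons of the index test against a reference diagnosis. A minimal sketch of that calculation follows; the counts are hypothetical, chosen only to give values of the same order as the OPW studies cited above.

```python
# Sensitivity and specificity from a 2x2 diagnostic table (index test vs. reference
# diagnosis).  The counts below are hypothetical and chosen only to give values of the
# same order as the OPW PCR studies cited above (~88 % sensitivity, ~90 % specificity).
def sensitivity_specificity(tp, fp, fn, tn):
    """Return (sensitivity, specificity) from true/false positive and negative counts."""
    sensitivity = tp / (tp + fn)   # proportion of reference-positive cases detected
    specificity = tn / (tn + fp)   # proportion of reference-negative cases called negative
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=44, fp=10, fn=6, tn=90)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")   # 88 %, 90 %
```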
Studies from 2009 reported that serum (1-3)-beta-D-glucan had a sensitivity of 100 % and a specificity of 96.4 % (using a cutoff of 100 pg/mL), that the assay may be more useful for PcP diagnosis than for monitoring response to treatment, that levels differ between patients with PcP and those who are PcP-negative but colonized with Pneumocystis jirovecii, and that the detection rate is lower in non-HIV PcP patients than in HIV-associated PcP patients (Desmet et al., 2009; Nakamura et al., 2009; Shimizu et al., 2009; Watanabe et al., 2009). Studies to determine which non-invasive test has the best performance characteristics in different immunocompromised populations at risk for PcP should be done.

PCP TREATMENT AND PREVENTION AND PUTATIVE TRIMETHOPRIM-SULFAMETHOXAZOLE DRUG RESISTANCE

Trimethoprim-sulfamethoxazole is the recommended first-line treatment for PcP in HIV-infected patients with mild, moderate, and severe PcP and also in non-HIV patients. Alternative regimens include intravenous pentamidine, clindamycin plus primaquine, trimethoprim plus dapsone, and atovaquone suspension. Adjunctive corticosteroids are recommended for patients with moderate to severe PcP as demonstrated by an arterial oxygen tension (PaO2) less than 70 mm Hg or an alveolar-arterial oxygen gradient (A-a O2) greater than 35 mm Hg. In HIV-infected patients, the recommended duration of treatment is 21 days, while it is usually 14 days in non-HIV patients. However, a substantial proportion of individuals cannot complete a full course of trimethoprim-sulfamethoxazole due to treatment-limiting toxicity or are switched to an alternate treatment regimen due to perceived treatment failure (Fisk et al., 2009). Although there are only limited data from prospective, randomized clinical trials comparing second-line PcP treatments, a tri-center observational study and a systematic review suggest that the combination of clindamycin plus primaquine is an effective alternative to intravenous pentamidine as second-line PcP treatment (Benfield et al., 2008; Helweg-Larsen et al., 2009). The development of new PcP treatment options and prospective, randomized controlled trials comparing second-line PcP treatments are both important needs. Trimethoprim-sulfamethoxazole is also the recommended first-line prevention for primary and secondary prophylaxis against PcP. However, the widespread use of this medication for PcP prophylaxis has been associated with increases in trimethoprim-sulfamethoxazole-resistant bacteria and has raised concerns over potential trimethoprim-sulfamethoxazole drug resistance in P. jirovecii. Trimethoprim-sulfamethoxazole drug resistance might also result in resistance to trimethoprim plus dapsone (a sulfone), thereby further limiting the therapeutic options available to treat PcP. The inability to culture P. jirovecii has hindered efforts to document drug resistance in Pneumocystis, but researchers have explored this question by examining genetic mutations within the dihydrofolate reductase (DHFR) and dihydropteroate synthase (DHPS) genes, the enzymatic targets of trimethoprim and sulfa (sulfamethoxazole and dapsone) medications, respectively. In other micro-organisms, DHFR and DHPS mutations have been shown to cause drug resistance.
Although DHFR mutations are infrequently found, non-synonymous DHPS mutations are found in up to 81 % of HIV-infected patients with PcP, and the use of sulfa medications for PcP prophylaxis is strongly associated with the presence of these mutations (Crothers et al., 2005; Huang et al., 2000). DHPS mutations continue to be reported worldwide and there are geographic differences in the observed frequency of mutations (Alvarez-Martinez et al., 2008; Beard et al., 2000; Crothers et al., 2005; Esteves et al., 2008; Huang et al., 2000; Magne et al., 1989; Rabodonirina et al., 2006; Wissmann et al., 2006). Furthermore, the presence of DHPS mutations has been associated with increased mortality in one study and increased risk of trimethoprim-sulfamethoxazole PcP treatment failure in a second study, although other studies have failed to find these associations and instead have reported that risk factors such as low serum albumin and early ICU admission were stronger predictors of PcP mortality (Crothers et al., 2005; Helweg-Larsen et al., 1999; Kazanjian et al., 2000). At present, there exists a seeming paradox regarding the clinical significance of DHPS mutations. Studies consistently report that the majority of patients with PcP and DHPS mutations who are treated with trimethoprim-sulfamethoxazole respond to this treatment (Crothers et al., 2005; Helweg-Larsen et al., 1999; Kazanjian et al., 2000; Navin et al., 2001). However, patients with DHPS mutations who are treated with trimethoprim-sulfamethoxazole tend to have worse outcomes compared to those with wild-type DHPS who are treated with trimethoprim-sulfamethoxazole and those with DHPS mutations who are treated with a non-sulfa-based regimen (Crothers et al., 2005). The precise explanation for these observations remains a focus of current investigation.

PCP MORTALITY AND INTENSIVE CARE

Despite differences in geography and demographic characteristics, in-hospital mortality among HIV-infected patients with PcP is similar (ranging from 10.3 to 13.5 %) in different cohort studies from Los Angeles, London, and San Francisco (Fei et al., 2009b; Radhi et al., 2008; Walzer et al., 2008). Each of these studies also reported on predictors of mortality. In the Los Angeles study of 262 HIV-infected patients diagnosed with PcP from January 2000 through December 2003, need for mechanical ventilation, development of a pneumothorax, and low serum albumin were independent predictors of increased mortality (Radhi et al., 2008). In the London study of 494 consecutive HIV-infected patients with 547 episodes of laboratory-confirmed PcP from June 1985 through June 2006, increasing patient age, subsequent episode of PcP, low hemoglobin level, low partial pressure of oxygen breathing room air, presence of medical comorbidity, and pulmonary Kaposi sarcoma were independent predictors associated with increased mortality (Walzer et al., 2008). Mortality was comparable during the periods from June 1985 through December 1989, January 1990 through June 1996, and July 1996 through June 2006 (p = 0.14). Finally, in the San Francisco study of 451 consecutive HIV-infected patients diagnosed with 524 episodes of microscopically-confirmed PcP from January 1997 through December 2006, increasing patient age, recent injection drug use, increased total bilirubin, decreased serum albumin, and increased alveolar-arterial oxygen gradient were independent predictors of increased mortality (Fei et al., 2009b).
Using these five predictors, a six-point PcP mortality prediction rule was derived that stratified patients according to increasing risk of mortality: score 0-1, 4 % mortality; score 2-3, 12 % mortality; and score 4-5, 48 % mortality. Studies are being conducted to validate these single-institution findings in other cohorts. A few institutions have tracked PcP in HIV-infected patients requiring critical care in the Intensive Care Unit (ICU). Among these institutions, a study from San Francisco General Hospital in the current combination antiretroviral therapy era reported that critically ill HIV-infected patients with PcP who received combination antiretroviral therapy had a significantly lower mortality compared to patients who did not receive antiretroviral therapy (Morris et al., 2003). The use of combination antiretroviral therapy was an independent predictor that was associated with lower mortality (odds ratio, OR = 0.14; 95 % confidence interval, CI = 0.02-0.84; p = 0.03). A study from London, however, failed to find a mortality difference associated with combination antiretroviral therapy (Miller et al., 2006). In this study, mortality improved from 71 % before mid-1996 to 34 % after mid-1996 (p = 0.008). This improvement in mortality was ascribed to general improvements in care, as no patients were started on combination antiretroviral therapy. In the absence of data from randomized clinical trials, the pros and cons of initiating antiretroviral therapy in critically ill HIV-infected ICU patients have been debated. Recently, the National Institutes of Health (NIH)-funded AIDS Clinical Trials Group (ACTG) published its results of a prospective, multicenter randomized clinical trial of HIV-infected non-ICU inpatients hospitalized with an acute opportunistic infection (Zolopa et al., 2009). Overall, 282 subjects were enrolled and 63 % had PcP. The study found no difference in its primary endpoint. However, subjects randomized to the early antiretroviral therapy arm had fewer AIDS progressions/deaths (OR = 0.51, 95 % CI = 0.27-0.94, p = 0.035) and a longer time to AIDS progression/death (stratified Hazard Ratio, HR = 0.53, 95 % CI = 0.30-0.92). Importantly, there was no increase in adverse events in subjects randomized to early antiretroviral therapy in this study, but cases of severe immune reconstitution inflammatory syndrome (IRIS, also called immune reconstitution syndrome, IRS, and immune reconstitution disease, IRD) resulting in respiratory failure and requiring invasive mechanical ventilation have been reported and serve as a cautionary note (Jagannathan et al., 2009). Whether the ACTG results from hospitalized but non-critically ill patients can be extrapolated to the ICU, where patients are often receiving mechanical ventilation and have renal and/or hepatic insufficiency, is an important but largely unanswered question (Huang et al., 2006).

PNEUMOCYSTIS COLONIZATION AND POTENTIAL PERSON-TO-PERSON TRANSMISSION

The presence of Pneumocystis organisms or P. jirovecii DNA detected in the absence of PcP has been termed Pneumocystis colonization (also called carriage and sub-clinical infection). Pneumocystis colonization has been increasingly reported and documented to occur in infants and children, pregnant women, immunocompetent adults with underlying pulmonary disease, non-HIV immunocompromised individuals, and HIV-infected persons.
For example, in one study of 58 infants < 1 year of age who died, Pneumocystis colonization was detected in 100 %. HIV-infected inpatients hospitalized with non-PcP pneumonia also appear to have a high prevalence of Pneumocystis colonization, with one study reporting a prevalence of 68 % (Davis et al., 2008). Colonization with Pneumocystis has also been associated with airways obstruction and chronic obstructive pulmonary disease (COPD) and possibly with other pulmonary diseases as well (Vidal et al., 2006). In one study, Pneumocystis colonization was detected in 36.7 % of HIV-negative patients with very severe COPD (Global Initiative for Chronic Obstructive Lung Disease [GOLD] Stage IV) compared with 5.3 % of smokers with normal lung function or less severe COPD (GOLD Stages 0, I, II, and III) (p = 0.004) and with 9.1 % of control subjects (p = 0.007). Pneumocystis-colonized subjects exhibited more severe airway obstruction (median FEV1 = 21 % predicted vs 62 % predicted in non-colonized subjects, p = 0.006). In a second study, patients colonized with Pneumocystis had higher proinflammatory cytokine levels than did those patients without evidence of Pneumocystis colonization (Calderon et al., 2007). Finally, HIV-infected outpatients who were colonized with Pneumocystis had worse airway obstruction and higher sputum matrix metalloprotease-12 levels, suggesting that Pneumocystis colonization may be important in HIV-associated COPD. Studies to further evaluate the role of Pneumocystis colonization in the development and the progression of HIV-associated COPD are ongoing. Although the clinical significance of Pneumocystis colonization remains to be elucidated completely, important insights can be gained from studies in laboratory animals. An animal model has been developed to study Pneumocystis colonization in immunocompetent and simian immunodeficiency virus (SIV)-infected cynomolgus macaques and the potential role of Pneumocystis colonization in COPD (Kling et al., 2009). Animal studies have also been used to study disease transmission. These studies demonstrate that animals are a reservoir for the specific Pneumocystis species that affects them, that animals carrying Pneumocystis develop PcP after being immunosuppressed (reactivation), and that immunocompromised Pneumocystis-free animals develop PcP after exposure to not only immunocompromised animals infected with Pneumocystis but also immunocompetent animals that are colonized with Pneumocystis. Animal studies also demonstrate that animal-to-animal transmission occurs and occurs via an airborne route. Person-to-person transmission of Pneumocystis via an airborne (aerosol) route has been suggested. However, a precise understanding of the factors involved in disease transmission remains unclear and there are no universal recommendations for the respiratory isolation of persons with active PcP. Even if such recommendations became standard practice, disease transmission could hypothetically occur via persons who are colonized with Pneumocystis, as is the case in animal studies. Molecular epidemiology and serology studies to examine potential person-to-person transmission are ongoing.

CONCLUSIONS

Over the past 100 years, significant advances have been made in our understanding of both Pneumocystis and Pneumocystis pneumonia (PcP).
This mini-review described recent advances in our understanding of Pneumocystis and PcP, ongoing areas of clinical and translational research, and offered recommendations for future clinical research from researchers participating in the First centenary commemorative Conference on the discovery of Pneumocystis. The attendees of the conference sincerely hope that the next 100 years bring continued advances in our understanding of the organism and the disease.
Relationship between Anthropometric Parameters and Sensory Processing in Typically Developing Brazilian Children with a Pediatric Feeding Disorder

In this study, we aimed to relate anthropometric parameters and sensory processing in typically developing Brazilian children diagnosed with a pediatric feeding disorder (PFD). This was a retrospective study of typically developing children with a PFD. Anthropometric data were collected and indices of weight-for-age, length/height-for-age, and body mass index-for-age (BMI-for-age) were analyzed as z-scores. Sensory profile data were collected for auditory, visual, tactile, vestibular, and oral sensory processing. We included 79 medical records of children with a PFD. There were no statistically significant (p > 0.05) relationships between the anthropometric variables (weight-, length/height-, or BMI-for-age) and the sensory variables (auditory, visual, tactile, vestibular, or oral sensory processing). In conclusion, we found no relationship between anthropometric parameters and sensory processing in the sample of typically developing Brazilian children diagnosed with a PFD under study.

Introduction

A pediatric feeding disorder (PFD) is identified when a child has impaired oral intake that is not age-appropriate, which can be associated with multiple causal factors including medical, nutritional, oral-sensory-motor, and/or psychosocial dysfunctions [1]. These dysfunctions can be due to medical causes, including gastrointestinal, cardiorespiratory, and/or neurological problems, or nutritional causes, including malnutrition or a deficiency or restriction of a specific nutrient. Children with an oral-sensory-motor dysfunction and a PFD require adjustments and modifications of food textures in order to be able to eat, as well as positioning strategies or the use of specific feeding utensils. In addition, these children may present psychosocial dysfunctions caused by food aversion behaviors at mealtime, which, according to Berlin et al. [2] and Murphy et al. [3], contribute to parental stress and the use of inadequate feeding strategies that could be avoided if parents were instructed earlier on [1,4]. The prevalence of PFD has not been determined exactly, but is estimated to be from 25% to 30% among typically developing children and approximately 80% among children with neurological problems. However, previous studies have indicated that the percentage is growing every year among typically developing children and in other populations, such as autistic, premature, and hyperactive children with attention deficit disorder, among others [4][5][6]. This growing trend is likely due to a true increase in the prevalence and awareness of PFD. A child's growth and development constitute a complex, multifactorial process that is also influenced by nutritional aspects. Children with a PFD may experience deficits in energy or nutritional intake associated with significant weight loss or growth failure, significant nutritional deficiency, dependence on enteral feeding or oral nutritional supplements, or marked interference with psychosocial functioning [7][8][9][10]. This may lead to a more vulnerable nutritional status which predisposes the child to disease, nutritional deficiencies, and/or metabolic disorders in the short, mid, and long term, which is one of the main concerns of both parents and caregivers [10][11][12].
Recently, a study analyzed the association between food selectivity and child growth and concluded that although food and nutritional intake differed between groups with and without a feeding disorder, a consistent relationship between food selectivity and child growth could not be verified [13]. Another study sought to associate selective behavior and the impact on nutrition and development with alterations in sensory perceptions that are commonly present in children with a PFD. These studies suggested that the impression people get from the sensory properties of foods plays a very important role in the way they select their food and how much they eat [14][15][16]. Schaaf and Roley [17] defined sensory processing as the means by which the central nervous system detects, modulates, discriminates, integrates, and organizes sensory stimuli and the response to sensory input itself. Eight senses make up the sensory systems: tactile, vestibular, proprioceptive, olfactory, gustatory, visual, auditory, and interoceptive [18]. The integration of information allows us to use our bodies effectively in an environment. Therefore, changes in sensory processing can manifest as an inappropriate response to this mechanism and/or as a compromised organization of sensory information. These dysfunctions can potentially have detrimental effects on typically developing children by compromising their participation in daily routines and functional activities, such as eating [19]. Eating is a sensory act. The sensory properties of food (taste, smell, texture, color, and appearance) affect our choices, as well as how much we eat [15]. Studies of children with an altered sensory profile in other populations, such as children with autism spectrum disorder, have shown that they may exhibit selective, avoidant, or preferential behavior toward certain foods based on their sensory characteristics or specific brands, and may develop problems such as weight loss, adding to the frustration of their parents and family [20][21][22][23]. Navarrete-Muñoz et al. [24] reported that the interest in exploring the relationship between eating and sensory processing is due to the fact that the eating process involves the integration of sensory domains which trigger individual responses to food characteristics, and that early childhood is a period during which food preferences and/or aversions are established. More evidence is needed to understand the nature of these relationships, mainly in the population of typically developing children with a PFD. Therefore, the purpose of this study was to relate anthropometric parameters and the sensory profiles of typically developing Brazilian children with a PFD. The hypothesis of the study was that altered sensory processing in the study sample may be an interfering factor in the anthropometric data. Notably, investigating the association between sensory processing disorders and nutritional status in typically developing children may shed light on how to prevent or minimize the harmful consequences to children's health and contribute positively to the diagnosis and management of PFDs.

Ethical Criteria

This study was approved by the Research Ethics Committee under protocol number 08120218.7.0000.5505 and followed all ethical criteria to be compliant with current legislation.
Design

This was a retrospective clinical study of the medical records of typically developing children who received care at the Children Development Institute in São Paulo, Brazil, who were examined between 2018 and 2020.

Participants

For this study, we included and examined the medical records of typically developing Brazilian children of a high socio-economic profile with the characteristics of a PFD [1]. We applied the following selection criteria:
(A) Children who had shown an inappropriate oral intake of nutrients for their age for at least three months that was associated with one or more of the following items:
   a. Medical dysfunction evidenced by one of the following:
      • Cardiorespiratory impairment during oral ingestion.
   b. Nutritional dysfunction evidenced by one of the following:
      • Specific nutritional deficiency or significantly restricted intake of one or more nutrients resulting from a reduction in dietary variety.
   c. Feeding skill dysfunction evidenced by one of the following:
      • Need to adapt the texture of liquids or food;
      • Certain position or equipment used to adapt food intake;
      • Use of strategies to adapt food intake.
   d. Psychosocial dysfunction evidenced by one of the following:
      • Active or passive escape behavior when the child is eating or being fed;
      • Caregiver's inappropriate management of the child's food intake and/or nutritional needs;
      • Disrupted social functioning in the context of food;
      • Disrupted child-caregiver relationship associated with food.
(B) Absence of cognitive processes consistent with eating disorders.
Children with neurological disorders and children under seven months and over 36 months of age were excluded from the sample.

Dynamics of Care

All participants in this study were treated at the same center dedicated to the care of children with feeding problems. This center is staffed by a group of professionals, including a nutritionist, occupational therapist, speech therapist, and psychologist. All of the medical records of the children included in this study covered the same protocol for the dynamics of care and contained data that were collected in an initial interview. Complete anthropometric data and sensory profiles were available for all selected children.

Anthropometric Parameters

Prior to the first evaluation at the institute, parents were asked to provide important nutritional information including a detailed food record, food inventory, recent biochemical tests (when available), and updated growth charts extracted from medical records. We also collected weight-for-age, length/height-for-age, and BMI-for-age data. We entered the gathered anthropometric data into WHO Anthro software [25], which calculated the indices and plotted the results. We used the graphs recommended by the World Health Organization (WHO) for boys and girls between the ages of 0-5 years [25]. The cutoff points used in the different curves are represented as z-scores, which indicate units of standard deviation from the median value (z-score of 0). For premature patients who were admitted to care at under two years of age and for extremely premature infants (gestational age < 28 weeks) admitted at under three years of age, the WHO curve was adjusted for chronological age. The cutoff points and nomenclature adopted for each z-score range followed the WHO recommendations [25]. We then classified the data as either age-appropriate or not age-appropriate.
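WHO Anthro derives these indices with an LMS-type transformation, the convention underlying the WHO growth standards; a minimal sketch follows. The L, M and S values below are placeholders rather than WHO reference values, the ±2 cutoff is purely illustrative, and the additional adjustments the WHO software applies to extreme weight-based z-scores are omitted.

```python
# Minimal sketch of the z-score computation behind the WHO indices used above.
# z = ((X/M)**L - 1) / (L*S) for reference parameters L (skewness), M (median) and
# S (coefficient of variation); the values used here are placeholders, not WHO values.
from math import log

def lms_zscore(x: float, L: float, M: float, S: float) -> float:
    """Return the z-score of a measurement x for given LMS reference parameters."""
    if L == 0:
        return log(x / M) / S          # limiting case of the Box-Cox transform
    return ((x / M) ** L - 1.0) / (L * S)

z = lms_zscore(10.7, L=0.25, M=11.5, S=0.11)   # e.g. a 10.7 kg child, placeholder reference
print(f"weight-for-age z-score ≈ {z:.2f}")
print("age-appropriate" if -2 <= z <= 2 else "not age-appropriate")  # illustrative cutoff only
```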
Sensory Processing Profile Data

The occupational therapy team assessed the sensory profiles of typically developing children with feeding problems using the Infant Toddler Sensory Profile (ITSP) tool [26]. This tool consists of a questionnaire with forty-eight questions for the age group between seven and thirty-six months, to be completed by the parents or primary caregiver of the baby or young child in order to collect information about the child's sensory processing skills. For this study, the instrument was sent by email, along with instructions on how to properly complete the questionnaire, as described in the manual [27]. Parents rated the frequency of their child's behavior on a scale from one (almost always) to five (almost never). The responses of the parents or caregiver were analyzed according to the standard instructions of the manual, and the sensory profile was scored based on the proposed score table [27]. After tabulating the scores, children were classified into profiles based on typical performance or probable/definite difference in performance for each type of sensory processing (auditory, visual, tactile, vestibular, and oral). For greater objectivity, we opted to cross-tabulate only the data from the sensory processing segments (auditory, visual, tactile, vestibular, and oral) and focus on the abilities of the senses and not on the neurological condition related to broader questions of sensory integration. We also provided a second classification by dichotomizing the data as either typical (typical performance) or atypical (probable difference or definite difference in performance) for each type of sensory processing [28].

Data Collection

We collected demographic data on the children by extracting data from the anthropometric assessment in their medical records and sensory profile analysis. The specific information for the analyses included weight (kg), weight-for-age anthropometric index (z-score), length/height (cm), length/height-for-age anthropometric index (z-score), BMI (kg/m²), BMI-for-age anthropometric index (z-score), auditory processing, visual processing, tactile processing, vestibular processing, and oral sensory processing.

Statistical Analysis

We employed descriptive statistics to analyze the data, which are presented as mean ± standard deviation when appropriate. For this study, we adopted a statistical significance level of p < 0.05. We tested the normality of the quantitative variables using a Shapiro-Wilk test, and found that the data were non-normally distributed. Therefore, we used nonparametric chi-squared tests.
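A sketch of the analysis just described, using synthetic placeholder data rather than the study's 79 records: a Shapiro-Wilk test for normality of a quantitative variable, then a chi-squared test of independence between a dichotomized anthropometric classification and a dichotomized sensory classification. Only the table margins echo the frequencies reported in the Results below; the cell counts are invented.

```python
# Shapiro-Wilk normality check followed by a chi-squared test of independence, mirroring
# the statistical analysis described above.  All numbers here are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
bmi_for_age_z = rng.normal(loc=0.2, scale=1.1, size=79)       # placeholder z-scores
w_stat, p_normal = stats.shapiro(bmi_for_age_z)
print(f"Shapiro-Wilk: W = {w_stat:.3f}, p = {p_normal:.3f}")

# Rows: BMI-for-age appropriate / not; columns: oral sensory processing typical / atypical.
# Margins (66/13 and 16/63) echo the reported frequencies; the cell counts are invented.
table = np.array([[14, 52],
                  [ 2, 11]])
chi2, p_assoc, dof, expected = stats.chi2_contingency(table)
print(f"chi-squared = {chi2:.3f}, dof = {dof}, p = {p_assoc:.3f}")
```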
Study Sample

We included 79 medical records of children diagnosed with a PFD. The mean age in years was 1.80 ± 0.70 and the mean age at the onset of feeding problems was 7.06 ± 5.32 months. The data showed that 61 (82.4%) children were premature and 16 (21.6%) had food allergies, most frequently to cow's milk protein. The results showed that 55 (72.4%) children did not have family meals and that the same number required a distraction during meals.

Anthropometric and Sensory Processing Data

Our evaluation of the anthropometric parameters showed that the mean ± standard deviation for weight was 10.73 ± 2.50 kg; length/height was 82.43 ± 9.34 cm; and BMI was 15.53 ± 1.58 kg/m². The anthropometric indices showed that 68 (86.1%) and 74 (93.7%) children were in the appropriate classification in terms of weight-for-age and length/height-for-age, respectively, according to the z-score cutoff points. In the evaluation of the BMI-for-age anthropometric index, 66 (83.5%) children had a normal weight according to the z-score cutoff point (Table 1). Although we classified most of the anthropometric parameters as appropriate, there were a small number of children in the sample whose data in terms of weight, length/height, and BMI-for-age were classified below or even above what was appropriate for their age group. With regard to sensory processing, we noted that 51 (64.6%) children showed typical performance in auditory processing, 48 (60.8%) in visual processing, 52 (65.8%) in tactile processing, and 43 (54.4%) in vestibular processing. However, for oral sensory processing, there was a higher frequency of children with a probable difference (32; 40.5%) and a definite difference in performance (31; 39.2%).

Discussion

Exploring the relationship between anthropometric parameters and sensory processing in our sample was important because we believe that feeding behavior, which is influenced by a child's sensory preferences and aversions to food at an early age, can compromise their interest and motivation to eat and may result in short, mid, and long-term health consequences. Understanding this relationship will help us detect PFD and discover more preventive and decisive treatments. Our analysis of anthropometric parameters indicated that there were children with appropriate and inappropriate indices for their age group. It is important to highlight that clinical experience shows us that feeding problems are not always associated with malnutrition or growth deficiencies. In a study published by a Brazilian group from a center for child feeding disorders, most of the children were classified as normal weight, even though their developmental patterns tended to be in the lower percentiles, which was similar to our results [29]. In a recent comprehensive review of the nutritional aspects of children with feeding problems, which considered their relationship with anthropometric parameters, the authors observed that in some studies, children with feeding problem complaints had a significantly lower BMI than children without those complaints. In some of the studies reviewed, children with feeding disorders were less likely to be overweight or obese than those in the control group [13]. The findings for BMI-for-age partially agree with the results of our sample, since the findings for overweight subjects were numerically smaller than those for normal weight or underweight subjects in the analyzed population. In other studies, children with feeding problems also presented with lower length/height-for-age index values than the control group [30,31]. This point deserves attention, since a short stature for a given age may signal a growth deficiency that might be related to chronic diseases such as food allergies, inflammatory bowel disease, and malnutrition [32], which are situations that can favor the development and/or maintenance of a PFD and long-term health complications [33]. We did not observe this statistically significant relationship in our results, although five (6.3%) children in our sample presented with short stature for their age. In this specific group, four (80%) had been diagnosed with a food allergy and/or gastrointestinal issues and were already undergoing appropriate treatment for such conditions.
Regarding the BMI-for-age parameter in our study, part of the assessed population proved to be overweight (n = 4; 5%), although this result was not statistically significant in terms of the sample. Children with food selectivity, even with a reduced dietary repertoire, may consume snacks and other foods/products with a high energy density and a poor nutritional value. Moreover, sensory issues can also affect the level of activity and perception of internal signs of hunger and satiety, thereby compromising the regulation of food intake and body weight [34][35][36]. Importantly, methodological issues in these studies should be highlighted, since several publications classified children with feeding problems from a parental and nonprofessional perspective. Furthermore, no single uniform method for classifying pediatric feeding disorders is available, which makes the population in question even more heterogeneous, so that it is not possible to establish reliable universal standards [13,37]. In this study, most children were classified as having an appropriate nutritional status based on the BMI-for-age index. This is a particularly important finding, since we observed in clinical practice that the diagnosis and referral of children with a PFD is often based on anthropometric assessments. Therefore, a clinical assessment based on anthropometric criteria may compromise the early diagnosis of PFD in children. We validated and reiterate the importance of anthropometric assessments. However, we emphasize that assessments of children should go beyond these parameters and include observations of the child's mealtime behavior and their relationship with food and their family at mealtimes, as reported by Nogueira-de-Almeida [38]. In our sample, even a child with normal weight and a PFD could show compromised mealtime behavior. Some of these children can only consume a limited number of foods, do not participate in family meals, eat only as a distraction, eat at night, do not chew, present with a sensory aversion, etc., with biopsychosocial effects [1]. With regard to sensory processing, which has not been explored in previous studies of a typically developing population [28,39,40], we observed different profiles in the sample in this study. Clearly different profiles were most prevalent in the oral sensory processing segment of the data collected. In our clinical practice and in agreement with other authors [41], children who present with changes in oral sensory processing usually have two distinct sensory patterns: (a) Inappropriate intraoral discrimination, which affects a child's perception of food in the mouth, causing limited bolus formation, loss of food through the mouth and choking, or refusal of liquids and food textures that provide insufficient sensory input [41]; and (b) intraoral hypersensitivity, which interferes with the child's proper interpretation of a stimulus and generates an exacerbated response in relation to the nature, intensity, and duration of the stimulus received, so that the child may choke (retching) depending on the type, texture, size, flavor, smell, temperature, viscosity, color, and/or appearance of the food, and limitations of the range of ingestion [41][42][43]. According to recent research, sensory processing dysfunctions can cause feeding and swallowing disorders such as food refusal and self-limited diets [42]. 
Although there is a lack of studies of typically developing populations, more research focusing on sensory processing by children with a PFD may help us understand certain mealtime behaviors that are still poorly understood in this specific population. The relationship between nutritional status and sensory processing has not been precisely determined in this population. Our results showed no statistically significant relationships between the weight-for-age (z-score), length/height-for-age (z-score), or BMI-for-age (z-score) anthropometric indices and sensory processing. Navarrete-Muñoz et al. [16] related BMI to changes in sensory processing in school-aged Spanish children between the ages of three and seven years. The authors did not find a positive association between increased BMI and altered sensory processing. Other studies, such as those of Navarrete-Muñoz et al. [24], Moding et al. [44], and Suarez [45], reported that the effect of smell, texture, and taste on daily dietary choices directly affects feeding behavior and can have long-term effects, such as changes in body weight or BMI. These studies agree that changes in sensory processing may be related to low appetite, little interest in food-related activities, and less pleasure during eating. In contrast to our results, Navarrete-Muñoz et al. [24] demonstrated that almost one third of their preschool and school-aged Spanish participants presented with atypical sensory performance and that a similar proportion of participants were overweight or obese. Although the associations between being overweight and the prevalence of atypical sensory results were not statistically significant in this population, the main findings indicated that an increase in BMI was significantly associated with a higher prevalence of atypical tactile sensitivity in children aged from three to seven years old. In a previously published study with the same population [13], atypical performance for tactile sensitivity was significantly associated with lower adherence to the Mediterranean diet (that is, a low consumption of fruits, vegetables, grains, and cereals), which may be related to the high BMI-for-age results. This association between tactile sensitivity and specific food choices has been reported previously in the literature [42,46,47,48]. As for the limitations of this study, we collected data from medical records on a single date (beginning of follow-up care at the Institute) without a follow-up for our evaluation of the anthropometric parameters. Additionally, an evaluation of some measurements (e.g., body circumference and skin folds) that provide more information on body composition could have outlined a more specific relationship between the variables. With regard to sensory processing, it should be noted that the ITSP is not a diagnostic tool, but rather a screening measure completed by parents. Therefore, with the necessary indications, a comprehensive assessment is required to make a diagnosis through a clinical evaluation by an occupational therapist trained in sensory integration. In this respect, children classified according to ITSP scores may not necessarily present with typical or atypical sensory processing. Thus, an underestimation in our conclusions cannot be ruled out. This study did not find a statistical relationship in our sample between the anthropometric parameters and the sensory profile of typically developing Brazilian children diagnosed with a PFD.
Rethinking intellectual property rights and commons-based peer production in times of crisis: The case of COVID-19 and 3D printed medical devices

At the peak of the coronavirus disease 2019 (COVID-19) pandemic, in March 2020, the Hospital of Chiari (Brescia) was in a state of emergency. The stock of valves needed to operate ventilators was dwindling and the manufacturer was unable to supply them at short notice. Massimo Temporelli, founder of Fablab Milano and one of the 3D printing pioneers in Italy, with the help of the local press, called makers to the rescue. Cristian Fracassi, a young engineer from Brescia, and his colleague Alessandro Romaioli, who works in the world of 3D printing, rose to the challenge. The original manufacturer of the valves was not very cooperative and withheld the design data and blueprints, relying on European Union (EU) medical manufacturing regulations. Temporelli, Romaioli and Fracassi were not discouraged and quickly began the process of reverse engineering. The plastic part was re-measured, drawn as a 3D model and finally printed in less than a day with a material cost of less than 1 euro per piece (Fig. 1). Of course, the replica was not certified, but tests were successful, and the devices were subsequently used on more than 10 patients in Italy. Two weeks later, amidst an increasingly serious situation worldwide, in which many supply chains were disrupted, several other maker collectives followed the example of Brescia and supplied hospitals in many parts of the world with spare parts for life support technology. While, under normal circumstances, they would have to fear copyright and patent lawsuits or regulatory intervention, their help is now welcome. This article traces how, in the case of a global crisis, localized co-productive approaches to solve crisis-induced shortages are gaining increased acceptance and question the structural patterns and mechanisms of late capitalism. In the first section, we disentangle the relationships between the three main practical and symbolic contexts of 3D printing in the early 21st century (maker culture, prototyping and industrial contexts). We devote the second section to reflecting upon 3D printing against the backdrop of a social theoretical understanding of intellectual property (IP) rights. Against this backdrop, we explore, in the third section, how the current global crisis could change our understanding of IP and production methods.
The authors: Dana Mahr is a senior researcher and lecturer at the University of Geneva, where her research focuses on how users, customers and patients make sense of biomedical knowledge. Sascha Dickel is an assistant professor at Johannes Gutenberg University, Mainz, where he explores, inter alia, technologies of future-making and novel modes of public participation in science.

This article explores how, in the case of a global crisis, localized co-productive approaches to solve crisis-induced shortages challenge existing understandings of intellectual property rights. It focuses on how the maker community (private individuals with access to 3D printing equipment) engaged in the production of medical devices during the SARS-CoV-2 (the novel coronavirus) crisis of 2020 and how international policymakers and industries reacted to their engagement.

Geek-symbolism, prototyping and additive manufacturing

Since 1974, when the chemist and author David Edward Hugh Jones coined the concept of additive manufacturing (AM) in his New Scientist column Ariadne, 1 the process of AM has not only become a practical reality but has also morphed into a cultural icon of 21st-century maker culture and a tool for prototyping in industrial practice. '3D printing' is a colloquial term for additive manufacturing. Today, a large variety of additive manufacturing technologies exist. They share the common feature of using digital data to create three-dimensional objects layer by layer. Guided by computer-controlled devices, the physical material is joined or solidified. Additive manufacturing technologies make it possible to create diverse objects of very different shapes with the same technological infrastructure. During the last decades, additive manufacturing has slowly been integrated into industrial engineering: '[w]hen Additive Manufacturing began to be used in the 1990s, it was initially employed for prototyping (primarily in the automotive industry) and subsequently to make casting moulds and tools. Today, it is also used to make end products including small parts, small batches and one-off items for the jewelry or medical and dental technology industries'. 2
While many AM technologies used in industrial contexts are very expensive and out of reach for private users, the term '3D printing' evokes the image of an inexpensive machine (just like '2D printing') that allows individuals to turn into potential manufacturers. Thus, the narrative of democratizing production through 3D printing has shaped the public discourse on this technology. Adrian Bowyer, founder of the open-source development project RepRap (Replicating Rapid-Prototyper), positions digital fabrication technologies (like 3D printers) as an innovation that might put the means of production back into the hands of 'the people'. According to Bowyer, these technologies might 'allow people to manufacture for themselves many of the things they want, including the machine that does the manufacturing. It is the first technology that we can have that will simultaneously make people more wealthy whilst reducing the need for industrial production'. As such, 3D printing might evolve into a technology that challenges the regime of centralized capitalist production systems whose ownership is restricted to a limited number of people and organizations. '[P]eople of modest means [...] will be able to make themselves a new flute, a new digital camera, or just a new comb by downloading the designs for them from the Web. Some of the designs will be sold; some will be available free. Industrial production may [only] be needed for the raw materials in considerable quantities.' 3 The technology journalist and entrepreneur Chris Anderson also views 3D printers as drivers of a 'next industrial revolution' in which the role of large-scale factories as traditional sites of innovation and production is eventually replaced by an economy of 'makers', who collaboratively generate new product ideas that can be materialized anywhere. 4 Economic theorist and political activist Jeremy Rifkin picked up on this idea of a new industrial revolution, linking it to a decentralized green economy. He suggests that AM may not only be a more sustainable mode of production that reduces waste, but also one that may reduce emissions if truly used as a decentralized manufacturing technology that would diminish the need for the global shipping of goods: '[i]f we were to put all the disparate pieces of the 3D printing culture together what we begin to see is a powerful new narrative arising that could change the way civilization is organizing the twenty-first century.' 5 From the perspective of MIT professor Neil Gershenfeld, 3D printers are merely a prototype for advanced digital fabricators. According to Gershenfeld, anyone with access to such a digital fabricator should be able 'to make almost anything'. 6 In contrast to the typical high-tech visions of modernity, imaginaries of 3D printing do not focus on large-scale technological systems created and used by a distinct techno-scientific elite. Instead, all members of society are portrayed as potential co-creators of a new mode of production in which accessible and affordable devices democratize and decentralize manufacturing. Tinkering, understood as DIY production, has historically been closely associated with a US identity. Some associate it with countercultural movements since the 1970s 'as a self-sustaining sensibility that could overcome a reliance on the mainstream consumerist society', while others historicize it and frame it as a genuine part of the frontier spirit that 'made America great'. 7
No matter what political narrative it is associated with, tinkering evokes images of creativity, independence and ingenuity. Its patron saints are, among others, Steve Jobs and Bill Gates. It also evokes an updated version of the American dream: the tinkering geek in his [sic] garage who changes the world or at the very least becomes founder and CEO of a multi-billion-dollar enterprise in Silicon Valley. And very much like the cultural imaginary of its counterpart, tinkering is attracting a wide variety of individuals and associations. Anyone who has an idea and explores it by technical means can be described as a tinker. The 'maker' is a contemporary re-imagining of the tinker and also encapsulates the notion of being part of a 21st-century social movement. Adam Savage, the host of the TV show 'Mythbusters', described 'making' at the 2012 Bay Area Maker Faire in the following way: We are seeing a generational shift back to Making. (...) It would be (...) really great to build your own things, rather than the things that pop culture feeds (us). I have built and participated in the building of things from scratch: Robots, theatre sets, furniture, and props, but the love of the objects themselves, this child's [sic] like desire for the impossible toy seen in a movie, or seen in my head. Wanting to make it, make it something that I have and something I have held. (...) That want of those things, and teaching myself how to make things in order to have them, is the engine of everything I have achieved in my whole life up till now. It does not matter what you make, and it does not matter why. The importance is that you are making something. 8 On another occasion he stated: 'Humans do two things that make us unique from all other animals; we use tools and we tell stories. And when you make something, you're doing both at once.' 9 For Savage, this much is clear: to make things by ourselves is both a purpose in itself and an essential marker of human existence. I make, therefore I am. Conversely, a complete reliance on industrial mass production might be understood as something contrary to our nature. Such a reliance not only casts mass production in a bad light but also constructs a social difference between those who 'make' and those who don't. Makers are framed as sovereign actors and not as passive consumers. This distinction is extremely important for the self-perception and identity of the maker scene. Since 2009, when the startup MakerBot introduced its first affordable open-source model 'Cupcake', 3D printers have become an integral part of the identity of those who identify as tinkers or makers. No Makerspace, Hackerspace or FabLab (some of the collaborative places in which modern-day tinkers gather and share their interests) seems to be complete without the ability to 3D print. Smartphone cases, camera gear, tabletop figurines, frames for glasses and superhero masks: all these and far more can be designed and created using CAD software and desktop 3D printers, with designs that can be found on online platforms. Platforms like MakerBot's 'Thingiverse' actively invite users to share their own designs with a larger community of other makers. At present, there are over 1.6 million 3D printing blueprints shared on the platform. 10 The cultural imagery associated with 3D printing is even more impressive. For some, it fosters 'creative literacy', 11 opens a new perspective on 'global sustainability' 12 , or leads to more 'intellectual freedom' 13 .
For others, it seems to endanger lives or even ruin whole industries. 14 However separate these images may seem, they are united by the fact that they attribute a transformative power to 3D printing. It seems that 3D printing has outgrown the actual technology, its possibilities and its limitations, and has become a 'boundary object' for all those who imagine a technologically induced change in our working and living conditions. The sociological concept of the boundary object was introduced by Susan Leigh Star and James R. Griesemer (1989). It refers to objects that are used and interpreted in different ways by different communities and acquire different meanings in different social worlds. 3D printers, both as tools and as words, are perfect examples of boundary objects. It does not matter whether the images associated with 3D printers have a utopian or dystopian character, or whether they are formulated from the left or the right ends of the ideological spectrum. The discursive power of 3D printing transcends such differences. Like the specimens, maps and field notes in Star's and Griesemer's article on the collaboration of amateurs and professionals in building the collections of Berkeley's Museum of Vertebrate Zoology, the new printing technology as a 'boundary object' seems to be both 'plastic enough to adapt to local needs and the constraints of the several parties employing them, [and] robust enough to maintain a common identity across sites'. 15 It yields imaginary and concrete power as a novel technology that inhabits several social worlds and connects them, even if the actors in these worlds interpret and use 3D printers quite differently. Such a communicative function can be observed in both the symbolic and actual use of 3D printing during the present COVID-19 pandemic.

A song of shields, ventilators and IP infringements

Both the AM industry and members of the maker community have been among the first responders to global shortages of crucial medical equipment during the 2020 COVID-19 crisis. The maker community began its response with immediate actions like a viral #NoTouchChallenge to design and produce 'portable, 3D printable, multi-purpose no touch tools' 16 or the #DigitalSolidarityChallenge to produce spare parts for emergency ventilators. Meanwhile, industrial actors and lobby groups like the European Association of the Machine Tool Industries and related Manufacturing Technologies (CECIMO) had first to find a way to address and solve an immanent dilemma: first, how to balance the societal urge for a fast response with the protection of IP rights; secondly, how to deal with the public scrutiny and criticism that followed the industry's reaction to the actions of the Fablab Milano, which culminated in requests by the European Commission (on 20 March 2020) to loosen copyright during the crisis. The immediate answer given by Filip Geerts, director general of CECIMO, was the following: I believe that the additive manufacturing sector could provide immediate solutions to sustain the effort of hospital workers in the middle of this emergency. However, it is in the best interest of all to clarify the regulatory issues in order to move forward quickly and in a way that is not going to delay immediate actions. 17 Consequently, on 30 March 2020, CECIMO published a call for policymakers to clarify the relationship between 3D printing as a technology, industrial actors and public intensive care requirements.
This call included the following six points: (1) use government's official channels to communicate any requests to print parts, upload a list of essential supplies and provide the necessary files for printing to those companies who request them; (2) temporarily waive the Medical Device and Product Liability Directive requirements that would hamper AM companies' response to the extraordinary demand for equipment by the health care sector; (3) provide temporary authorization to use patents of essential supplies and services without the consent of patent holders; (4) cooperate closely with the customs authorities to accelerate the approval procedures for imports/exports of essential supplies and/or 3D printing hardware and ensure the free flow of essential supplies and/or 3D printing hardware within the EU's internal market; (5) include the AM sector in the list of the essential value chains that should continue their activities during the lockdown period; and (6) enable quicker and smoother access to the market for new essential medical and protection equipment, by providing temporary access to certification, in response to the coronavirus outbreak. 18 As the crisis worsened during late March 2020 and the public pressure with regard to copyright grew, CECIMO made, as the six points show, temporary concessions on the point of copyright, while simultaneously trying to strengthen the position of industrial actors. Part of this strategic call was to position industrial actors as central contacts for the production of medical parts, to temporarily loosen the regulatory restrictions for new products, and to temporarily stratify market structures. While CECIMO officials stated that 'many companies from (the) European 3D printing industry (are) already volunteering to aid hospitals and health centers by proposing the use of their machines', 19 the European maker community was worried about its access to ABS (Acrylonitrile Butadiene Styrene) and PLA (Polylactic Acid) filaments. Makers were worried about their ability to help if the EU were to implement such policy measures, given that such a call for re-centralization might (in their interpretation) cost more lives than their decentralized endeavours to assist hospitals during the crisis. 20 An even more conservative approach was proposed in the Federal Republic of Germany by the National Academy of Sciences Leopoldina. Its working group 'Additive Manufacturing' called for medical devices manufactured with additive technologies to be clinically and legally safe. In the context of the COVID-19 pandemic, this can be understood as a moratorium on the inclusion of privately manufactured medical devices in crisis management. 21 Outside of Europe, the legal position of tinkerers and makers wishing to help their local emergency wards remained unclear. Neither the US nor China, despite experiencing high rates of infection and having large maker communities, had adopted, as of April 2020, policies to clarify the role of makers in responding to the global crisis. While the US Food and Drug Administration did not generally forbid the maker community to act, it warned the public that '3D printed masks, for example, might not provide the same level of protection as traditional masks'. 22 This led to a public discussion about safety risks and private 3D printed medical goods. While some regarded makers as heroes of first response and ingenuity, others framed them as next-generation quacks.
An example of the latter perspective is the MIT researcher Martin Culpepper, who stated that 'one of the biggest risks with 3D printing for Covid-19 situations is the false sense of hope that we can quickly print PPE (personal protective equipment) to address needs' and that there are 'a lot of issues with certain types of 3D printed parts with respect to their use in a clinical setting', including material compatibility with established sterilizing techniques in hospitals. 23 At the other end of the spectrum, the prominent technology YouTube personality Naomi Wu (SexyCyborg) from Shenzhen, China, posted on Twitter that she not only embraced the resourcefulness of the maker and tinker community during the SARS-CoV-2 crisis, but that she encouraged makers and clinicians all over the world to continue their efforts to help their local hospitals by providing spare parts for life-saving equipment (e.g. respirator elements) and PPE. She also stated that she would even help 'reverse engineer' certain parts 'and serve as a team's human shield/patent bullet catcher in China', because she would have 'the support of a good Chinese IP lawyer' (Fig. 2). The World Intellectual Property Organization (WIPO), which is under normal circumstances the world's primary source for information, resources and services surrounding questions about global IP rights, had not reacted in its COVID-19 response strategy to the case of decentralized private 3D printing as of 21 May 2020. At the request of the authors of this text, the WIPO's responsible authorities referred to a white paper from 2015, which, however, does not address action strategies for global emergency situations. 24 However, it is possible to extrapolate from older cases where private 3D printing, IP law and social interests collided. We can thus (at least to a certain extent) expand on the first systematic exploration of the problem by Simon Bradshaw, Adrian Bowyer and Patrick Haufe. In their 2010 article, 'The Intellectual Property Implications of Low-Cost 3D Printing', they came to the conclusion that 'personal use of 3D printing technology does not infringe the majority of IP rights', since '[r]egistered design and patent explicitly exempt personal use, trade mark law has been interpreted as doing so, and UDR is only applicable to commercial use'. 25 Furthermore, as the format shifting of the music industry (from CD to MP3 and now streaming models like Apple Music and Spotify) has shown, purely local and personal infringements are impractical to pursue. Nevertheless, Bradshaw and his co-authors also noted that there were (in 2010) indications that in the near future 'the level of detail and accuracy attainable by personal 3D printed objects [could become] sufficient to seriously impinge upon the market for quality products'. The discussion of risks in the COVID-19 pandemic has been complemented by a discussion of decentralized aid strategies for hospitals experiencing supply problems in the face of an unforeseen crisis. As early responders to this situation, regulators in Canada released, in mid-April 2020, guidelines for the production of PPE by private individuals, maker collectives and small manufacturers with access to 3D printers. Based on an interim order from 18 March 2020, these actors were urged to follow a number of standards and IPs. Both were intended to keep the situation controllable, on the one hand, and to ensure compliance with safety standards, on the other.
However, Class II medical products, such as spare parts for medical ventilators, were excluded from the scheme. A release of these for private 3D printing would only be granted if the situation deteriorated dramatically in the coming months. The experiences from Lombardy in Northern Italy were to be evaluated for this purpose.

A 'global hackathon'

Even without immediate reactions from global authorities like WIPO, it seems that the COVID-19 crisis could become a catalyst for a major shift in how we understand IP rights and the ways we distribute and produce things in the future. Beyond current discussions between industrial actors, politicians and makers, many front-line medical experts started (despite possible risks) using locally and privately 3D printed supplies. Some examples are the ventilator valves and PPE produced by Cristian Fracassi and the Fablab in Milan discussed above. 27 While COVID-19 mercilessly exposed the weaknesses of our globalized industrial supply chains, 3D printing might give new meaning to local solutions. The current real-time experiment of decentralized and private production can be seen as a kind of socio-technical prototype in itself. Amidst all the challenges of the COVID-19 epidemic, 3D printing appears as a unifying object that connects different stakeholder communities in new ways. The 'global hackathon' of 2020 tested the possibilities and limitations of comprehensive maker production in a publicly visible and performative way. The example of Naomi Wu is paradigmatic for this. It shows that the ethos of commons-based peer production and open science surrounding the maker community aligns quite well with systems and policies of industrial production that do not respect IP rights and have lower standards with respect to consumer protection. 28 New production methods and questions about IP rights in times of an unprecedented pandemic present a new challenge for the IP rights community. In what way can additive and decentralized manufacturing be reconciled with quality, safety and IP rights? One possibility would be the formation of a joint commission for future crises. This could bring together regulators, ethicists, lawyers and members of the industry community to develop guidelines for global crisis scenarios. An example of this is the cooperation between the FBI in the USA and the DIY biology community to identify and solve security risks and copyright problems in the biochemical sector. They have established a forum for exchange in which educational workshops are held on a new way of participating in technology, in which security issues are explored and in which mutual role expectations are discussed. 29 Perhaps a similar model could address the issues raised above, for the benefit of everyone involved.
On the high-density expansion for Euclidean Random Matrices

Diagrammatic techniques to compute perturbatively the spectral properties of Euclidean random matrices (ERM) in the high-density regime are introduced and discussed in detail. Such techniques are developed in two alternative and very different formulations of the mathematical problem and are shown to give identical results up to second order in the perturbative expansion. One method, based on writing the so-called resolvent function as a Taylor series, allows one to group the diagrams in a small number of topological classes, providing a simple way to determine the infrared (small momenta) behavior of the theory up to third order, which is of interest for the comparison with experiments. The other method, which reformulates the problem as a field theory, can instead be used to study the infrared behaviour at any perturbative order.

I. INTRODUCTION

Random matrices [1] are N × N matrices whose entries are random numbers drawn from a certain probability distribution. Their statistical spectral properties in the large-N limit describe a wide range of physical phenomena: nuclear spectra [2], quantum chaos [3], localization in electronic systems [4], diffusion in random graphs [5], liquid dynamics [6] and the glass transition [7], complex networks [8], superstrings [9]. Random matrices may be grouped in a few universality classes according to their statistical properties [1]. For most of these classes, the density of eigenvalues follows Wigner's semicircle law. It has thus become of interest to identify ensembles where the semicircle law is modified in a non-trivial way. One such ensemble results when the corresponding physical problem has a conserved quantity (e.g. momentum in the case of propagating excitations, or number density in diffusion problems). Under such circumstances, the random matrix that best describes the problem is typically a Laplacian matrix [5], which has the property Σ_j M_ij = 0. (1) This encodes the property that a vector whose components are identical is an eigenvector with eigenvalue zero. A class of random matrices of particular relevance in the study of off-lattice systems is that of the so-called Euclidean random matrices (ERM) [10,21]. Place N particles at positions x_i, i = 1, 2, . . . , N, belonging to some region of D-dimensional Euclidean space, of volume V. The positions are drawn randomly from some probability distribution function P({x_i}). The entries of an ERM are a deterministic function of these random positions, M_ij = f(x_i − x_j).
If a conservation law is relevant for the problem at hand, we will rather have a Laplacian ERM, M_ij = δ_ij Σ_k f(x_i − x_k) − f(x_i − x_j). (2) Note that we never find the same particle label twice in the argument of the function f(x_i − x_j), since the term f(x_i − x_i) cancels. In a diagonal term, δ_ij f(x_i − x_k), the kth particle shall be called a medium particle, while the ith particle will be the chain particle. The function f in Eq. (2) is quite general: only rotational invariance and the existence of the Fourier transform f̂(p) are assumed (p = √(p·p)). Furthermore, even if in this work f will be a scalar function, for some applications it should rather be a matrix-valued function. It must be so, for instance, to account for the vector nature (longitudinal or transversal) of vibrational dynamics [18]. Most of our results extend as such to this more general case. ERMs describe topologically disordered systems, at variance with problems where the N positions {x_i} are placed on a crystalline lattice [29]. We will be considering an extreme case, in which the N positions are placed with uniform probability on the volume V. The particle-number density ρ = N/V will be held fixed while we take the large-N limit. Note that there are two sources of statistical correlation among the entries of the matrix (2), even if the positions {x_i} are totally uncorrelated. First, it is a Laplacian matrix, recall Eq. (1). Second, due to the triangle inequality of Euclidean geometry, the distances from two neighbouring particles to a third one are necessarily similar. In order to compute the basic spectral properties of ERM it turns out to be convenient to introduce the resolvent G(p, z), Eq. (3), where the complex number z ≡ λ + iη has a tiny imaginary part η and the overline stands for an average over the {x_i}. If the ERM describes physical excitations (phonons, electrons, etc.) in topologically disordered systems, the resolvent (3) corresponds to the single-particle Green function, or propagator, for such excitations. If the system is isotropic, the resolvent depends only on p. The density of eigenvalues g(λ), or density of states (DOS), is obtained from the resolvent in the limit of large momentum, Eq. (4). This limiting behaviour is characteristic of topologically disordered systems. It does not hold for lattice systems. We note as well that the constraint (1) implies that a plane wave e^{ip·x_i} is an eigenvector of the matrix (2) if p = 0, Eq. (5). As we shall discuss below, the resolvent takes a very simple form in the high-density limit (it is actually the bare propagator of the theory, Eq. (6)), with corrections analogous to what one finds in the Rayleigh theory of scattering and in lattice models where disordered spring constants mimic the effect of topological disorder [29]. By reconsidering the perturbative expansion in detail, in this work we show that the prefactor A in Eq. (8) is actually null, due to algebraic cancellations, and that these cancellations arise at all orders in the perturbative expansion in 1/ρ. This is not related to any known symmetry of the problem, but rather reflects the mathematical structure of the perturbative contributions. On the other hand, we will also show that the result of Ref. [30], recall Eq. (9), is incomplete, since the imaginary part admits a formal expansion for small z, Eq. (10). The constraint (5) implies that g_n(0) = 0 for all n, so that in general g_n(p) = A_n p² + O(p⁴). However, we find that, for all functions f and all ρ, A_0 = 0, so that g_0(p) ∼ p⁴ while g_1(p) ∼ p².
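To make the definitions above concrete, here is a minimal Python sketch (our own illustration, not part of the paper) that builds one realization of a Laplacian ERM from uniformly distributed positions and diagonalizes it; histogramming the eigenvalues over many realizations gives a direct numerical estimate of the DOS g(λ) against which the 1/ρ expansion could, in principle, be compared. The Gaussian choice of f, the absence of periodic boundary conditions, and all function names are assumptions made only for this example.

```python
import numpy as np

def laplacian_erm(N, rho, f, D=3, rng=None):
    """One realization of a Laplacian ERM (Eq. (2)):
    M_ij = delta_ij * sum_k f(x_i - x_k) - f(x_i - x_j),
    with N positions drawn uniformly in a box of volume V = N / rho."""
    rng = np.random.default_rng() if rng is None else rng
    L = (N / rho) ** (1.0 / D)               # box side fixing the density rho = N/V
    x = rng.uniform(0.0, L, size=(N, D))
    diff = x[:, None, :] - x[None, :, :]     # all pairwise displacements x_i - x_j
    F = f(diff)                              # f evaluated on every pair (N x N matrix)
    np.fill_diagonal(F, 0.0)                 # the f(x_i - x_i) terms cancel (see text)
    return np.diag(F.sum(axis=1)) - F        # Laplacian structure: every row sums to zero

# Illustrative (assumed) spring function: a Gaussian, f(x) = exp(-|x|^2 / 2)
f = lambda d: np.exp(-0.5 * np.sum(d ** 2, axis=-1))

M = laplacian_erm(N=400, rho=1.0, f=f)
assert np.allclose(M.sum(axis=1), 0.0)       # Eq. (1): the uniform vector is a zero mode
lam = np.linalg.eigvalsh(M)                  # histogram lam over realizations to estimate g(lambda)
```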
In this respect, we confirm that the interaction between free excitations and disorder in topologically disordered systems (as long as ERMs describe them) has a peculiar mathematical structure that is different from that of disordered lattice systems (for lattice systems g_0 vanishes identically). To show this we shall compute the self-energy perturbatively within two unrelated approaches: (a) an improved form of the combinatorial formalism introduced in [16], and (b) a field-theoretic formulation. The field theory introduced here is quite different from standard formalisms in the theory of random matrices (see e.g. Refs. [10,15]). It probably deserves an in-depth study, which is left for future work. We remark that our combinatorial formalism is simpler than the field theory, and is probably the method of choice to carry out higher-order computations in the 1/ρ expansion. However, it has the drawback that the asymptotic behaviour g_0(p²) ∼ p⁴ appears at order 1/ρ² from an exact cancellation of two contributions of order p² (at order 1/ρ³ we find an exact cancellation of ten contributions of order p²). The field-theoretic framework clarifies that these cancellations are not accidental, and thus not restricted to low orders in the 1/ρ expansion. The layout of the remaining part of this work is as follows: in sec. II we discuss a particular phenomenon (phonons in topologically disordered systems) where a theory based on ERMs has been proposed in recent years. In sec. III we anticipate our main result, namely the leading order of Im Σ(p, z + i0⁺). In sec. IV we discuss in detail the combinatorial formalism up to order 1/ρ². We describe the rules to group all the diagrams that arise at this order into a very small number of diagrams, according to their topological structure, and show that, up to second order, the prefactor of the term ∝ p² in the function g_0(p²) cancels out. We also see that this cancellation appears in a given class of diagrams at order 1/ρ³. In order to shed light on the mathematical origin of such a cancellation, in sec. V we introduce a field-theoretical formulation that, despite producing a much larger number of diagrams, allows us to give an argument explaining the origin of the cancellation at any perturbative order.

II. A CASE STUDY FOR ERM: PHONONS IN TOPOLOGICALLY DISORDERED SYSTEMS

Although ERMs have a wide range of applications, in this paper we are mainly interested in the study of phonons in amorphous systems, such as glasses or supercooled liquids [24], since the large amount of experimental evidence may provide fundamental insights into the correctness of the theory. Of particular interest is the case where the frequencies ω(p) of the phonons with wave vector p lie in the GHz to THz region (high-frequency sound). This is in fact the range explored by neutron and X-ray inelastic scattering experiments. These give the inelastic contribution to the dynamic structure factor, i.e. a Brillouin-like peak with position ω(p) and width Γ(p). Summarizing the experimental findings, for p < p_0 (p_0 is the first maximum of the static structure factor, typically a few nm⁻¹ [25]) one finds a linear dispersion relation ω(p) ∼ cp, where the speed of sound c is quite close to that obtained by acoustic measurements. The dispersion relation typically saturates at p ∼ p_0. Moreover, the p-dependence of the peak width is often described by Γ(p) ∝ p^α. Interestingly enough, Γ(p) also saturates as the momentum approaches p ∼ p_0.
There has been a hot debate among different experimental groups about the value of the exponent α [27], some claiming α ∼ 2 and some α ∼ 4. There is now some consensus that in the region where Γ is independent of temperature (i.e. ω(p) ≥ 1 THz) one has α = 4, while at lower frequencies (the GHz region), where Γ has a strong temperature dependence, the experimental value is α = 2 [28]. A simple model of high-frequency sound is afforded by scalar harmonic vibrations around a topologically disordered structure made of N oscillation centers x_i, placed with uniform probability in a volume V [31]. Particle displacements φ_i have an elastic energy determined by the matrix M, which has the form of Eq. (2); f(x) is the spring constant connecting particles separated by the vector x. We assume that f(x) is spherically symmetric, so that f̂(p) = g(p²), where g is a smooth function. In the framework of the one-phonon approximation, the inelastic dynamic structure factor is related to the resolvent via S(p, ω) = −(p²/(ωπ)) Im G(p, ω² + i0⁺). (12) As a consequence, the width of the Brillouin peak is related to the imaginary part of Σ by Eq. (13). Then Eq. (10) implies that Γ(p) ∼ p⁴ for very small p (for p ∼ p_0 the width saturates and a mixed, more complex scaling should be expected). In that regime the phonon-disorder interaction can be thought of as a scattering phenomenon of the Rayleigh type. Since ERMs describe the dynamics of vibrating particles within the context of the harmonic approximation, the theoretical predictions based on ERM theory must be compared with experiments in the region where Γ is independent of temperature; in fact the temperature dependence is an indication that the width of the peak is rather due to thermal processes, such as anharmonicities or relaxations, which require more refined theoretical approaches. We finally mention that the vibrational frequencies ω are related to the ERM eigenvalues λ (z = λ + i0⁺) by the relation λ = ω², see Eq. (12). Hence, the widths of spectral peaks in λ-space and in ω-space are related by Eq. (13). Furthermore, Eqs. (4) and (10) imply that the DOS in λ-space behaves for small λ as g_λ(λ) ∝ λ^((D−2)/2), which translates to frequency space as a Debye spectrum g_ω(ω) ∝ ω^(D−1) (because of the Jacobian in the change of variable: dλ = 2ω dω). At this point, the reader may object that lattice systems have a Debye spectrum even if g_0 in Eq. (10) vanishes for them. In fact, their Debye spectrum is possible because Eq. (4) does not hold in the lattice case.

III. THE MAIN RESULT

The main result of this work is the following. Expanding the self-energy in powers of 1/ρ, Eq. (14), where Σ^(k) is of order 1/ρ^k, one has only one first-order contribution, Eq. (15), while at second order one finds Eq. (16), whose three topologically different pieces are given in Eqs. (17). In Eqs. (15)-(17) we have used the vertex function V(q, p), which, as we will see below, plays the role of the interaction vertex. The bare propagator G_0 was defined in Eq. (6). Note that V(q, p) = V(p, q). Other useful identities are collected in Eq. (19); note in particular that, since V(q, 0) = 0, the corresponding contributions vanish at zero external momentum. The high-density expansion for Laplacian ERM was introduced in [15,16]. Eq. (15) was already reported there but, instead of Eq. (16), one had 39 diagrams of order 1/ρ². Even if the final expressions were cumbersome, a numerical evaluation of the amplitude A in Eq. (8) was attempted for a simple choice of the function f. Presumably because of a numerical mistake, it was wrongly concluded that A ≠ 0.
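Several displayed equations of this part did not survive extraction. As a worked version of the two scalings that the surrounding text does state explicitly, the small-λ expansion of the imaginary part of the self-energy and the resulting Debye spectrum can be written as follows; the precise power of λ in the subleading term is our assumption, while the statements g_0 ∼ p⁴, g_1 ∼ p² and g_λ(λ) ∝ λ^((D−2)/2) are taken from the text.

```latex
\operatorname{Im}\Sigma(p,\lambda+i0^{+})
   \;=\; g_{0}(p^{2})\,\lambda^{\frac{D-2}{2}}
   \;+\; g_{1}(p^{2})\,\lambda^{\frac{D}{2}} \;+\;\dots ,
\qquad g_{0}(p^{2})\sim p^{4},\quad g_{1}(p^{2})\sim p^{2}\quad (p\to 0),
\\[4pt]
g_{\lambda}(\lambda)\propto \lambda^{\frac{D-2}{2}},\qquad
\lambda=\omega^{2},\;\; d\lambda = 2\omega\,d\omega
\;\;\Longrightarrow\;\;
g_{\omega}(\omega)=2\,\omega\, g_{\lambda}(\omega^{2})\propto \omega^{\,D-1}.
```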
Afterwards, it was announced (without supporting technical details) that the 39 diagrams previously found at order 1/ρ² could be grouped as in Eq. (16) [18]. Unfortunately, a numerical reevaluation of the amplitude A was not attempted from these simpler expressions. We remark as well that an independent computation of Σ to order 1/ρ² has appeared recently [30]. We have checked that their results are consistent with ours, leaving aside contact terms (in fact, these authors explicitly state that some contact terms are lacking from their final expressions). Thus, their failure to identify the g_0 term in Eq. (10) is not due to discrepancies in the final expressions. The underlying reason is rather more mundane, as we explain below. At first order the theory has the following behaviour. For small λ, z = λ + i0⁺, we approximate the imaginary part of the propagator G_0 by a Dirac delta centred on the dispersion relation (assuming a linear dispersion relation ε(p) ≈ c²p²). Then the only contribution to the imaginary part comes from q = √λ/c. To evaluate the vertex V(q, p) at small q and small p we just need to recall that f̂(p) = g(p²). It is important to avoid any assumptions about the ratio p/q, which can be either very large or very small when both p and q are small (at the Brillouin peak p/q ∼ 1, but in Ref. [30] it was unjustifiably assumed that p ≪ q). If we now square the vertex function and perform the angular integral (S_D is the surface of the sphere in D dimensions), the remaining integral over q is straightforward, thanks to the Dirac δ function in Eq. (15). We get that, already at order 1/ρ, g_0(p²) in Eq. (10) is of order p⁴. Had we neglected the p⁴ term in comparison with the q²p² term (as done in Ref. [30]), we would have failed to identify the g_0 term. The physical reason for which the presence of such a term is mandatory (namely the existence of a Debye spectrum) was discussed in the concluding paragraph of Sect. II. Let us now check that the amplitude A in Eq. (8) vanishes. We merely need to compute the imaginary part of the self-energy at its lowest order in λ, namely λ^((D−2)/2). A general term of the diagrammatic expansion involves a factor in which we have expressed the measure in terms of polar coordinates in D dimensions. Every bare propagator is associated with one or more vertices that are smooth functions of the momenta involved. In fact we can expand the product of such vertices in a Taylor series. Now, the point is that, if we want the lowest order in z, we have to exclude all the terms that are proportional to q and we have to take only the zeroth-order term of the Taylor expansion. We can obtain this term simply by making the substitution indicated below, where Ĩm stands for the imaginary part proportional to λ^((D−2)/2). Then Ĩm Σ^(2) can be evaluated, and it follows that the amplitude A vanishes, because the Σ_C contribution is already of order p⁴. In the following, we will show explicitly that the cancellation of the λ^((D−2)/2) p² term arises even for a given (quite large) topological class of diagrams at order 1/ρ³, and we will provide an argument that predicts such a cancellation at any perturbative order.

IV. THE COMBINATORIAL COMPUTATION

The first approach to the computation of the resolvent is based on the expansion of Eq. (3) as a power series, Eq. (30). Although the final results will only depend on the modulus p, in order to develop the formalism it is convenient to reintroduce the dependence on the full momentum vector p.

A. Organizing the calculation. The bare propagator.
Momentum shift: choosing wisely the integration order

The R-th term of the expansion Eq. (30) is given by Eq. (31), where the average over the vibrational centers takes the form of a multi-dimensional integral over the particle positions. As for all such integrals, although the final result is independent of the order in which the individual integrals are performed, the difficulties encountered in a real computation are dramatically smaller if one finds a wise ordering for the iterated integrations. Now consider the expression which arises as a factor when introducing the explicit form, Eq. (2), of M into Eq. (31). When dealing with a diagonal term, we shall integrate over the position of the medium particle, x_{k_l}; when dealing with an off-diagonal term, we shall integrate over x_{i_l}. For a diagonal term, the integral over the position of the medium particle is easy if the particle index k_l does not appear elsewhere in the chain (even if the index i_l is sure to appear at least once more along the chain). For the non-diagonal term, the integral over x_{i_l} is very simple if it does not appear later in the chain (even if i_{l+1} appears twice or more times in the chain, to the right). The two integrals can be carried out explicitly. Since a term of order R has R such factors, the number of values the index k_l (or i_l) can take without violating the non-repetition condition is between N and N − R. But both N/V and (N − R)/V tend to ρ in the thermodynamic limit, hence momentum can be shifted through non-index-repeating elements from left to right. Similarly, momentum can be shifted through non-repeating elements from right to left; in that case, one would integrate over the corresponding position on the right instead. Note that a given matrix element might be considered as non-repeating for a momentum shift from right to left, but not be suitable for the left-to-right momentum shift. At this point, the computation of the leading order is straightforward. If there are no obstacles to momentum shift, we just push the leftmost exponential e^{−ip·x_{i_0}} to the right until it cancels out with e^{ip·x_{i_R}}, leaving us with a simple factor (since there are precisely R matrix elements). Then the high-density limit of the sum in Eq. (30) can be performed, and it yields the bare propagator of the theory, as announced in Eq. (6).

Repeated indices

Now consider a situation where we can shift the external momentum p from left to right until a particular particle index (say i_l = 1 or k_l = 1) is repeated in the chain somewhere to the right, so that we must stop. At this point, we shift the external momentum from right to left, until a particle-label repetition i_{r+1} = 2 or k_r = 2 stops us. We depict this situation as 1[stuff]2. At this point the two momentum shifts are blocked at the two stops. Since the very same scheme of particle-label repetitions 1[stuff]2 can be found for all values L, S = 0, 1, 2, . . ., we can sum all those terms to find a contribution containing two factors of G_0(p, z). We interpret the two factors G_0(p, z) as the external legs for a Dyson resummation of the self-energy. Clearly particle-label repetitions are going to be very important in what follows, so some terminology will be useful. A generic factor will be called an L-stop if the particle k_l or i_l is repeated somewhere to the right (so that momentum cannot be shifted from left to right through index i_l). Similarly, we shall call it an R-stop if k_l or i_{l+1} is repeated somewhere to the left. We note that a matrix element can be both an L-stop and an R-stop (if k_l is repeated both to the right and to the left, or if i_l is repeated to the right while i_{l+1} is repeated to the left).
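For reference, the Dyson resummation invoked a few lines above (with the two G_0(p, z) factors playing the role of external legs) is the standard one; in schematic notation, and without attempting to reproduce the paper's exact conventions:

```latex
G(p,z) \;=\; G_0 \;+\; G_0\,\Sigma\,G_0 \;+\; G_0\,\Sigma\,G_0\,\Sigma\,G_0 \;+\;\dots
\;=\; \frac{1}{G_0^{-1}(p,z)-\Sigma(p,z)} ,
```

so that collecting, order by order in 1/ρ, the pieces of the expansion that sit between two external G_0 legs and cannot be split by cutting a single bare propagator defines the self-energy Σ(p, z).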
To make momentum flow through an L-stop or an R-stop we resort to the so-called fake-particle trick. Consider a particle label, say 1, that appears twice (for instance, in an L-stop and in an R-stop to its right). Before carrying out the average over {x_i}, we multiply the term by 1, written as an integral over the position of a fake copy of the particle, denoted 1̃, and over an internal momentum q. Then we can pretend that particle 1 takes two identities, 1 and 1̃, so that there is no repetition. The price we pay for this simplification is that: • we have an extra integration over the internal momentum q; • we have to deal with an extra factor e^{iq·x_1} at the L-stop, and an extra e^{−iq·y_1̃} at the R-stop; and • the fake particle y_1̃ does not bring a combinatorial N factor, or a 1/V from the normalization of the y_1̃ integral, so that a factor of ρ is missing (this we can ignore if we add a compensating 1/ρ to the final expression). However, the modified momentum-shift formulae are simple enough to justify these inconveniences. Integrating over the position at the L-stop one obtains Eq. (40a); similarly, integrating over y_1̃ at the R-stop, we obtain Eq. (40b). As a warning on momentum shifts, note that one may shift momentum from left to right as long as there is nothing to the left still needing integration (and similarly for right-to-left shifts). Momentum shift can be visualized as a zip with two heads: one pulls both heads until they meet (and then there are no integrals left to be done).

The reduction formula

Imagine we face the situation in which the leftmost stop is an L-stop at index position l, the rightmost stop is an R-stop at index position r + 1, and the particle that prevents the two momentum shifts is the same at both ends, say i_l = i_{r+1} = 1 or any other possible combination (k_l = i_{r+1} = 1, i_l = k_r = 1, or k_l = k_r). If the particle label 1 does not appear again inside the brackets, a nice reduction formula follows, Eq. (41). This can be proved by averaging over x_1. To adjust the power of ρ, just recall that there were of order N choices for (say) the index coincidence k_l = i_{r+1} = 1. Note that the proof of Eq. (41) involves doing four different integrals. Let us see how the fake-particle formulae (Eqs. (40)) yield the same result effortlessly. The introduction of the fake particle transforms the left-hand side of Eq. (41) into Eq. (42). We now merely shift momentum to the right using Eq. (40a) and to the left using Eq. (40b). Now a change of integration variable q −→ p − q and the second of the identities (19) yield Eq. (41). Both Σ^(1) and Σ^(2)_A follow directly from Eq. (41). Also, more general expressions can be found easily from it, as we shall see below.

B. Order 1/ρ

If no further particle-label repetitions arise, the momentum e^{−iq·x_{i_{l+1}}} in Eq. (41) can be shifted to the right until it is killed by the second exponential e^{iq·x_{i_r}}. We then have a set of contributions composed of the product of three harmonic series that are easily seen to add up to a closed expression. If we interpret the two factors G_0(p, z) as external legs of a Dyson resummation, we get the first-order result anticipated in sec. III, Eq. (15).

C. Order 1/ρ²

The cases with two pairs of repeated indices, or with one index occurring three times, contribute to the second-order corrections. The contributions separate naturally into three kinds, according to the arrangement of the repeated indices.

1. The nested case: Σ^(2)_A

Take now the scheme of particle repetitions giving rise to Eq. (43), and place it in between an external pair of particle repetitions. Assume that the index 2 occurs twice and only twice in the chain. We are thus entitled to use the reduction formula, Eq.
(41), for particle 2. The inner momentum q can then be shifted (from either side) until it hits particle 1, where it produces a contribution such as Eq. (43). The only difference is that the role previously played by the external momentum p is now played by the internal momentum q. We get, without need for further computation, the nested contribution Σ^(2)_A of Eq. (17a).

2. The interleaved case: Σ^(2)_B

The Σ_B contribution arises from the interleaved pattern 1 . . . 2 . . . 1 . . . 2. A moment's thought indicates that the leftmost 1 must belong to an L-stop, while the rightmost 2 must be an R-stop. Furthermore, the internal 2 should be an L-stop (otherwise, one would use a fake particle to shift momentum from left to right over it trivially). For the same reason, the internal 1 should belong to an R-stop. Our previous success with the reduction formula, Eq. (41), suggests that we try to deal with all such terms at once, by performing the integral of Eq. (47). (The central 2, 1 particles may also collapse onto a single matrix element, necessarily non-diagonal, which is both an R-stop and an L-stop. Such terms will be considered in sec. IV C 3.) We now introduce two fake particles, 1̃ and 2̃, to transform the above integral. One then shifts momentum from left to right up to i_s and from right to left again up to i_s; Eq. (17b) follows after a suitable change of integration variables.

3. The collapse of an L-stop and an R-stop: Σ^(2)_C

As we have remarked, it can happen that the L-stop and R-stop of Eq. (47) actually belong to the same matrix element, necessarily non-diagonal. However, any non-diagonal term should be paired with a diagonal one. As we mentioned in sec. IV A 1, a diagonal term can be both an L-stop and an R-stop if the medium particle is repeated both to the left and to the right. Hence we will be considering here these kinds of terms (O: off-diagonal matrix element, D: diagonal matrix element), namely 1 . . . D(1) . . . 1 and 1 . . . O(21) . . . 2. The terms with an off-diagonal index appearing three times (1 . . . O(1?) . . . 1) do not belong to Σ_C, and are considered in sec. IV C 4. Let us start with the case 1 . . . D(1) . . . 1. We introduce two extra fake particles to substitute for particle 1 via the corresponding identity. Finally we shift momentum from left to right up to i_r, from right to left up to i_{r+1}, and integrate over the fake position, obtaining Eq. (55). Consider now 1 . . . O(21) . . . 2. Introducing two fake particles, 1̃ and 2̃, we can rewrite it; shifting momentum from left to right up to i_r and from right to left up to i_{r+1}, and adjusting the power of ρ, we get Eq. (59). Adding together the two pieces, Eqs. (59) and (55), we finally find an expression which, after a change of variables and use of the identities in Eq. (19), yields Eq. (17c).

4. The Dyson resummation to order 1/ρ²

Recalling Eq. (14), we notice that we have still not identified the pattern of particle-label repetitions that gives rise to the second-order terms appearing in the Dyson resummation of the first-order self-energy (we have not written the irrelevant external legs). The natural candidate is the pattern 1 . . . 1 . . . 2 . . . 2, where the sequence is L-stop, R-stop, L-stop, R-stop. This expectation is correct, but it will turn out that the constraint imposed by the matrix-product structure needs extra terms to build Eq. (62). These missing terms will be provided by the pattern 1 . . . O(1?) . . . 1. Let us first compute blindly the term 1 . . . 1 . . . 2 . . . 2, incurring a quite instructive mistake. We introduce only one fake particle, 1̃. We shift momentum from left to right up to i_r as usual. At this point, we still need to push the momentum to the right (this is unusual).
We need to perform two integrals; the integrations up to this point yield an intermediate expression. It seems to be an easy matter to complete the computation: one pushes momentum p to the right up to i_s, seemingly yielding a bare propagator G_0(p, z), and we would be left with 2 . . . 2 (a standard diagram for the self-energy at order 1/ρ). However, after some reflection it is clear that an R-stop and an L-stop such as . . . 1 . . . 2 . . ., where both particle 1 and particle 2 appear in off-diagonal matrix elements, must be separated by at least one off-diagonal matrix element. Hence, if there are S matrix elements between the R-stop and the L-stop, when shifting momentum p we will encounter an extra factor and, adding the geometric series, the correct result is Eq. (68). We will now show that the second term in Eq. (68) is cancelled by the contribution from the pattern 1 . . . O(1?) . . . 1. In this pattern, the leftmost 1 belongs to an L-stop and the rightmost one to an R-stop. The first observation is that the central 1 in the O(1?) must necessarily appear in an R-stop (because we never find the same particle twice in any matrix element f(x_i − x_j), and due to the constraint imposed by the matrix product). The second observation is that there must be at least one off-diagonal matrix element between the two R-stops sharing the common particle 1. Introducing two fake particles for particle 1, all that remains is a simple momentum shift, keeping in mind the extra factor produced when going over the primed term [. . .]′. Thus one finally finds the required cancellation.

D. Higher orders

The argument of sec. IV C 1 is fully general. Consider the contribution of order 1/ρ^n to the propagator, rather than the self-energy (i.e. let us include both the connected and disconnected pieces). We can write this as G_0(p, z) H^(n)(p, z) G_0(p, z). Let us emphasize that H^(n)(p, z) refers to the full contribution to the propagator at order 1/ρ^n, not to a particular topological subset (such as the cactus [17]). We may enclose the scheme of particle-label repetitions that generates H^(n)(p, z) within an L-stop and an R-stop with equal particle labels that do not appear again along the chain. Under such circumstances, we are entitled to use the reduction formula, Eq. (41), which yields a contribution of order 1/ρ^(n+1). Clearly this is not the full self-energy at order 1/ρ^(n+1), but it is a genuine part of it that automatically satisfies the constraint discussed above. In particular, if n = 1 this gives the 1/ρ² term Σ^(2)_A(p, z) discussed above. It is interesting to note that, for q ∼ 1 and z = λ + i0⁺, at small λ a Debye-like behaviour is expected (Debye spectrum, see Sect. II). Thus, the vanishing of the amplitude A in Eq. (8) implies that non-trivial cancellations occur at all orders in perturbation theory. Since we are presenting arguments for such a cancellation, we agree with Ref. [30] in that the Σ^(n+1)_A terms alone do not reproduce the correct analytic structure of the theory.

Towards the self-energy at third order

Using the combinatorial rules described above, it is possible to push the perturbative computation to order 1/ρ³, which has never been attempted before. Here we will limit ourselves to the terms without collapse of an R-stop with an L-stop (i.e. we will retain only the terms with 6 vertex functions). The reason is that the combinatorial computation suggests very simple Feynman rules that can be used to obtain the diagrams without lengthy computations. The purpose is to check that, at least within this subclass of diagrams, the cancellation of the prefactor of the p² ω^(D−2) term still occurs.
Let us describe the Feynman rules. Take for instance a term such as
5. Associate a bare propagator, G_0, to each full line.
6. Associate a vertex function to every stop, such that its first argument is always the momentum running over the dotted line.
7. For an L-stop, the second argument of the vertex will be the momentum running over the full line to its left.
8. For an R-stop, the second argument of the vertex will be the momentum running over the full line to its right.
9. Multiply by 1/ρ³ and integrate over the internal momenta.
Applying these rules to the patterns without stop collapse, we obtain the following contributions.
a. Terms L1 . . . L2 . . . L3 . . . R3 . . . R2 . . . R1
Now we wish to compute (S_D: the surface of the unit sphere in D dimensions) the limit and, in general, J_k, defined from I_k(p, z) by the same limiting procedure. The rules to obtain the limit painlessly are simple: 1. Locate a propagator G(q) whose running momentum is never the second argument of a vertex function V(·, q). Apply the simplification For I_1 only l = 0 gives a contribution to J_1, hence For I_2 one easily realizes that only l = 0 contributes to J_2: Since V(q − k, q) = −V(k, q), one has J_2 = −J_1. Both k = 0 and l = 0 contribute to J_3 (for k = 0 we changed the dummy variable l to k): For J_4 both k = 0 and l = 0 are relevant: The only relevant contribution is now l = 0: J_6 stems both from k = 0 and from l = 0. For the k = 0 contribution, we make the change of variable q → p − q to identify the cancellation with J_3: Only l = 0 contributes to J_7: Again, only l = 0 matters:
i. Terms L1 . . . L3 . . . L2 . . . R3 . . . R1 . . . R2
And, once again, only l = 0 contributes: Here we have a contribution from k = 0 as well as from l = 0:
Resummation of the imaginary parts
The resummation of the imaginary parts of the previous diagrams is simple. Using the properties of the vertex V(p, q) and carefully changing the integration variables when necessary, we can show that and the total contribution to the imaginary part proportional to z^((D−2)/2) vanishes.
V. A FIELD THEORY APPROACH
In this section we introduce a field-theoretical representation for the resolvent G(p, z). Within this formalism one is able to obtain the perturbative computation of the self-energy in a more straightforward way than with previous formulations [15]. Interestingly enough, due to the ultraviolet behaviour of the bare propagator of the field involved, such a perturbative expansion yields some divergent terms that can be summed up to zero. The starting point is the following representation for the resolvent: Introducing the fields one has where we have introduced the action and the partition function at a fixed realization of the disorder, given respectively by Now we note that the action, Eq. (107), depends on the field φ only through the values that it assumes at the random positions {x_i}. In fact, in the action, the field φ is always multiplied by the random field ρ, which selects the random points of the lattice {x_i}. So we can substitute the discretized functional measure with the continuous one: this is a crucial step. The continuous version of the functional integral is invariant under the following transformation of the field φ, which we shall call a gauge transformation: This is a local transformation, but we can see that its global version is trivial, because the condition Eq. (109) implies that a global transformation is possible only for h = 0.
This local symmetry is not present in other field-theoretic formulations [15]. We now look at the resolvent: it can be written in the form where ⟨ · ⟩ stands for the average over the action S_ρ[φ]. We immediately see that ⟨ρ(x)ρ(y)φ(x)φ(y)⟩ is gauge invariant. With the change of variables the resolvent can be written with the action where Note that the first term of Eq. (112), when computed in the limit δρ = 0 of the action, Eq. (113), yields the bare propagator G_0(p, z). The first term then corresponds to the free (Gaussian) part of the field theory, and the terms involving three and four fields are the interacting part. The latter can be treated perturbatively using standard diagrammatic techniques of field theory. One can easily see from the form of the interacting terms that in such diagrams no loops involving the δρ field may arise because, at fixed disorder, it acts as an external field, while a generic n-loop diagram comes from the average over the disorder and yields a 1/ρ^n contribution to the resolvent.
The general expression
We may write as well the expression for the arbitrary n-point correlation ⟨δρ(y_1)δρ(y_2) . . . δρ(y_k)⟩, needed to compute the self-energy to order 1/ρ³ or higher in the field theory. To state our result, we shall need some notation. Let We also define P^(k), the set of all possible partitions of {1, 2, . . . , k}. Given a partition ω, the subsets associated to it will be Ω_{l,ω}, with l = 1, 2, . . . , ‖ω‖. We shall need to consider H^(k), a subset of the set of all partitions P^(k). H^(k) is made of all partitions ω such that ‖Ω_{l,ω}‖ > 1 for all l = 1, 2, . . . , ‖ω‖, i.e. partitions in which none of the subsets contains fewer than two integers. Then the general result is: The proof is given in App. A. To recover Eq. (120) from this formula, note that the set H
FIG. 1: Diagrammatic notation
The latter expression depends only on two momenta. Thus, when the vertex V_4 is involved, one has to make its expression symmetric by joining the δρ propagators with the two possible external links offered by this vertex. Fig. 1 defines our diagrammatic notation. Note that the vertex V_4 is not symmetric under the interchange of the two δρ lines. Now we are able to write down the one-loop diagrams. Recalling that the resolvent G(p, z) is given by Eq. (112), we compute the one-loop contribution to ⟨φ(x)φ(y)⟩: = 0 (126) The last diagram gives a general result: every tadpole made with a four-field vertex gives a vanishing contribution, due to the form of the vertex. The term with one external δρ insertion, arising from ⟨δρ(x)φ(x)φ(y)⟩, is given by The last contribution to the self-energy at one loop comes from ⟨δρ(x)δρ(y)φ(x)φ(y)⟩, and is Note that this last contribution has an ultraviolet divergence, since the propagator goes to a finite constant when the internal momentum goes to infinity. Nevertheless, by adding the four diagrams the divergence disappears and one recovers the combinatorial result for the one-loop self-energy.
C. Two loops
Let us first consider the two-loop diagrams arising from ⟨φ(x)φ(y)⟩: The diagram L_7 seems to be already included in the Dyson resummation of the one-loop result. However, since diagrams with one and zero external legs have to be included in the diagrammatic expansion, it also provides a genuine contribution to the two-loop result.
Note that in order to obtain L Next we must consider the contribution arising from ⟨δρ(x)φ(x)φ(y)⟩: As before, we have used the disconnected part of the 4-point function, apart from L_10, where we have used the 3-point function. Note also that L^(2)_13 arises both in the Dyson resummation and in the two-loop expansion. Finally, we consider the diagrams arising from ⟨δρ(x)δρ(y)φ(x)φ(y)⟩: We now show how the diagrams can be summed up to give the combinatorial expressions for the self-energy. Consider the diagrams L_1, L_8 and L_14, where we have defined In the same way we can combine L_2, L_9 and L_15; they have the same topology as Σ_B(p, z): where We now add the diagrams L_13 and L_18, because they produce the Dyson resummation of the one-loop self-energy, which we want to isolate from the other contributions that have to be included in the self-energy at two loops. They give where At this point one can check that the remaining diagrams, L_10 among them, combine so that the combinatorial result is recovered.
D. The small-p behaviour
We will now prove that the prefactor of the term p² λ^((D−2)/2) is zero to all orders in perturbation theory. For this purpose, the field-theory approach turns out to be very convenient. Consider the vertex V_3 with three fields. It is easy to see that this vertex is symmetric and can be written as Moreover, one can check directly that Consider now the vertex with four fields. We see that the Wick contractions between the fields δρ symmetrize the vertex. In fact, in every diagram this vertex appears in the form The important point is that this vertex vanishes when one of the two G_0 bare propagators carries a null momentum. Consider now a diagram that arises from the expansion of the resolvent G(p, z). At the lowest order in z, when the diagram contains some three-field vertices one has to consider only the symmetric part of these vertices. Let us apply the method explained above in order to extract the contribution to the self-energy proportional to z^((D−2)/2). Apparently, if one sets to zero the momentum of a bare propagator that enters into a vertex, then its contribution to the imaginary part vanishes. This seems very strange, because from this argument it follows that only L_3 contributes to the imaginary part. Moreover, if we consider the two-loop contributions, we see that there are no contributions to the imaginary part of the self-energy, because the diagrammatic expansion L_1-L^(2)_18 contains at least one vertex that vanishes when we set to zero one of the momenta carried by a φ-propagator. However, the argument is not complete: the diagrammatic expansion also contains the Dyson resummation of the one-loop self-energy. This is the fact that completes the argument and will lead us to prove that a contribution proportional to z^((D−2)/2) p² cannot appear at any order in perturbation theory. We start by checking the argument just given at the one-loop level. Let us introduce the notation so that the self-energy at one loop can be written diagrammatically as From this expansion and from the above argument one sees that the imaginary part of the self-energy (we will always refer to the imaginary part proportional to z^((D−2)/2)) may come from the last diagram and is correctly given by which is also the contribution that can be easily calculated from the combinatorial expression. Now consider the expansion at two loops.
From the combinatorial expressions of the self-energy we immediately see that the imaginary part comes only from Σ_C and can be rewritten in the form Consider now the diagrammatic expansion for the two-loop self-energy, L_1-L_18. We have to extract from this expansion the term Now we will do this in a diagrammatic way. Consider the diagrammatic expansion of the above term (Fig. 2). If we want the imaginary part of the self-energy proportional to z^((D−2)/2), we have to consider the term and the diagrams in Fig. 2. When we calculate this contribution we have to set to zero the momentum carried by one internal propagator G_0, so that the contribution coming from Λ(p, z) does not matter. We have to calculate only the term coming from the Dyson resummation, so that the imaginary part of the self-energy at two loops is given by At this point we can also give the analytical argument where we have used the fact that Along the same lines we can give the imaginary part proportional to z^((D−2)/2) at three loops, because this contribution comes from the Dyson resummation of the one- and two-loop self-energies: At this point we can give a general expression for the imaginary part proportional to z^((D−2)/2) at any perturbative order. From this expression we can prove by induction that the imaginary part of the self-energy proportional to z^((D−2)/2) cannot appear at any order in perturbation theory. In fact, we have seen that it does not appear at one and two loops, so we prove that, if it does not appear up to n loops, it does not appear at n + 1 loops either. We can see that where γ ≥ 2 and where we have shown only the term at lowest order in z. Moreover we have where we have neglected the higher orders in z. It follows that the generic term in Eq. (192) is of order with β ≥ 4, because [G_0(p, z)]^(−1) ∼ p². This completes the proof.
VI. CONCLUSIONS
In conclusion, we have given a detailed description of the perturbative high-density computation of the resolvent (and in particular of the density of states) of Euclidean random matrices within two different formalisms. The combinatorial formalism of Sec. IV results in fewer diagrams and is probably more convenient when the goal is to obtain an expression for the self-energy at a given order. On the other hand, the field-theoretic formalism (Sec. V), though producing a larger number of diagrams, has allowed us to analyze the p → 0 behavior at all orders in perturbation theory. This analysis shows that the imaginary part of the self-energy in the limit of small momenta (which controls the width of the Brillouin peak of the dynamic structure factor) has, in contrast to previous claims [15-18,30], the structure −Im Σ(λ, p) = Bλ where C, B > 0 are amplitudes and c is the speed of sound. This implies in particular a p⁴ scaling of the Brillouin peak width, but it also shows that the structure of the theory is more complex than in the case of scattering from lattice models [29].
The cornerstone of the proof is a general result for the k-point correlation functions of ρ (rather than δρ). The sought correlation function, in the thermodynamic limit, is ⟨ρ(y_1)ρ(y_2) . . . ρ(y_k)⟩ = 1 + Eq. (A1) looks very similar to Eq. (122), yet we note the following crucial differences:
• The partitions ω belong to P^(k) rather than to the restricted set H^(k). In particular, the term equal to 1 in Eq. (A1) follows from the only partition ω with ‖ω‖ = k, namely { {1}, {2}, . . . , {k} }, which obviously does not belong to H^(k).
• In the innermost product in Eq. (A1), a subset Ω_{l,ω} with just one element, ‖Ω_{l,ω}‖ = 1, merely contributes a factor of one. Hence, for all practical purposes, such a subset Ω_{l,ω} can be ignored.
To establish Eq. (A1), first note that ⟨ρ(y_1)ρ(y_2) . . . ρ(y_k)⟩ = (1/ρ^k) Σ_{i_1,i_2,...,i_k=1}^N ⟨ · · · ⟩, where the average is taken with respect to the flat probability measure. Now, for a given assignment of the k particle labels i_1, i_2, . . . , i_k, we declare that all terms with a coinciding particle label i_r form a subset Ω_{l,ω}. It is then obvious that every assignment of the k particle labels i_1, i_2, . . . , i_k defines a partition ω in P^(k). Furthermore, a little reflection shows that all possible partitions in P^(k) can be obtained in this way. Eq. (A1) follows from the following three facts about a given partition ω:
1. There are N(N − 1) · · · (N − ‖ω‖ + 1) possible assignments of the k particle labels i_1, i_2, . . . , i_k that yield the partition ω (you are given N choices for the particle that appears in the subset Ω_{1,ω}, N − 1 for the one appearing in Ω_{2,ω}, and so forth).
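Fact 1 is easy to verify by brute force. The following short Python sketch (an illustration added for this write-up, not part of the original derivation; all function names are ours) enumerates every assignment of k particle labels drawn from N particles, builds the induced partition, and checks that a partition with m blocks is produced by exactly N(N − 1)···(N − m + 1) assignments; it also counts how many of the induced partitions belong to H^(k), i.e. have no singleton blocks.

```python
from itertools import product

def induced_partition(labels):
    """Partition of positions {1,...,k} induced by a label assignment:
    positions carrying the same particle label fall in the same block."""
    blocks = {}
    for pos, lab in enumerate(labels, start=1):
        blocks.setdefault(lab, set()).add(pos)
    return frozenset(frozenset(block) for block in blocks.values())

def falling_factorial(N, m):
    """N(N-1)...(N-m+1): ways to pick distinct particles for m blocks."""
    result = 1
    for j in range(m):
        result *= N - j
    return result

def check_counting(N, k):
    """Brute-force check of fact 1 above."""
    counts = {}
    for labels in product(range(N), repeat=k):
        part = induced_partition(labels)
        counts[part] = counts.get(part, 0) + 1
    for omega, c in counts.items():
        assert c == falling_factorial(N, len(omega))
    # Partitions whose blocks all have at least two elements form H^(k).
    in_H = sum(1 for omega in counts if all(len(b) >= 2 for b in omega))
    return len(counts), in_H

if __name__ == "__main__":
    # For N >= k every partition of {1,...,k} is reached: Bell(4) = 15 partitions,
    # of which 4 belong to H^(4) (the single block and the three pairings).
    print(check_counting(N=6, k=4))  # -> (15, 4)
```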
The MST1R/RON Tyrosine Kinase in Cancer: Oncogenic Functions and Therapeutic Strategies
Simple Summary
MST1R/RON receptor tyrosine kinase is a highly conserved transmembrane protein present on epithelial cells and macrophages, and recently identified in a T-cell subset. RON activation attenuates inflammation in healthy tissue. Interestingly, it is overexpressed in several epithelial neoplasms, with increasing levels of expression associated with worse outcomes. Though the mechanisms involved are still under investigation, RON contributes to carcinogenesis via modulation of the immune tumor microenvironment and activation of numerous oncogenic pathways, and it is protective under cellular stress. Conversely, inhibition of RON abrogates tumor progression in both animal and human tissue models. Given this, RON is a targetable protein of great interest for cancer treatment. Here, we review RON's function in tissue inflammation and cancer progression, and review the cancer clinical trials to date that have used agents targeting RON signaling.
Abstract
The MST1R/RON receptor tyrosine kinase is a homologue of the more well-known MET receptor. Like MET, RON orchestrates cell signaling pathways that promote oncogenesis and enable cancer cell survival; however, it has a more unique role in the regulation of inflammation. RON was originally described as a transmembrane receptor expressed on tissue-resident macrophages and various epithelial cells. RON is overexpressed in a variety of cancers, and its activation modifies multiple signaling pathways, with resultant changes in epithelial and immune cells which together modulate oncogenic phenotypes. While several RON isoforms have been identified with differences in structure, activation, and pathway regulation, increased RON expression and/or activation is consistently associated with worse outcomes. Tyrosine kinase inhibitors targeting RON have been developed, making RON an actionable therapeutic target.
Introduction
Récepteur d'Origine Nantais (RON) is a receptor tyrosine kinase (RTK) identified in 1993 by Ronsin et al. as a homologue of cMET and a member of the MET proto-oncogene family [1]. The RON protein is encoded by the MST1R gene, located on chromosome 3 in humans, and is highly conserved across species. RON is translated as a single transmembrane pro-protein which is subsequently cleaved in its extracellular portion by proteases. A 40 kDa α-chain, solely extracellular, is released and binds to the remaining 150 kDa transmembrane β-chain. The resulting receptor is composed of an extracellular sema domain, a transmembrane domain, and an intracellular portion containing the kinase domain. RON is activated by the binding of macrophage-stimulating protein (MSP), also known as macrophage stimulating 1 (MST1), the sole ligand of RON identified to date [2,3]. MSP is expressed as a pro-protein in the liver, lungs, adrenal glands, placenta, kidneys, and pancreas and requires proteolytic cleavage to become active [4]. In addition, aberrant activation of the receptor has been described in many solid tumors. These include pancreas, lung, liver, breast, colon, prostate, bladder, and ovarian cancers, as well as AML and Burkitt lymphoma [13-25]. RON activation contributes to tumor progression and metastasis by promoting cell proliferation and motility and by inhibiting apoptosis.
In the tumor microenvironment (TME), RON is expressed in epithelial tumor cells, tumor-associated macrophages (TAMs), tumor-associated myeloid-derived suppressor cells (MDSCs), and cancer-associated fibroblasts (CAFs) [26]. Overall, RON activity in cancer is the result of a complex cascade of events induced by RON activation, leading to direct effects on tumor progression in epithelial cancer cells and an indirect effect through the modification of immune phenotypes in the tumor microenvironment toward one that is tumor-permissive. Evaluation of the role of RON in each individual cell type is needed to better understand the mechanisms by which RON regulates tumorigenesis.
RON Attenuates Inflammation
Various disease models have demonstrated that under normal biological conditions, RON attenuates inflammation by decreasing the secretion of pro-inflammatory cytokines. These mechanisms are important to understand, as alterations in the immune microenvironment can affect tumor progression. RON activation in macrophages occurs following MSP binding. MSP is secreted in an inactive form and requires activation by serine proteases including matriptase, hepsin, and HGF-A. Mice harboring a kinase-dead RON mutation, designated TK-/-, have an increased susceptibility to nickel-induced acute lung injury, with clusters of cells in the lungs producing granzymes and composed of macrophages, lymphocytes, and neutrophils [27]. In a contrasting model, MSP activation of RON led to a decrease in the production of TNF-α by alveolar macrophages during and following LPS stimulation. This led to a less severe form of acute lung injury with significant alveolar wall thickening and protein leakage [28]. Activation of RON inhibits LPS-induced degradation of IκB-α, thus inhibiting NF-κB nuclear translocation [29]. This in turn leads to a reduction in the shock mediators prostaglandin E2 (PGE2) and the inducible enzyme COX-2 [29]. In murine alveolar macrophages, RON activation leads to a decrease in NF-κB activity. NF-κB is responsible for regulating TNF-α at the transcriptional level [29]. In addition, the enzyme ADAM17, which cleaves TNF-α protein for activation, is downregulated upon MSP binding of RON and increasingly expressed in RON knockdown macrophage lines [30].
In addition to its modulation of shock mediators, RON activation induces gene expression patterns characteristic of anti-inflammatory macrophages, such as the induction of Arginase I and the suppression of the pro-inflammatory marker iNOS. This combined effect leads to the conversion of arginine into ornithine rather than nitric oxide, a free radical and a major source of oxidative stress. Effects on Arginase I expression are observed only when macrophages are stimulated with LPS, indicating that RON modifies polarization following TLR-4 activation but does not induce an anti-inflammatory state on its own. RON signaling also suppresses the production of pro-inflammatory cytokines after LPS or IFN-γ stimulation [22]. These notions are revisited in the section reviewing macrophage modulation in cancer. The importance of RON-regulated inflammation is depicted in a simian immunodeficiency model identifying an inverse relationship between levels of RON expression and inflammation-related damage to the central nervous system. Real-time RT-PCR demonstrated a >60% reduction in RON expression in the brains of animals with CNS lesions compared to those of uninfected controls. Disease progression was also associated with an increase in the inflammatory cytokine TNF-α and a decrease in immune-suppressive Arginase I in tissue samples at progressive time intervals. Not surprisingly, an inverse relationship between viral load and RON was also described [31]. RON is involved in downregulating the damaging tissue-specific effects of unchecked inflammation under normal homeostatic conditions. Overall, increasing levels of RON activity are associated with decreased inflammation, whereas decreasing levels of RON activity promote an inflammatory state.
RON Isoforms
In the last decade, several functional isoforms of RON have been identified, primarily resulting from post-transcriptional splicing alterations of full-length RON. Several of these are specific to certain tumor types. Some RON isoforms are constitutively active. Other isoforms lack the extracellular domain, requiring new drug-binding strategies to inhibit kinase activity. Known isoforms include 1254T, 170, 160E2/E3, 160, P5P6, 155, 110, 85, and 55 (sf-MST1R) [26-28]. A subset of isoforms, 140, 155, 160, 165, P5P6, and sfRON, is known to exist in a constitutively phosphorylated state, with several originally identified in human colorectal adenocarcinoma [23]. Constitutively active RON isoforms have demonstrated up to 90% inhibition of LPS-induced COX-2 protein and mRNA expression even in the absence of MSP [13]. Furthermore, both the MSP-dependent and MSP-independent isoforms 155, 160, and 165 induced scatter phenotypes after plating when transfected into Madin-Darby canine kidney (MDCK) cells. These also demonstrated transformative cell properties and anchorage-independent growth in transfected NIH3T3 cells in both in vitro and in vivo colorectal cancer models [29]. In 2013, Moon et al. utilized mutagenesis analysis and stepwise base substitutions to identify enhancers of exon 11 inclusion in RON pre-mRNA and pinpointed the 2-nt RNA of their exon 11 mutant, 11-3. In addition to the wild-type AG sequence, the nucleotide pairs GA, CC, UG, and AC enhance inclusion of exon 11, while the base mutants UA, GC, UU, and GG obliterate inclusion of exon 11 [32]. Chakedis et al. characterized RON isoform expression in pancreatic cancer cell lines and patient-derived pancreatic cancer xenografts.
An increase in isoforms 165, P5P6, and sfRON was noted as overall RON expression increased. Isoform 165 represented up to 30% of the total RON transcript in high-RON-expressing xenografts. In addition, these three isoforms constituted 42% of the total RON transcript in the PDX lines evaluated. They were the first to report that both P5P6 and sfRON have in vivo transformative tumorigenic activity in pancreatic cancer models [33,34]. The transcriptome associated with the expression of these variants differed between isoforms, demonstrating the need for additional studies to better understand isoform function. In 2011, sfRON was identified as the predominant phosphorylated RON isoform in primary human breast cancer samples. sfRON was expressed in the breast cancer line MCF7 and associated with PI3K pathway activation, leading to increased migratory capacity in vitro and larger primary orthotopic tumors in vivo. Differences in pathway activation were noted between sfRON and wtRON, with the former signaling via PI3K activation and MAPK inhibition, whereas both pathways were active with wtRON [29]. Recently, Lai et al. indicated for the first time that sfRON is expressed in T-cells and inhibits Th1 differentiation of immature CD4+ T-cells, leading to blunting of the antitumor response. Furthermore, sfRON can attenuate recruitment and trafficking of antitumor T-cells from lymph nodes to the TME. Knockout of sfRON essentially obliterated establishment and propagation of metastatic lesions in this breast cancer mouse model [35]. Furthermore, transfer of sfRON-knockout CD4+ T-cells to RON WT mice was largely protective against metastatic outgrowth following tumor cell injections. In addition to breast and pancreatic cancer, sfRON is known to be expressed in gastric cancer. Wang et al. demonstrated enhanced cancer cell proliferation via significantly upregulated glucose metabolism intermediates in sfRON-high human gastric cancer tissue samples compared to RON-overexpressing or control samples [36]. In addition, gene set enrichment analysis and qRT-PCR analysis confirmed the upregulation of glucose metabolism intermediates in the GTL-16 and MKN-45 gastric cancer cell lines. Further investigation identified SIX1 as the effector of the sfRON/β-catenin pathway, which leads to tumor growth and enhanced expression of the glycolytic genes GLUT1, LDHA, and HK2 (Figure 2). Greenbaum et al. transfected low-RON-expressing HEK293 cells with wtRON and isoforms 155, 160, and 165 and found increased motility via scratch assay in all lines relative to control. Again, gene expression analysis of the RON-expressing cell lines demonstrated isoform-specific variation [37]. Downstream signaling induced by RON isoforms has also been shown to differ by cancer type [27-40]. Characterizing RON kinase expression in lung cancer has led to the identification of novel isoforms, including one with deletion of exons 18 and 19 in the C-terminal region of RON [31]. Krishnaswamy et al. evaluated numerous SCLC and NSCLC cell lines using cDNA, exon-specific primers, and PCR amplification to identify four novel RON isoforms. These included skipping of exons 15-19, 16-19, 16-17, and 16 alone [41]. These transcript variants consist of exon skipping within the kinase domain. This specific study did not evaluate isoform function.
Clearly, it is important to understand both the mechanisms underlying the expression of the many isoforms of RON and the biological impact of different isoforms in the context of specific cancer types in order to generate rational strategies for designing therapies.
Figure legend: (A) Under hypoxic conditions, RON binds to hypoxia-inducible factor (HIF) and β-catenin to form a complex that translocates to the nucleus and drives the expression of β-catenin target genes such as c-Jun and CA-9. The upregulation of these genes leads to increased cell proliferation and metastatic capabilities that drive tumorigenesis. (B) In response to treatment with chemotherapeutic agents, RON translocates to the nucleus and binds Ku70 and DNA-PK to form a complex that drives the expression of genes related to the non-homologous end-joining (NHEJ) pathway. This form of DNA repair prevents the apoptotic events that would normally be activated by DNA double-strand breaks, making these cells resistant to chemotherapy. (C) A constitutively active form of RON (sfRON) activates the AKT pathway through phosphorylation. AKT then phosphorylates GSK-3β to inhibit its function. Inactive GSK-3β cannot inhibit β-catenin, which is free to enter the nucleus and activate the SIX1 pathway, which drives the expression of glycolytic genes. Upregulation of these genes enhances glucose metabolism, which increases the cell proliferation necessary for tumorigenesis.
RON Alters Macrophage Polarization
Macrophages possess broad modulatory and effector repertoires by which they serve two overarching purposes. The first is to protect the host by upregulating the production of pro-inflammatory cytokines and reactive oxygen intermediates, exerting tumoricidal activity, and promoting Th1 cells. Macrophages are also responsible for resolving inflammation and aiding in tissue rebuilding and remodeling following an insult [16]. Macrophages can exist anywhere along this pro- or anti-inflammatory continuum, with LPS and IFN-γ being potent stimulators of inflammatory cascades while IL-4/IL-13 induce pathways of immune suppression [17]. Their fate is influenced by signals received during maturation, activation, and immune engagement in the form of molecular, epigenetic, or host-specific signaling [17,18].
The dichotomous classification of these largely in vitro-derived extremes refers to classical M1 macrophages, known for their pro-inflammatory and anti-tumorigenic properties, and alternative M2 macrophages, which are immunosuppressive with pro-tumorigenic qualities (Figure 3) [16,19]. RON alters macrophage polarization, with implications for cancer biology. RON activation suppresses the anti-tumorigenic M1 macrophage phenotype by inhibiting STAT1 phosphorylation and NF-κB activation induced by IFN-γ and LPS, respectively [20]. Several studies in mice have continued to expand our understanding of the tumor-specific changes that occur as well as their implications for tumor regulation. Transcriptional profiling in FVB mice, with M2-biased peritoneal macrophages, demonstrated that MSP stimulation of intact RON can lead to significant downregulation of genes in the IFN-γ pathway and significant upregulation of genes involved in immune tolerance and tissue repair [18]. The IFN-γ pathway is known to have inhibitory effects on tumor initiation and promotion through modulation of innate and adaptive immunity [21]. Increased expression of RON correlates with increased expression of Arginase I, a pro-tumorigenic enzyme characteristic of M2 macrophages [19]. Activation of RON in the tumor microenvironment may facilitate tumor survival by hijacking inflammatory and tissue-repair pathways to promote self-survival against host defenses [22]. FVB mouse tumor models (papilloma, fibrosarcoma, and methylcholanthrene-induced) demonstrated slowed tumor initiation and overall outgrowth with decreasing levels of RON. In another mouse model, Gurusamy et al. demonstrated a significant reduction in tumor size and an increase in tumor cell apoptosis in tumors from TK-/- hosts after orthotopic injection of the transgenic TRAMP-C2R33 prostate cancer cell line. In addition to altering macrophage phenotype, RON expression results in modification of macrophage migration ability. Many researchers have noted a decrease in F4/80+ CD68+ macrophages within the TME of RON knockdown models across various cancer types [1]. However, Gurusamy described increased intratumoral macrophage infiltration in RON TK-/- derived tumors. Subsequent in vitro analysis of macrophage migration using the immortalized murine alveolar macrophage line MH-S with RON expression (shNT) or RON knockdown (shRON) revealed significant increases in migration ability in the RON knockdown (shRON) macrophage cohort. These findings consistently indicate that RON function impacts macrophage migration; however, additional work is needed to pinpoint how RON specifically impacts macrophage populations across cancer types. Increased macrophage RON expression is noted to alter cell signaling pathways, suppressing CD8+ T-cell activation, and is associated with cancer progression (Figure 2) [2-4]. CD8+ T-cell depletion studies in mice negated the benefits of RON knockdown, implying interplay between RON-expressing macrophages and T-cell regulation in tumor control. Only recently has RON expression been demonstrated in T-cells, with an associated blunting of the anti-tumor response [35]. In TK-/- mouse models with increased F4/80 macrophage infiltration, Annexin V/PI staining demonstrated increased cell death compared to WT. Together, these findings highlight the importance and influence that host RON status can have on tumor growth, macrophage modification, and immune-modifying T-cell interactions [9].
RON in Cancer Cells
In humans, RON is overexpressed in up to 50% of breast cancers, 40% of colorectal cancers, over 80% of human pancreatic cancers, and 90% of prostate cancers. Overexpression promotes tumor growth in these and other cancers [2,10-14]. Furthermore, it is well documented that RON is minimally expressed in benign tissue types, is increasingly expressed with cancer progression, and is typically maintained in metastases [14-17]. This is clinically significant since increased RON expression has been associated with worse clinical prognosis in breast, colorectal, prostate, and pancreatic cancer [14,15,17,18]. Here, we review what is known about specific human cancers as well as the ongoing investigation in respective animal models. Welm et al. assessed microarray gene data of 295 breast cancer patients from the Netherlands Cancer Institute and noted decreased time to metastasis and decreased overall survival in patients with concomitant overexpression of RON, MSP, and MT-SP1. Furthermore, concomitant overexpression of these genes was an independent predictor of poor outcome and, when considered in combination with a 70-gene prognostic signature, was more accurate in predicting five-year metastasis than either alone [13,38].
Animal models evaluating metastasis between RON WT and RON TK-/- hosts found a significantly decreased lung tumor burden in RON TK-/- hosts, leading to an improvement in overall survival by 50% compared to RON WT hosts. A defect in supporting the conversion of seeded metastases to overt metastatic colonies was reproduced across several cell lines, including polyomavirus middle T antigen (PyMT-MSP), polyomavirus MSCV-IRES-GFP (PyMT-MIG) control cells, lung alveolar/bronchiole carcinoma-P0297 (LAP-MSP), and lung alveolar/bronchiole carcinoma-P0297 control (LAP-MIG) [13,21,38,39]. Consistent with these findings, expression of RON increases during the progression of colorectal cancer and is associated with worse tumor differentiation [14,42]. In pancreatic cancer, human tissue samples demonstrate progressively higher levels of RON expression during progression from pancreatic intraepithelial neoplasia (PanIN) to pancreatic adenocarcinoma [17,33,34,40]. Furthermore, analysis of the human TCGA pancreatic cohort revealed an association between RON expression and decreased disease-free survival and overall survival [17,40]. Given that KRAS is mutated in up to 90% of pancreatic cancers, mouse models with the same mutation have been used to study the role of RON [18,33,34,40]. Babicky et al. demonstrated that mice with RON overexpression and oncogenic KRAS developed more rapid progression of pancreas-specific acinar-ductal metaplasia and PanIN lesions compared to age-matched mice with mutated KRAS alone. In addition, overall survival was strikingly reduced in KRAS-LSL-G12D/RON/Cre (KRC) mice compared to KRAS-LSL-G12D/Cre (KC) mice. These studies demonstrated that RON promotes both tumor initiation and progression in KRAS-driven pancreatic cancer. Further supporting this are findings that KRAS-LSL-G12D/RON TK-/- mice have slower pancreatic cancer onset and progression, and prolonged survival, when compared to KC models with physiologic levels of RON [40]. RON is also overexpressed in hepatocellular carcinoma [22]. In this cancer type, expression of the cytokines IL-6, TNF-α, IL1-α, and HGF is associated with increasing levels of RON expression [22,43]. RON's expression across numerous solid tumors, as well as its consistent association with worse outcomes, would make effective therapeutic strategies applicable across many cancer types.
RON Crosstalk and Other RTKs
In addition to ligand binding and homodimerization, receptor tyrosine kinases (RTKs) can be activated by heterodimerization. These physical and functional interactions have been shown to play a role in tumor progression and can contribute to treatment failure. Crosstalk between RON and several other proteins has been described and is implicated in tumorigenesis in several solid tumors.
RON and EGFR Crosstalk in Cancer
Co-expression of RON and EGFR has been reported in several tumor models including lung, colorectal, liver, and breast. Cooperation between RON and EGFR has been previously reported in bladder cancer, as they are co-expressed in one third of patients [25]. This co-expression is associated with tumor invasion and decreased survival. In vitro inhibition of EGFR modulates RON activity in bladder cancer cell lines, demonstrating the interplay between these two RTKs [44,45]. RON has also been implicated in promoting tumor growth in head and neck squamous cell carcinomas (HNSCCs) [46]. In 2013, Keller et al. observed RON expression in 64% of primary tumors.
This expression was associated with both EGFR expression (p < 0.01) and EGFR activation (p < 0.001) [46]. In vitro experiments revealed that RON interacts and synergizes with EGFR to promote cell migration and proliferation. The authors showed that RON and EGFR can transactivate each other when stimulated by their respective ligands [46]. Another interaction between RON and EGFR occurs in protein complexes that also contain syndecans and integrins at the cell surface. Interaction with EGFR results in the stabilization of this EGFR-RON complex during times of cellular stress, possibly resulting in the prevention of cell cycle arrest via c-Abl in both HNSCC and breast carcinomas [47]. Such control of the cell cycle does not seem to depend on EGFR activity and could explain the high number of tumors refractory to EGFR inhibition. In colorectal cancer, the RON and cMET RTKs were activated as a result of treatment with cetuximab, an anti-EGFR monoclonal antibody. This activation conferred treatment resistance [42]. In this publication, Graves-Deal et al. also showed that crizotinib, an inhibitor of cMET and RON, was able to circumvent acquired cetuximab resistance. This overlap between RTK functions indicates that in CRC, as in many other solid tumors, a broad spectrum of kinase inhibition may be more effective than single-target approaches [42].
RON and MET Crosstalk in Cancer
The RON and MET receptor tyrosine kinases belong to the same subfamily and are 68% homologous. Both receptors can activate signaling pathways such as PI3K/AKT and MEK/ERK and have been implicated in tumorigenesis through the regulation of cell proliferation, apoptosis, metastasis, angiogenesis, maintenance of cancer stem cells, and resistance to chemotherapy [48-54]. RON and MET are often co-expressed in cancer, and crosstalk between the two receptors has been demonstrated [52-57]. Both receptors can form homo- and heterodimers and engage in transphosphorylation [52-57]. RON and MET can be targeted by small molecules, with similar IC50s for certain drugs. More studies and clinical trials have been conducted with MET. While targeting tumors with MET overexpression is possible, the efficacy may be blunted due to functional redundancy, most notably with RON. Here, we review several studies that were conducted to evaluate crosstalk between RON and MET. It has been reported that in cancers 'addicted' to MET signaling, RON phosphorylation is dependent on the level of expression and activation of MET [52]. Benvenuti et al. showed that RON can be transphosphorylated by MET in gastric and lung cancer cell lines. They also demonstrated that RON activation is sensitive to MET-specific molecular inhibitors and that RON knockdown in MET-addicted tumors affects cell proliferation and tumorigenicity [52]. In prostate cancer, co-expression of RON and MET promotes metastasis through ERK1/2 pathway activation. Targeting them using siRNA or the small-molecule inhibitor foretinib suppressed in vitro migration and invasion of prostate cancer cells. This suggests that both receptors are necessary to achieve the full metastatic potential of prostate cancer cells [53-55,58]. Similarly, co-overexpression of both receptors was reported in triple-negative breast cancer (TNBC) [56]. Weng et al., using in vitro testing of different kinase inhibitors, demonstrated that targeting both MET and RON reduced cell migration, proliferation, and tumor size in vivo using murine xenografts.
Despite the lack of validation by genetic approaches, they showed that targeting both RON and MET can have a more potent effect than targeting RON alone [56]. These results suggest that in TNBC, RON and MET heterodimers could more efficiently activate signaling pathways including MEK/ERK or PI3K/AKT [52-56,58]. It is also possible that RON and MET have partially overlapping functions, though the entire spectrum of activity of both receptors is required to reach the maximum effect. Consequently, targeting both receptors might be necessary to achieve a robust anti-tumor response. The case of pancreatic cancer is of interest, as the relative importance of one RTK over the other is controversial. Hu et al. tested several TKIs on pancreatic cancer cell lines in vitro and in vivo. They observed that TKIs targeting both RON and MET led to a reduction in cell proliferation and migration, whereas specific inhibition of MET had no effect [57]. Interestingly, they observed a similar reduction in cell proliferation and migration using tivantinib, a MET-specific agent that binds dephosphorylated MET rather than the kinase domain. This result suggests a possible role for MET that is independent of its kinase activity. RON also has biological functions independent of its kinase activity which have yet to be fully understood. This is evidenced by the fact that RON constitutive knockout mice are embryonic lethal due to a deficit in peri-implantation, while mice harboring a RON kinase-dead mutation are viable [59]. These kinase-independent functions of RON and MET remain to be characterized. It would be interesting to define whether their respective kinase-independent activities harbor functional redundancy. Another study of RON and MET in pancreatic cancer reached different conclusions. Vanderwerff et al. conducted an evaluation of transcriptomic signatures following MSP and HGF in vitro stimulation of BxPC3 cells [60]. They observed that both ligands led to enhanced migration and activation of the ERK and MAPK pathways. Importantly, they also showed that MSP stimulation led to transcriptomic effects similar to those of HGF stimulation, but recapitulating only a fraction of the effects observed after HGF stimulation. The authors concluded that targeting both receptors might be necessary in pancreatic tumors co-expressing the two receptors. One limitation of this work was that only one cell line was investigated, BxPC3. This cell line is poorly representative of the human disease since it does not carry a KRAS mutation, which is present in >90% of human pancreatic cancers. RON has been shown to be a key regulator of KRAS mutant phenotypes [16]; therefore, we can speculate that the relatively small contribution of MSP-RON observed compared to HGF-MET may be explained by the fact that the KRAS-RON axis is not the main oncogenic driver in this cell line. It would be ideal to repeat such transcriptomic analyses on additional cell lines. Finally, MET is expressed by macrophages and stimulates an anti-inflammatory response by modulating cytokine expression [36,57]. In T-cells and B-cells, MET is expressed following TCR and BCR signaling and favors activation and antibody production. In neutrophils, MET is present in granules and is released upon stimulation. In DCs, MET expression renders cells tolerant to the immune reaction. The RON kinase is also expressed in immune cells and regulates the inflammatory response (see the RON and immune regulation section).
RON and MET have both been described as regulators of macrophages promoting an anti-inflammatory state, and we can therefore speculate that inhibiting only one receptor would not be sufficient to reverse macrophage polarization if a functional redundancy also exists in macrophages. Further work is required to evaluate the level of crosstalk between RON and MET in immune cells.
RON Crosstalk with Other RTKs
RON crosstalk has most often been reported to occur with MET or EGFR, as discussed above. However, Batth et al. reported crosstalk with the androgen receptor (AR) in prostate cancer [55,58]. Androgen deprivation is an initially effective treatment strategy in localized, androgen-sensitive prostate cancer, though once refractory, it progresses to metastasis. One mechanism of resistance appears to be the reactivation of androgen receptor signaling controlled by RTKs. RON is highly expressed in castrate-resistant prostate cancer and is activated following androgen deprivation to compensate for the loss of AR expression [55,58]. RON activates the transcription of the anti-apoptotic AR target gene c-FLIP by binding to its promoter region. c-FLIP is not the only AR target gene of importance in prostate cancer, and it will be interesting to evaluate the global impact of RON on AR targets and signaling. These findings suggest that inhibition of RON combined with AR antagonists may be an effective therapeutic approach in advanced prostate cancers, a hypothesis which deserves further evaluation. RON transactivation with PDGFR has been reported in human mesangial cells [58]. Physical interaction with PDGFR receptors allows a ligand-independent activation of RON leading to an anti-apoptotic function. Although such RON activation has been demonstrated in IgA nephropathy and not in solid tumors, it shows that such crosstalk is possible and could happen in some of the many PDGFR-altered tumors. In pancreatic cancer, RON has been shown to interact with the insulin-like growth factor 1 receptor (IGF1-R) and becomes activated by IGF1-R after IGF1 stimulation [61,62]. This activation modifies IGF1-R-associated transcriptomic signatures and promotes migration induced by IGF1 stimulation. Another study linked RON and IGF1-R, revealing that RON is expressed in rhabdomyosarcomas and Ewing tumors, both childhood sarcomas [62]. The authors identified RON as a key player enabling resistance to IGF1-R inhibition, by serving as an alternative activator of IGF1-R signaling molecules. Finally, Conrotto et al. described the interaction between RON and plexins, which are cell membrane receptors for semaphorins [63]. The authors showed that Semaphorin 4D can indirectly activate RON when interacting with Plexin B1 and that this activation promotes an invasive growth response in vitro. Crosstalk between RON and other kinases contributes to tumor progression and, potentially, to treatment failure in various cancer types by way of redundant pathway activation and overlapping functions. Ongoing research is necessary to delineate these relationships to better guide treatment strategies.
RON in Metastasis
RON was initially found to regulate cellular motility in macrophages. RON stimulation by MSP induces macrophage cell spreading and attachment in culture as well as chemotactic migration [5]. In addition to cell proliferation and apoptosis, RON is known to regulate cell adhesion and motility of cancerous epithelial cells, notably through integrin-related attachment to the extracellular matrix (ECM) [64,65].
Epithelial-to-mesenchymal transition (EMT) is an essential step in the process of metastasis but also takes place during kidney fibrosis as part of the progression of chronic kidney disease (CKD). In addition to promoting the expression of RTKs involved in CKD, such as VEGFR, PDGFR, and IGFR, it was shown that RON is able to promote EMT in normal kidneys and leads to the expression of fibrotic markers such as N-cadherin, vimentin, and TGFβ [59]. Several studies have sought to decipher the molecular mechanisms by which RON regulates metastasis. In breast cancer, immunostaining of human primary tumors and paired metastases confirmed the association of RON expression with metastatic deposits [64,65]. In a mouse model, orthotopic implantation of breast cancer MMTV-PyMT-derived cancer cells expressing MSP gives rise to metastatic lesions in the lymph nodes, lungs, spleen, and bones [38]. Interestingly, another study demonstrated that lung metastases are absent when the cells are injected into a TK-/- host [39]. Eyob et al. further demonstrated that host RON activity is essential for the transition from micro- to macro-metastasis and acts through the suppression of an anti-tumor CD8+ response [39]. Cunha et al. described another mechanism for RON regulation of metastasis. This study, conducted on breast cancer xenografts, showed that RON promotes metastasis by upregulating the thymine DNA glycosylase MBD4 [49]. The resulting aberrant DNA methylation profile can be reversed by MBD4 knockdown, which in turn blocks metastasis. Similarly, the RON inhibitor OSI-296 reverses the methylation profile of genes of the RON/MBD4 epigenetic signature and inhibits lung and lymph node metastasis of patient-derived xenografts [66-69]. Another study conducted on breast cancer cell lines indicated that RON signaling through the PI3K/mTORC1 pathway promotes metastasis [67,69]. Alternatively, inhibition of both mTORC1 and RON delays the progression of metastases [66-69]. In the 'TRAMP' transgenic mouse model of prostate cancer, the constitutive abrogation of RON kinase activity (TK-/-) or its abrogation in the epithelial compartment completely abolished lung metastasis [15,48]. In gastric cancer, RON seems to mediate metastatic potential via upregulation of uPAR [70], while truncated protein variants of RON seem to play a role in metastasis as well [14,32-41]. Brain metastasis in patients with solid tumors is associated with a particularly poor prognosis. Two RON mutations located in the tyrosine kinase domain were described in brain metastases from primary lung cancer [71]. A gene polymorphism, previously described in a gastro-esophageal tumor, was found in brain metastases from lung, breast, melanoma, and ovarian primary tumors, indicating that RON may play a role in the dissemination of many cancers to the brain. RON's biological activity in the brain has been reported to modulate regeneration and plasticity by suppressing NO production and acting as a neurotrophic factor [71,72]. In addition, mutations of the MET gene, within the same RTK family as RON, were also found in brain metastases and correlated with resistance to radiation therapy in lung cancer [72]. Better characterizing the role of RON in brain-specific metastasis and its effect on the local immune microenvironment could lead to new treatment opportunities in this patient population. CXCR4-CXCL12 signaling has been shown to play a role in tumor growth and metastasis.
CXCR4 is expressed at the surface of tumor cells and is activated by CXCL12, which is expressed in the stroma or in organs that are preferred sites of metastasis, such as lung or bone marrow. In Ewing's sarcoma, the CXCR4 antagonist plerixafor induced cell migration and proliferation by leading to the activation of several RTKs, including RON [73]. The authors described this unexpected result as a compensatory mechanism to sustain cell survival and migratory capacities [73]. Interestingly, CXCR4 was found to be expressed at a higher level in pancreatic cancer cell lines derived from metastatic lesions compared to cell lines derived from primary tumors [74]. Moreover, CXCL12 stimulation of CXCR4-expressing cells promoted cell proliferation and migratory capacities. Plerixafor treatment of a high CXCL12 expressor inhibited proliferation, but only partially [74]. The authors did not look for RON expression or activation status in the studied cell lines, but we may speculate that dual RON/CXCR4 inhibition might better limit metastatic spread in pancreatic cancer patients. RON is part of an elaborate network of kinases and proteins involved in the metastatic process. RON's role in pathway regulation, tumor cell activity, and disease progression appears to vary based on cancer type. Increasingly, research supports the role of immune cells in regulating metastasis. Notably, the crosstalk between tumor cells and macrophages influences intravasation and immune evasion, which are critical steps in the metastatic process. Given RON's ability to directly and indirectly regulate macrophage function, it is an interesting target to prevent or control metastasis in patients.
RON and Adaptation to Cellular Stress
Nuclear subcellular localization of receptor tyrosine kinases has been previously reported [74-77]. Several of these reports pertain to nuclear localization of the RON kinase and its function in the adaptation to cellular stress. RON can bind to consensus sequences of the genome and behaves as a transcription factor [75]. In 2016, Batth et al. reported RON localization in the nucleus of DU145 and C4-2B prostate cancer cell lines [55]. They showed that RON behaves as a transcriptional regulator of the AR target gene c-FLIP by binding to its promoter region. This regulation allows cells to adapt to the stress generated by androgen deprivation. Dr. Chang's group reported that under serum starvation of bladder cancer cells, RON-EGFR complexes translocate to the nucleus, where they promote expression of specific target genes belonging to stress response networks. Further work from this group led to the discovery that in bladder cancer cells, nuclear RON interacts with Ku70 and DNA-PKcs to activate NHEJ, whereby RON plays a role in hypoxia-induced chemoresistance (Figure 3) [76,77]. The association of RON and DNA-PKcs/Ku70 also occurs when cells growing in hypoxic conditions are treated with doxorubicin or epirubicin. This finding raises the possibility that drugs inducing double-strand breaks may be ineffective in patients with RON overexpression. Similarly, DSB repair was reduced upon EGFR inhibition by gefitinib, erlotinib, or cetuximab [75]. Under hypoxic conditions, activated RON binds to HIF1α and translocates to the nucleus of gastric cancer cells, where it can activate c-JUN transcription directly at the promoter locus. In turn, c-JUN promotes cell proliferation and migration (Figure 3) [78].
Recently, another group has shown that hypoxia also leads to the binding of HIF1α to RON/RONΔ160-β-catenin complexes; this binding increases nuclear translocation and leads to the expression of transcriptional targets of β-catenin, which is a downstream effector of RON (Figure 3). RON and its truncated variant RONΔ160 are both overexpressed in gastric cancer, and RONΔ160 has been shown to promote the growth and migration of gastric cancer cells [78]. The authors showed that binding of HIF1α to RON/β-catenin complexes is essential for gastric cancer cells to adapt to hypoxic conditions and acquire metastatic phenotypes. Similar observations regarding the role of β-catenin in RON-induced tumorigenesis were reported in breast cancer [79]. The subcellular localization of RTKs, including their nuclear localization, should be taken into consideration when looking for treatment options. Indeed, nuclear proteins remain accessible to small molecule inhibitors but cannot be targeted by antibodies or targeting peptides. While RON can be translocated jointly with EGFR, one can speculate that other partner RTKs could be involved in similar mechanisms and may vary by cancer type. Thus far, limited information is available regarding RON subcellular localization in cancer. Further work is required to better characterize RON localization and function in specific cancers where targeting RON is of interest.

Clinical Trials

Pre-clinical studies have demonstrated the therapeutic benefit of RON inhibition using monoclonal antibodies, which impede extracellular MSP binding, or small molecule inhibitors, which competitively inhibit kinase activation. Several researchers have demonstrated that RON inhibition can sensitize tumors and elicit a profound therapeutic response to a secondary agent [18][19][20][80]. Given this, several early phase human clinical trials are evaluating the safety and efficacy of RON inhibition alone and in combination with other treatment drugs across several cancer types. Here, we reviewed trials whereby RON inhibition is specified in the study details (Table 1). A Phase I, open label, multi-center, dose-escalation clinical trial was conducted from May 2010 to November 2013 and evaluated the safety profile, efficacy, pharmacokinetics, and pharmacodynamics of the monoclonal antibody Narnatumab/IMC-RON8/LY3012219 in patients with advanced solid tumors refractory to standard treatment. The antibody targets the ligand binding domain of RON with 8-fold higher affinity than the natural ligand. Thirty-nine patients were treated with escalating IV drug doses from 5 to 15 mg/kg on a weekly basis or 15 to 40 mg/kg on a biweekly schedule. Overall, the drug was well tolerated, with hyperglycemia as the most common grade 3 adverse effect and only one dose limiting toxicity (DLT) consisting of neutropenia. There were no complete or partial responses, with 11 of 39 patients demonstrating stable disease. Twenty-one patients had progressive disease within the first two cycles of therapy. However, it is critical to note that only one patient maintained a drug concentration above 140 µg/mL, the level at which anti-tumor activity occurred in the animal model [81,82]. The program was abandoned due to concerns regarding inactivity against multiple RON isoforms, particularly short-form RON, which lacks the extracellular domains recognized by the antibody.
The Phase I study NCT02207504 focused on the maximum tolerated dose (MTD), associated toxicities, and pharmacokinetic profile of crizotinib, a c-MET/RON small molecule inhibitor, alone and in combination with standard dosing of enzalutamide in castration-resistant prostate cancer patients. This combination was guided by pre-clinical data demonstrating increasing expression of c-MET/RON in multi-regimen disease failure. The crizotinib MTD was found to be 250 mg PO bid when dosed with enzalutamide 160 mg qd. The results were notable for a significant reduction in systemic exposure of crizotinib by 74%, attributed to increased hepatic CYP3A4 clearance by enzalutamide, rendering dosing subtherapeutic [83]. This made the associated side effects and any possible drug benefits difficult to attribute to c-MET/RON inhibition. Five patients had stable disease for 20-36 months, while those previously exposed to enzalutamide had a progression-free survival (PFS) of 2.8 months. It is unlikely that c-MET/RON inhibition contributed to disease modification, given the hepatic clearance rate [83,84]. A Phase II, non-randomized, three parallel-arm cohort study examined the efficacy of crizotinib in advanced urothelial cancers that highly express c-MET, RON, or the combination thereof. It launched in 2016, aiming to measure overall response rate, overall survival, and progression-free survival. The Pfizer-sponsored clinical trial ultimately closed due to poor accrual. No results have been published to date [85]. NCT02745769 was a Phase Ia/Ib, multi-center, non-randomized, open label study evaluating the use of ramucirumab, a VEGF inhibitor, with another c-MET/RON inhibitor, Merestinib/LY2801653, in Stage IV colorectal cancer. The total number of patients was 23. A second treatment arm included abemaciclib, a CDK4/6 inhibitor, as well as an evaluation in mantle cell lymphoma; however, the latter two were dropped. Results included safe dual treatment administration and a tolerable side effect profile, with 43% of patients experiencing grade 3 or higher side effects. Overall, there were no partial or complete responses. Stable disease was observed in 52% of patients, median progression-free survival (mPFS) was 3.3 months, and median overall survival (mOS) was 8.0 months [86]. Merestinib/LY2801653 was also evaluated in NCT01285037, a multi-center, open label, Phase I study aimed at dose recommendation as well as safety and efficacy when used in combination with cetuximab, cisplatin, or gemcitabine. The expansion cohorts looked to evaluate safety and anti-tumor activity against colorectal cancer (CRC), uveal melanoma (UM), head and neck squamous cell carcinoma (HNSCC), cholangiocarcinoma (CCA), and gastric cancer. The treatment groups consisted of merestinib alone, merestinib + cetuximab, merestinib + cisplatin +/- gemcitabine, and merestinib + ramucirumab, respectively. Overall, 186 patients were treated: 59 (32%) achieved a best response of stable disease, 90 (48%) had disease progression, and 1 death occurred secondary to dyspnea as an adverse event. The median PFSs were 1.7 months (HNSCC), 1.8 months (CRC), 1.8 months (UM), and 1.9 months (CCA). Interestingly, only four partial or complete responses (PR or CR) were observed across all cohorts, and all were within the cholangiocarcinoma treatment groups [87]. Three partial responses were observed in the triple therapy cholangiocarcinoma cohort and one complete response in the dual therapy cholangiocarcinoma group [83]. These findings led to a subsequent Phase 2 randomized clinical trial in patients with advanced or metastatic biliary cancer. It consisted of two experimental arms.
The first included the VEGF inhibitor ramucirumab/LY3009806 plus cisplatin and gemcitabine versus placebo plus cisplatin and gemcitabine. In the second arm, merestinib was the designated experimental drug. The aims of the trial were to evaluate overall survival and overall response, as well as the pharmacokinetics of each experimental treatment. The study was completed in February 2018, and while merestinib was well tolerated and did improve overall response rates, it failed to improve overall survival, progression-free survival, or disease control rate as compared to standard-of-care chemotherapy [88]. The study end date has since been extended to December 2022.

Conclusions

The RON receptor tyrosine kinase is highly conserved across animal species and is involved in orchestrating cell signaling pathways influencing oncogenesis, inflammation, and cancer. RON was originally described as a transmembrane receptor localized to tissue-specific macrophages and epithelial cells that, when overexpressed, modifies intrinsic cell signaling pathways, with subsequent changes in tumor cells, immune interactions, and the microenvironment that promote the disease phenotype. While its role in normal biology protects the host by curtailing immune responses, in the tumor microenvironment RON appears to suppress anti-tumor immune responses. Recent work evaluating RON tumor-specific isoforms, RTK crosstalk, and nuclear activities adds complexity to the role of RON in various cancer types. Nonetheless, pre-clinical work in various animal models and cancer types identified RON as an intriguing candidate for drug development to potentially target epithelial cancer cells at the primary site as well as the metastatic niche. Doing so may enable induction of an anti-tumor immune response, which may subsequently be enhanced using combination immunotherapy strategies. Clinical trials targeting RON have been unsuccessful, though very few have been conducted so far. New trials likely await further advances in our understanding of RON's role in specific tumor contexts and perhaps the identification of biomarkers of cancers driven by RON signaling.

Funding: This work was supported by NIH CA155620 (AML).
Colorectal Cancer Screening: Stool DNA and Other Noninvasive Modalities

Colorectal cancer screening dates to the discovery of precancerous adenomatous tissue. Screening modalities and guidelines directed at prevention and early detection have evolved and resulted in a significant decrease in the prevalence and mortality of colorectal cancer via direct visualization or using specific markers. Despite continued efforts and an overall reduction in deaths attributed to colorectal cancer over the last 25 years, colorectal cancer remains one of the most common causes of malignancy-associated deaths. In an attempt to further reduce the prevalence of colorectal cancer and associated deaths, continued improvement in screening quality and adherence remains key. Noninvasive screening modalities are actively being explored. Identification of specific genetic alterations in the adenoma-cancer sequence allows for the study and development of noninvasive screening modalities beyond guaiac-based fecal occult blood testing which target specific alterations or a panel of alterations. The stool DNA test is the first noninvasive screening tool that targets both human hemoglobin and specific genetic alterations. In this review, we discuss stool DNA and other commercially available noninvasive colorectal cancer screening modalities in addition to other targets which previously have been or are currently under study. (Gut Liver 2016;10:204-211)

INTRODUCTION

Colorectal cancer (CRC) is the third most prevalent cancer both worldwide (1.23 million annual cases) and in the United States (132,700 annual cases). 1,2 While CRC mortality in the United States has been falling since 1985, attributed to both uptake of screening and advancements in treatment, an estimated 49,700 people die annually, 3 suggesting the need for continued screening efforts. The basis of CRC screening dates to the discovery of precancerous adenomatous tissue, 4 which led to the understanding that CRC develops through the "adenoma-carcinoma" sequence rather than arising directly from the colorectal mucosa.
5,6 Screening allows for early detection of CRC and removal of precancerous lesions, leading to reductions in cancer incidence and mortality. 7,8 Despite strong evidence for CRC screening, 9 adherence to screening in the United States remains a challenge, as only 65% of the eligible U.S. population is up-to-date with screening, while nearly 28% has never been screened. 10 Challenges in increasing adherence have been attributed to patient and provider preferences, available resources, and healthcare infrastructure. 11 Guidelines from several professional organizations, including the U.S. Preventive Services Task Force, Multi-Society Task Force, American College of Gastroenterology, and the National Comprehensive Cancer Network, provide both invasive and noninvasive options for CRC screening. While the American College of Gastroenterology considers colonoscopy to be preferred, other professional organizations recommend all options without preference. [12][13][14][15] Despite these recommendations and patients' preference for noninvasive screening in several studies, providers are more likely to recommend colonoscopy and may not present other options. [16][17][18] Patients who choose colonoscopy report doing so because of its superior single-application sensitivity, as reflected in statistics on its use, 19 while those who do not choose it report many reasons, including difficulty in scheduling, cost, and missed work time, in addition to concerns about modesty, procedure discomfort, and bowel preparation. 20 Colonoscopy also carries risks, such as bleeding, perforation, and cardiorespiratory complications. Although the risks are low, they are particularly relevant for patients with comorbid conditions 21 and may affect adherence. In an attempt to improve CRC screening and make available additional options without the risks and requirements of colonoscopy, noninvasive screening modalities are actively being explored. In this review, noninvasive CRC screening modalities (Table 1, Fig. 1) will be discussed with a focus on stool DNA (sDNA). We will discuss the evolution of sDNA from proof of concept to its role in the current screening landscape. We will also briefly discuss other noninvasive screening tests, both established and in development.

GUAIAC-BASED FECAL OCCULT BLOOD TEST AND FECAL IMMUNOCHEMICAL TEST

Noninvasive CRC screening in the United States started with annual guaiac-based fecal occult blood testing (gFOBT), which was first recommended by the U.S. Preventive Services Task Force in 1996, 22 based on evidence from population-based randomized trials. [23][24][25] gFOBT works by indirectly identifying hemoglobin through a peroxidase reaction. Annual gFOBT reduces CRC mortality by as much as 33% over a 13-year follow-up and by 32% over a 30-year follow-up. 23,26 Despite its mortality reduction, gFOBT has several limitations. The CRC sensitivity of a single round of gFOBT is reported as 30% to 40%, indicating the need for annual testing, which improves sensitivity to 90% over a 5-year period. 27,28 Although annual testing has a high programmatic sensitivity, some of the sensitivity is due to false positive results from other causes of occult bleeding, which leads to serendipitous detection of neoplastic findings. 29,30 In addition to the high false positive rate, gFOBT has the disadvantages of low sensitivity for advanced adenomas, the need for dietary and medication restrictions, and a requirement for the collection of three consecutive stool samples for testing.
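To see roughly how repeated testing compounds single-round sensitivity, the sketch below (an illustration only, not a calculation from the cited studies) assumes each annual gFOBT round is an independent test with a constant per-round sensitivity; real-world programmatic sensitivity also reflects adherence, interval lesions, and the serendipitous detections noted above.

```python
# Illustrative sketch: cumulative probability that at least one of n annual
# gFOBT rounds is positive in a patient with prevalent neoplasia, assuming
# independent rounds with the same single-round sensitivity (a simplification).
def programmatic_sensitivity(single_round: float, rounds: int) -> float:
    return 1 - (1 - single_round) ** rounds

for s in (0.30, 0.40):
    print(f"single-round {s:.0%} -> {programmatic_sensitivity(s, 5):.0%} over 5 annual rounds")
# single-round 30% -> 83% over 5 annual rounds
# single-round 40% -> 92% over 5 annual rounds
```

Under this simple independence assumption, five rounds at 30% to 40% single-round sensitivity land in the vicinity of the 90% programmatic sensitivity cited above.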
These limitations led to the development of the fecal immunochemical test (FIT). Through the use of globin-specific antibodies, FIT allows improved stool-based detection of human hemoglobin. 31 FIT exists in both qualitative and quantitative formats, with several qualitative tests available in the United States. Given the number of tests and varying cutoff levels for a positive result, interpretation and comparison of these tests are challenging. A recently published meta-analysis showed FIT sensitivity and specificity for CRC to be 71% and 94% among studies in which colonoscopy was the reference standard. 32 In a large study in Taiwan, FIT samples were obtained from 4,045 subjects the day prior to colonoscopy. Colonoscopy-based findings were then compared with the previously obtained FIT results, yielding FIT sensitivities for nonadvanced adenoma, advanced adenoma, and CRC of 10.6%, 28.0%, and 78.6%, respectively. 33 Sensitivity varied based on the location of left-sided versus right-sided lesions, with a decrease in the detection of more proximal lesions, although these findings have not held true in other studies. 33,34 Large-scale, population-based trials comparing annual FIT to colonoscopy are in progress in Spain and the United States. 35,36 Despite improved sensitivity in comparison to gFOBT, 37 the need for only a single sample, and no dietary or medication restrictions, 38,39 FIT has limitations, one of which is the decrease in sample reliability with prolongation of time from collection to analysis, as positive FIT results decline from 8.7% at 1-4 days to 6% at ≥5 days and 4.1% at ≥7 days. 40 Other limitations include poor sensitivity for advanced adenomas 33 and an unclear optimal threshold for hemoglobin detection. 41

STOOL DNA TESTING

To further improve noninvasive screening, other methods have been and continue to be pursued. Development of CRC is associated with a series of progressive, cumulative mutations, including inactivation of the tumor suppressor genes adenomatous polyposis coli (APC) and p53 and activation of the oncogene K-RAS. 42,43 Identification of specific genetic alterations in CRC tumorigenesis and the knowledge that colonocytes are continuously shed set the stage for development of stool DNA (sDNA) as a screening test for CRC. 44 The proof-of-concept study of sDNA for CRC screening by Ahlquist and colleagues involved 22 patients with known CRC, 11 with adenomas ≥1 cm, and 28 with a negative colonoscopy, whose stool samples were analyzed with a panel of 15 point mutations of K-RAS, p53, and APC, plus BAT-26, a microsatellite instability marker. The panel was 91% sensitive for CRC and 82% sensitive for adenomas ≥1 cm, with a specificity of 93%; analyzable DNA was obtained from all samples. 45 Additional studies further supported use of sDNA for CRC screening by showing that all mutations within stool samples were also present in CRC tissue samples, and that specific tumor markers were no longer detectable in stool samples following surgical resection of CRC. 46,47 Two screening population-based studies were conducted to evaluate first-generation sDNA test performance, both of which compared it to Hemoccult II (a gFOBT) and used colonoscopy as the reference standard. In one study, 2,507 average-risk asymptomatic patients aged 50 or older were tested with a first-generation DNA panel, which included 21 point mutations of K-RAS, APC, BAT-26, and a DNA integrity assay.
The sDNA panel had a CRC sensitivity of 52% versus 13% for Hemoccult II; sensitivity for high-grade dysplastic adenomas was 32.5% for sDNA versus 15% for Hemoccult II, while respective CRC specificities were 94.4% and 95.2%. 48 A second study using the same DNA panel in 4,482 average-risk asymptomatic patients aged 50 or older showed a CRC sensitivity of 25% versus 50% for Hemoccult and 75% for the more sensitive gFOBT HemoccultSensa. 49 Due to poor performance of the DNA panel, a second DNA panel, which included APC and K-RAS mutations and methylation of the vimentin gene, was used during the latter part of the study. This second panel showed higher sensitivity for CRC (58%) and, in particular, for adenomas ≥1 cm (46%) when compared with Hemoccult II (10%) and HemoccultSensa (17%). 49 These findings led to sDNA inclusion in screening criteria by some organizations, 12 but the test's relatively low sensitivity and high cost resulted in only rare use in clinical practice. 15 Several advances were made to improve both the marker panel and the analytical methods used to identify mutated DNA, resulting in greater sensitivity for the second-generation sDNA tests. Long DNA degrades in storage, up to 75% in 1 day, 50 indicating the need for human DNA preservation for improved detection. Addition of a stabilizing buffer to samples prevented bacterial degradation of human DNA. 50 Identification of new markers 51,52 and improvements in the analytical process, including automation 53 and the development of advanced stool DNA extraction and mutant DNA detection techniques, 52,54,55 resulted in greater sensitivity, setting the stage for a new sDNA panel. The second-generation panel included four methylated genes (vimentin, NDRG4, BMP3, and TFPI2), a mutant form of K-RAS, the α-actin gene (to serve as a control for specimen quality), and a hemoglobin assay. This panel was tested on archived stool specimens. Target gene sequences were identified directly by hybridization with oligonucleotide probes. These probes were captured with Sera-Mag carboxylate-modified beads, which were then eluted using a magnetic rack. Methylated markers were quantified using the Quantitative Allele-specific Real-time Target and Signal Amplification (QuARTS) assay, a highly sensitive polymerase-based DNA amplification process that utilizes invasive cleavage-based signal amplification. 56 A hemoglobin assay was added, which was not affected by stool storage. This new panel was initially tested on 252 patients with CRC, 133 with adenomas ≥1 cm, and 293 patients with normal (i.e., "negative") colonoscopy results who served as controls. sDNA testing was 85% sensitive for CRC and 54% sensitive for adenomas ≥1 cm, with a specificity of 90%. 52 Anatomical location of CRC and adenoma did not affect the panel's sensitivity. 52 Further, sensitivity increased with adenoma size, as evidenced by sensitivity of 54% for adenomas ≥1 cm, 63% for adenomas >1 cm, 77% for adenomas >2 cm, 86% for adenomas >3 cm, and 92% for adenomas >4 cm. 52 These findings provided impetus to test this new panel in a large screening trial using a similar marker panel and analytical process.
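Because the clinical meaning of a given sensitivity and specificity depends on how common the disease is, the short sketch below (illustrative only; the 0.5% cancer prevalence is an assumed figure for an average-risk screening population, not a number from the study) translates the 85% sensitivity and 90% specificity reported above into approximate predictive values.

```python
# Illustrative sketch: predictive values implied by a test's sensitivity and
# specificity at an assumed disease prevalence (0.5% is an assumption here,
# not a figure from the cited study).
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    tp = sensitivity * prevalence              # true positives per person screened
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)      # (PPV, NPV)

ppv, npv = predictive_values(0.85, 0.90, 0.005)
print(f"PPV ~ {ppv:.1%}, NPV ~ {npv:.2%}")  # PPV ~ 4.1%, NPV ~ 99.92%
```

The point of the sketch is simply that, at screening prevalence, even an accurate test yields mostly false positives among its positive results, which is why the specificity differences between sDNA and FIT reported in the screening trial described next translate directly into additional colonoscopy referrals.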
In the screening trial, 9,899 asymptomatic average-risk individuals aged 50 to 84 years underwent testing with a multitarget sDNA panel and a comparator, commercially available FIT prior to undergoing screening colonoscopy, which served as the reference standard. 57 The multitarget sDNA panel consisted of K-RAS point mutations, aberrantly methylated NDRG4 and BMP3, β-actin as a control indicator of DNA quantity, and a human hemoglobin immunochemical assay. The sDNA panel had a cancer sensitivity of 92.3% in comparison to 73.8% for FIT, and a sensitivity of 42.4% for advanced precancerous polyps, defined as advanced adenomas (adenomas with high-grade dysplasia, with >25% villous histologic features, or measuring ≥1 cm) or sessile serrated polyps ≥1 cm, versus 23.8% for FIT. The multitarget sDNA panel was significantly more sensitive than FIT, although less specific (86.6% for sDNA vs 94.9% for FIT). Subgroup analyses (Table 2) showed that sDNA sensitivity did not vary with CRC stage or location, whereas FIT sensitivity was lower for proximal cancers. In addition, sDNA was more sensitive than FIT for higher-risk advanced precancerous lesions. 57 These favorable results led to approval of this multitarget sDNA panel for CRC screening by the Food and Drug Administration in August 2014 and to its current commercial availability. As expected, additional challenges remain with the uptake of sDNA as a screening test. Although the FDA-recommended test interval is 3 years, there are no direct data from longitudinal studies to support the appropriateness of this interval. Studies are in progress to address this important issue. Appropriate management of persons with a positive sDNA test but a "negative" colonoscopy is uncertain and requires clarification. Acceptance of sDNA by patients and providers in clinical practice is yet to be determined. Additional concerns include the cost-effectiveness of sDNA testing every 3 years versus other tests and strategies. Studies have suggested that while sDNA is more cost-effective than no CRC screening, it is less cost-effective when compared to other screening strategies, including FOBT, FIT, and endoscopic strategies. [58][59][60] However, cost-effectiveness may not be unfavorable if sDNA can capture more of the eligible population and result in improved adherence to CRC screening. 58 Early data show that, as of June 2015, approximately 36,000 patients had been screened with sDNA since it became clinically available (data from Exact Sciences Corp.), 61 36% of whom were screened for CRC for the first time. There have been nearly 80,000 orders placed by 13,800 physicians, with a 73% test completion rate (April to June 2015). 61 Physician and patient selection of sDNA for CRC screening continues to increase as provider and patient education improves and insurance coverage for the test expands.

OTHER NONINVASIVE MARKERS

Other individual markers and marker panels have been explored for their potential use as noninvasive screening tests, including microRNAs (miRNAs), plasma-based DNA, and stool proteins. We discuss these briefly in turn.

MicroRNAs

MicroRNAs (miRNAs) are short, endogenous, noncoding RNAs that regulate gene expression, thereby affecting various processes in tumorigenesis, including angiogenesis and metastasis. 62 There has been great interest in looking at the expression of various miRNAs for detection of colorectal neoplasia. miR-21 is the most studied oncogenic miRNA that is upregulated in colon cancer.
Studies of its test characteristics have been inconsistent, with one study from Japan showing that miR-21 expression was similar in colonocytes from healthy volunteers and from patients with CRC. 63 Another study from 2012, however, showed that stool miR-21 expression was increased in CRC subjects as compared to healthy controls, although expression did not differ between subjects with adenomatous polyps and those without. 64 Plasma levels of another miRNA, miR-92, were higher in subjects with CRC, and the levels were significantly reduced after surgery in 10 CRC subjects. At a cutoff of 240 (relative expression in comparison to RNU6B snRNA), the sensitivity and specificity were 89% and 70%, respectively, in discriminating CRC subjects from controls. 65 Significant issues remain with respect to the optimal miRNA isolation technique, endogenous controls for serum-based miRNAs, and the need to obtain test characteristics from a screening population.

Plasma-based DNA markers

Plasma-based DNA markers, especially genes with aberrant methylation such as the SEPT9 gene, have been evaluated as potential screening targets for CRC and advanced adenomas. A multicenter prospective trial involving nearly 8,000 screening-population subjects showed that the CRC sensitivity and specificity of circulating methylated SEPT9 DNA (mSEPT9) were 48% and 91.5%, respectively, while sensitivity for advanced adenomas was just 11%. 66 In a case-control study comparing the test characteristics of a multimarker sDNA test and mSEPT9, mSEPT9 was found to have significantly lower sensitivity for detection of both CRC (60%) and adenomas >1 cm (14%) as compared with sDNA (87% CRC sensitivity and 82% adenoma sensitivity). 67 A recent case-control study from China evaluating a plasma-based second-generation mSEPT9 assay showed CRC sensitivity and specificity of 74.8% and 87.4%, while advanced adenoma sensitivity was 27.4%. In this study, mSEPT9 showed higher sensitivity than FIT for CRC but not for advanced adenomas. 68 The lower sensitivity of plasma-based tests may be related to the requirement that biomarkers be released into the bloodstream via vascular invasion during tumorigenesis, which likely happens at a later stage than the exfoliation of cells upon which stool-based tests rely. 69 This observation may also explain the lower sensitivity of plasma-based tests for detection of advanced adenomas as compared to CRC, as vascular invasion occurs at a later stage in tumorigenesis. The place of mSEPT9 in the CRC screening landscape is uncertain at this time.

Stool-based proteins

Among stool-based proteins, fecal calprotectin and M2 pyruvate kinase (M2-PK, a cancer-related fecal protein) have been the two most studied fecal protein markers for CRC screening. In the Norwegian Colorectal Cancer Prevention trial involving 2,321 asymptomatic subjects, the performance of calprotectin was inferior to FIT, with lower sensitivity for CRC (67% vs 75%) and high-risk adenoma (25% vs 32%), and lower specificity (76% vs 90%). 70 Studies of fecal M2-PK have shown inconsistent results, 71,72 with CRC sensitivity ranging from 68% 73 to 85% 74 among several studies with a cutoff value of 4 U/mL. Results have varied based on the positive cutoff value used, ranging from a sensitivity of 92.1% and specificity of 29.7% for a cutoff of 1 U/mL to a sensitivity of 11.8% and specificity of 97.3% for a cutoff of 30 U/mL.
71 A recent meta-analysis of eight studies of M2-PK with a cutoff value of 4 U/mL showed a pooled CRC sensitivity, specificity, and accuracy of 79% (confidence interval [CI], 73% to 83%), 80% (CI, 73% to 86%), and 85% (CI, 82% to 88%), respectively, suggesting that this marker may have potential as a screening test. 75 However, further studies are needed in a screening population to accurately quantify its test characteristics.

CONCLUSIONS

The past 15 years have seen improvement in the uptake of CRC screening and reduction in CRC incidence and mortality in the United States. While colonoscopy is currently the dominant screening test in the United States, there is considerable interest in the development of accurate noninvasive screening tests, with notable improvements in stool-based tests in particular. Both FIT and sDNA provide viable noninvasive alternatives to colonoscopy for average-risk persons. Both tests provide several advantages over colonoscopy, including ease of completion, low cost, and low risk. Ongoing research on sDNA will quantify its uptake, adherence, and cost-effectiveness, as well as the appropriateness of the 3-year testing interval. FIT and sDNA should be included as options in discussions of CRC screening between provider and patient, with the expectation of improved adherence to screening. Continued development of noninvasive tests, improved understanding of optimal screening intervals, and greater ability to risk stratify are likely to improve the efficiency of and adherence to CRC screening.

CONFLICTS OF INTEREST

James R. Bailey: Author has nothing to disclose. Ashish Aggarwal: Author has nothing to disclose. Thomas F. Imperiale: Grant support from Exact Sciences Corp. to Indiana University.

AUTHOR CONTRIBUTIONS

James R. Bailey: Literature review, manuscript development, composition. Thomas F. Imperiale: Literature review, manuscript development, critical review.
BOOK REVIEWS

The history of Ouidah is tightly intertwined with that of the Atlantic slave trade. The transformation of African societies as a result of their incorporation into the global market economy of the Atlantic is hotly debated, and for decades has been a field of growing interest among historians. Robin Law's recent book builds on that tradition to examine how Ouidah emerged and grew as Atlantic commerce expanded. Ouidah developed first as part of the polity of Hueda from the 1660s onward, a period that coincided with the beginnings of European trade in the Bight of Benin (also known as the Slave Coast), and later (from 1727) as part of Dahomey, until it was conquered by the French in 1892.

Plainly, since this account makes events and structures interdependent, much hangs on how Sewell conceptualizes structures. He seeks to modify Anthony Giddens's influential notion of the duality of structure, according to which social structures consist in rules and resources that are both the conditions and consequences of actions. 9 Sewell preserves the interrelationship of structure and action but argues that structures must be conceptualized as "mutually reinforcing sets of cultural schemas and material resources". 10 This theory of structure effectively privileges the role of culture in social transformations. Thus, Sewell defines agency as "the capacity to transpose and extend schemas to new contexts. Agency, to put it differently, arises from the actor's control of resources, which means the capacity to reinterpret or mobilize an array of resources in terms of schemas other than those that constitute the array." 11 Notice how control of resources is immediately redefined in terms of symbolic skills of reinterpretation and how resources themselves are said to be constituted by schemas. Consistently with the explanatory priority that Sewell gives to cultural schemas, he argues that social change happens because "[s]ocieties must be conceptualized as sites of a multitude of overlapping and interlocking cultural structures", which allow actors at moments of crisis to reconfigure existing schemas - as, for example, the storming of the Bastille and its aftermath allowed participants to transform prevailing structures by coming up with "the new concept of revolution". 12 How can Sewell square this effective equation of structures with cultural schemas with his concern to find a place for "socioeconomic determinations"? Only with great difficulty, as becomes clear in the long concluding essay, "Refiguring the 'Social' in Social Science: An Interpretivist Manifesto".
Here he seeks to escape a narrowly discursive definition of the social by adopting a broader conception of "semiotic practices" modelled on Wittgenstein's concept of language game (or, more accurately, on a typical misunderstanding of this concept). 13 This then encourages Sewell to subsume "socioeconomic determinations", and indeed "macro relations" more generally, under semiotic practices. Thus "the state, in both its military and civil guises, is a network of semiotic practices whose scope is very wide and whose power is very great. In this respect it resembles the collection of language games we call capitalism." 14 All that is left, after this outburst of culturalist imperialism, of Sewell's initial concern to open out to the traditional structural concerns of social history is an acknowledgement of the legitimacy of quantitative methods because aspects of "the language games of capitalism present themselves phenomenally as a complex of quantitative fluctuations in prices", though we mustn't lose sight of the fact that we are confronting "a reality ultimately made up of complexly articulated semiotic practices". 15 Sewell seems, rightly, a bit uncomfortable with all this, since he concludes with a long riff proposing we replace the "language metaphor" as a way of understanding the social with that of the "built environment", which would encourage us to explore "the reciprocal constitution of semiotic form and material embodiment". 16 But this merely restates one of the disabling presuppositions that prevent him from developing a more robust theory of structure - the opposition of the cultural and the physical that is reflected also in his definition of structure as schemas and resources. (Another such presupposition is Sewell's persistent tendency to associate the study of social structures with the use of quantitative methods, one of the points where he seems to have fallen victim to a certain parochialism, since, as he notes, the great Marxist scholars such as Christopher Hill, Eric Hobsbawm, and Edward Thompson who dominated the development of social history in Britain in the 1960s and 1970s "had none of the [American] new social historians' programmatic enthusiasm for quantitative methods".) 17 In any case, the slide from resisting linguistic reductionism to baptizing everything social a language game is both unnecessary and disabling. Sewell says semiotic practices are "any practices that communicate information by means of some sort of signs and are therefore open to all interpretation". 18 This very abstract definition plainly applies to all human practice: there is no action performed by humans that does not communicate by means of words or other kinds of symbol that of necessity require interpretation. But it doesn't follow that all human practice is to be understood solely in terms of its symbolic aspect. One critical issue here concerns the relations between both isolated acts and practices. Sewell rightly highlights the problem of the articulation of practices but assumes that this must be conceptualized in semiotic terms. But this is a mistake. He says: "A moment's reflection makes it clear that the currency futures market is also a language game".
19 It's true that traders in currency futures have to communicate in order to operate, and that they deal in contracts, that is, in written utterances, ultimately based in complex ways in currencies that are themselves symbols. But this doesn't exhaust the practice of currency futures trading. The interactions of the traders and those between their markets and others, financial and non-financial, unleash chains of consequences that, for example, transfer wealth among different categories of actors and may damage entire economies - think of the great financial crashes of the neo-liberal era. Sewell acknowledges "the problem of unintended consequences of action": it is in this context that he asserts that social reality is "ultimately made up of complexly articulated semiotic practices". 20 But this remains just an assertion: he offers no account of how the relations and mechanisms constituting financial markets might be reduced to "semiotic practices". 21 Reflecting on the relationality of social structures invites us to consider also the problem of power. Sewell defines structures as schemas and resources, but largely ignores the critical question of access to resources. Who gets to play the "language game" of currency futures trading? You have either to be very rich yourself or (more typically) to work for an investment bank, hedge fund, or the like. That means that just about everyone is dealt out of the game. Conceptualizing modes of access to resources requires us to think in relational terms because typically some persons are allowed and others are denied access in ways that are non-contingently connected. The Marxist concept of the relations of production (defined by G.A. Cohen as relations of effective power over productive forces) is a paradigmatic example of how analytically to address this kind of issue, but it is by no means the only one - Michael Mann's theory of power networks is another, praised by Sewell. 22 Sewell himself skates very quickly past the question of power in his first article on structures (1992). In the original version of his somewhat later discussion of historical events (1996) he says, in a footnote: "I would now modify this definition [i.e. of structure in the first article] by specifying modes of power as a constitutive component of structures." 23 Bafflingly, this note has vanished in the version of this essay published in the collection under review. But it is hard to take seriously a theory of social structure that does not thematize the question of power: in this respect Sewell's theory represents a step back compared to Giddens's theory of structuration, which, for all its many faults, focused on power and domination. 24 These criticisms are not intended to deny the value of historical and social-scientific research that is sensitive to the ineliminable role played by language and symbolic representation more broadly in human social life; to do so would be to seek to roll back the great "revolution of language" that is one of the main achievements of twentieth-century Western culture. But recognizing the significance of language and representation doesn't require one to conclude that they exhaust the social. Sewell's desire to remedy cultural history's occlusion of capitalist economic structures without giving up the "cultural turn" is understandable and legitimate, but he needs a much more robust conception of social structure to achieve his goal.
He could do worse than engage with the critical realist theorists - among them Roy Bhaskar, Margaret Archer, and myself - who have sought to develop such a conception. 25 This collection is, in any case, evidence of the fertility of the project Sewell has undertaken, even if it doesn't have all the conceptual resources needed to carry it through.

Thornton accuses the founders of the Enlightenment of much blame. Lacking reliable information about the distant past of their own society, authors such as John Millar, Adam Smith, and Robert Malthus seized on data pouring in from European visitors to non-European worlds as a substitute for genuine historical information. Nevertheless, they launched an enduring and fatally flawed perspective. In this book, Thornton concentrates on subjects from his areas of expertise - demographic and family studies - as he criticizes the application of the traditional-to-modern paradigm. In my opinion, Thornton's reading of the Enlightenment founders is not so much inaccurate as single-minded in its emphasis and vague in its documentation. Perhaps because of his narrow, relentless focus, Thornton does not bother to cite page numbers for his sources. In my view, these writers were not principally concerned with setting forth a historical account of change over time. They were, as they claimed, comparativists, men who were much more precursors of academic disciplines in the social sciences than of history. They sought to develop typologies and laws of society - and the more parsimonious, the better. Thus, Malthus seized on the inherent conflict between the arithmetic path of growth in the food supply and the exponential potential of population increase. Change over time in typologies was more implicit than explicit, as with the stages of societal development: hunting and gathering, pastoral, agricultural, and commercial. Empirical data were cited for heuristic purposes rather than as an analysis of transitions from one stage to another. Second, these writers were generalists and not specialists in any particular field. They discussed a range of topics to illustrate one generalization or another. They certainly did contribute to what Thornton labels the developmental paradigm, which is nearly the same expansive concept as the idea of progress. Recently, Harvard economist Benjamin Friedman has provided a more complex and complete treatment of the sources of belief in the possibilities of progress than does Thornton. 1 According to Thornton, reading history sideways has distorted the historical study of the family in north-western Europe, now recognized by specialists as a unique region in the broad context of world history. Demographically, for example, it is characterized by late marriage, especially for females, with a large proportion never marrying. This nuptiality pattern results from an adherence to a norm of newlyweds forming new households. There are two distinct schools regarding the dating of the "great family transition" in north-western Europe. As noted above, Thornton emphasizes that writers of earlier centuries used evidence from the non-Western world to claim that there had been a great transition from extended to nuclear families somewhere in the far distant past. Certainly, this was implicit in their work, even if they did not focus on the dynamics of a shift from, say, an agricultural to a commercial society. Given the absence of empirical data, nothing could really be said about the precise dating of this great transition.
By the middle of the twentieth century, social scientists had firmly adopted the expectation that industrialization (better formulated as "modern economic growth" by Simon Kuznets), broadly conceived, was, or should have been, the cause of great transitions in the family and in all other areas of society. This definitive placement of the hinge of history in the nineteenth century had several sources. Nineteenth-century social observers were keenly aware of the quickening of the pace of urbanization and industrialization. In the field of economic history, the Industrial Revolution emerged as a key concept in the early twentieth century. After World War II, the need for a "take-off" in economic development in what became known as the Third World was a pressing policy matter for governments as well as academics. However, beginning in the 1960s, historians of the family, particularly Peter Laslett and the Cambridge Group for the History of Population and Social Structure, vigorously refuted the notion that any such transition could be discerned in the empirical record of household structure in England or north-western Europe during industrialization or, for that matter, at any time in post-medieval history. Instead, they emphasized the essential cultural distinctiveness of their region. This was something of a straw man in intellectual history, as analysts such as Locke, Smith, Malthus, and Alfred Marshall had concurred with the Cambridge Group before it even existed. In household studies, it also unfortunately steered attention away from the century - the twentieth - in which the most substantial change in living arrangements in recent centuries actually happened. In this book, Thornton reviews and endorses the essentialist perspective of these revisionist historians of the Cambridge Group. In his brief chapter on European fertility decline, Thornton cannot be as critical of the notion of a fertility transition. It is well documented that birth and death rates in European societies have radically declined since the eighteenth century. Women bearing fewer than two children on average by age fifty really is a very substantial difference from having five or six children. So, too, is the difference between a life expectancy at birth of eighty years and one of forty. The "demographic transition" succeeds better as a factual phenomenon than as a theory. Whether narrowly demographic as a homeostatic model, in which prior mortality decline causes fertility decline, or drawing on the entire list of changes that differentiate modern social history from traditional, the demographic transition fails as a theory, if by that is meant an invariant route from past to present powered by a constant cause or set of causes. The most sweeping assertion of Thornton is that the practice of reading history sideways profoundly influences the thinking of people around the world who are far removed from academia. He lists four propositions that comprise the concept of developmental idealism (p. 136): modern society is good and attainable; the modern family is good and attainable; the modern family is a cause as well as an effect of a modern society; and finally, individuals have the right to be free and equal, with social relations based on consent. These propositions, most notably the last, are all-encompassing.
The first two likely should have lower priority than the belief that "modern economic growth (sustained increases in per capita income) is good and attainable." That is, a higher standard of living matters to people more than their familial preferences. The third proposition - "the modern family is a cause as well as an effect of a modern society" - is the most interesting for the study of the role of the family in history. Overall, Reading History Sideways might have been better left in the shorter form of the PAA address as an inquiry into the sources of the traditional-to-modern framework in family studies. In the twenty-first century, it is hardly novel to criticize modernization and other frameworks of unilinear social evolution. Beginning with anthropologists such as Franz Boas in the early twentieth century, such critiques have been commonplace. Since the 1960s, modernization theory has had more detractors than proponents. Stages-of-societies notions are out of fashion, so another such critique can make only a marginal contribution. Unfortunately, the rejection of the modernization approach has led historians in the United States away from trying to understand change, especially over the long run. Instead, they endeavor to capture and evoke the context of past societies. Since major changes, such as sustained declines in mortality and fertility, have occurred, the study of these transitions should not be neglected. Two questions are of particular interest to me. First, can the variation in paths to the present among societies be systematically conceptualized? In the field of economic history, for example, Alexander Gerschenkron developed the proposition that the more backward the economy, the greater the role major institutions played in industrialization. In England, individuals and family firms created modern economic growth. In Germany, banks played a large role, while in Russia the state played this role. Second, is it possible to distinguish Westernization from modernization? That is, does the fact that nearly the entire population resides in urban areas, with most of the adult population working in jobs that require substantial schooling, and so forth, lead to a "modern" mentality or family values? Or is the content of a modern world view only a result of these developments first taking place in the West and then being imported from that region? In a brief postscript on the use of terms such as "development," Thornton sacrifices such questions because of a desire to avoid concepts that lack scientific rigor and are normatively laden. This is a high price to pay in terms of intellectual content.

These days, books on child labour have a problem with definition. The inclusive view regards all children not in school as child labourers, particularly when they are engaged in household work. The exclusive view focuses on those children who are economically engaged and exploited. Rahikainen takes an exclusive view. Boys and girls helping out on the farm and girls helping their mothers in the household are excluded from the purview of this study, with the argument, which has some validity, that household chores were a feature of daily family life. Rahikainen argues that extending the concept of child labour to all possible forms of work, including domestic work, would dilute the concept and make it analytically blurred.
Child labour has thus been assessed primarily in terms of a labour relationship contributing to economic output, rather than in terms of work that engaged the child for too long at too young an age. In that sense, she deals with only one group of the deprived children of Europe's poor in the past, but this at least enables her to maintain analytical clarity. Rahikainen combines her vast knowledge of the field with a tendency to polemicize, but without pushing her argument too far. This enables her to hold the reader's attention and stimulates reflection. The author is an opinionated historian in the true sense. At the very beginning of her book, she takes issue with the theories of Ariès, which have been widely repeated in studies on the history of childhood and which tend to suggest that child labour was associated with the culture of the poor. That culture, Ariès argued, made no distinction between adults and children, between the public and the private. Child labour was natural. Rahikainen is indignant at such assumptions. The bourgeoisie, she argues, took the children of the poor as a resource to be disciplined and mobilized. Whereas the elite were obsessed with the pedagogy of modern education for their children, the children of the poor continued to work and be exploited by their parents. One quote from the early pages of this book (p. 4) is important in understanding the author's basic position: "Our knowledge of the ideas, attitudes and practices prevailing among the lower orders in past times is sporadic and biased by the perspectives of the middle and upper orders, who produced virtually all of our early modern sources. They were to prove, say, that in ancien régime France the lower classes were almost by definition the last to show signs of affection for children." The author takes exception to such aspersions and argues throughout the book that child labour was a necessity, a coercive choice on the part of the children and their parents; indeed, sometimes, it was the result of pure coercion. There is a problem with this book, and it lies with the first part of the quotation. There is hardly enough material to write a history of child labour in Europe, covering, as this book aims to do, early household manufacturing, factory labour, the multifaceted urban labour market, and the vast use of children in agriculture and related fields such as herding and fishing. However, it should be added that the title of the book is not pretentious. Rahikainen's aim was to collate experiences of child labour over four centuries and across a geographical region larger than the present-day European Union. In a laudable departure from usual practice, child labour in (for example) Russia and Spain receives as much attention as child labour in the British textile mills. But given the patchy nature of the evidence, conclusions remain equivocal. The author is upfront about this and warns that without proper data many questions will remain unanswered. The problem of child labour was a serious one throughout the European pre-industrial and early industrial periods. But how serious? Rahikainen takes a position reminiscent of the argument introduced by Nardinelli. Episodic evidence, particularly when recorded by commissions investigating the worst cases of child labour, has been at the root of the mainstream view that child labour was the weft and warp of early industrialization, in the same way that child labour today is often seen as the backbone of economies in developing countries.
There is quite a lot of evidence in this book that the incidence of child labour was less than often assumed and that employers did not always find it easy to recruit child labourers. In agriculture, farmers who, apart from engaging their own children for specific activities, had to bring in outside children relied mainly on farmed-out parish pauper children. In nineteenth-century manufacturing industry, demand, it is argued, was concentrated in just a dozen industries and "in some places pauper children may have been about the only labour force available" (p. 216). Rahikainen's overview of the history of European child labour shows that the demand for and supply of child labour was not massive, but also that it meant a dreadful existence on the margins of society and on the verge of destitution. Many studies have focused on the supply side. The question has always been posed, and continues to be posed in present-day studies on child labour, of why parents and children choose a life of labour rather than a childhood of education and leisure. The approach in this book is different. Child labour in agriculture is looked at from the supply side, but in the context of industrial child labour Rahikainen is also interested in the demand side and so includes more evidence on the organization of production, changes in technology, and the shifting relationship between labour and capital. The book ends with a brief chapter on the twentieth century. The author would have been wise to omit it. The source material for this period is abundant and diverse, and Rahikainen fails to do it justice. On the basis of her material the author concludes that by the 1970s child labour was a bygone phase in European history but that the 1980s saw a resurgence. In just a few pages, this is explained rather mechanically as a consequence of the flexibilization of the labour market. The author is right to point out that many European children do out-of-school work, but conceptually that work cannot be compared with the type of child labour common in Europe's past.

After the Nazis took power in January 1933, almost half a million Germans were forced to leave their country. Among them were leaders of the Kommunistische Partei Deutschlands (KPD) and the Sozialdemokratische Partei Deutschlands (SPD, or Sopade, as it called itself in exile), as well as activists from small socialist groups, such as the anti-Stalinist KPD Opposition, the Socialist Workers' Party (SAP), and the Neu Beginnen (New Beginnings) group. The refugee activists migrated to Germany's democratic neighbours - mainly Czechoslovakia and France - to continue their struggle against Hitler. The Seventh Comintern Congress in August 1935 called for unity "from above" and thus brought about official approval for a Popular Front against fascism. Before the congress, German communists had aimed for a "Soviet Germany". Now, they decided to ally with all antifascist social forces in and outside Germany in order to establish a Popular Front. They hoped that a non-fascist section of the German bourgeoisie would become a partner in that Popular Front. Such an alliance would unify the various factions of the labour movement inside Germany and in exile abroad. All revolutionary goals were explicitly discarded: the KPD's official aim became a parliamentary democracy. Prominent among those scholars who have recently examined these political developments is Ursula Langkau-Alex, Senior Research Fellow at the International Institute of Social History in Amsterdam.
Since the 1970s she has published many books and essays on the German labour movement in exile. This three-volume work on Popular-Front initiatives within the German labour movement in exile is her magnum opus. The first volume deals with the foundation of the Ausschuss zur Vorbereitung einer deutschen Volksfront (Commission for the Preparation of a German Popular Front); the second discusses the Ausschuss itself. Volume 3 is a collection of documents and includes a bibliography. Few historians outside the small circle of experts in the field probably know that in Germany the term ''Popular Front'' was coined as early as 1932. At that time, the social democrats and other pro-republican forces supported Field Marshal Paul von Hindenburg's re-election as Germany's president against the Nazi candidate, Hitler. In February 1932, Hindenburg's electoral committee adopted a resolution calling for a ''Popular Front'', a broad coalition of non-Nazi and non-communist forces, to support Hindenburg (vol. 1, p. 11). The eighty-five-year-old former military leader was praised as ''the last protective wall against Hitler''. It was only a year later that Reichspräsident Hindenburg appointed Hitler German chancellor. Langkau-Alex shows that it was ad hoc initiatives by both the communists and social democrats that marked the start of joint activities in exile, preceding consideration by the KPD and SPD leadership (vol. 1, p. 87). However, the decisive move towards a Popular Front was taken in France, where in July 1934 French communists and social democrats signed a joint pact aimed at combating fascist attempts to take power. Hitler first tested his expansionist policy in Saarland, which, after World War I, had been put under French control by the League of Nations. The Nazis orchestrated a campaign intended to re-integrate this region into Germany. Communists and social democrats in the territory agreed to wage a joint campaign to support keeping Saarland under the administration of the League of Nations. Against the will of the SPD Vorstand (the party's exiled executive committee) in Prague, the Social Democratic Party under Max Braun went as far as to propose joint action committees to the communists. However, the plebiscite of January 1935 restored Saarland to Germany by an overwhelming majority and provided Hitler with his first foreign-policy triumph. On 20 March 1935, the Central Committee of the KPD sent a letter to the SPD Vorstand in Prague urging a joint appeal to German workers. The SPD executive committee argued that, as long as the communists insisted on ''submission'' to unity, joint action of any kind was out of the question. However, two executive members, Siegfried Aufhäuser and Karl Böchel, demanded that the SPD seek an agreement with the communists -a demand that led to their expulsion from the executive committee. Together with other SPD dissidents, they formed the Revolutionary Socialists, a group that was more open to communist overtures. However, by 1936 the group had begun to disintegrate and most of its members rejoined the SPD. While the KPD in exile maintained its party structure throughout the years of suppression, the SPD split into a multitude of groups, each with different ideas about the conduct of the party's work in exile and about future politics. The party executive felt that its task was to organize and lead a revolutionary movement against Nazism. The leaders of the party, now in exile, had not espoused a revolutionary platform before then.
Now, circumstances forced the SPD leadership to become revolutionaries. They believed that the economic and political difficulties would contribute to a spontaneous mass uprising against the Nazi regime. The SPD leadership (as well as KPD officials) had not yet understood that the majority of the German population supported the Nazis. In contrast to that over-optimistic opinion held by both communists and social democrats, the independent leftist group Neu Beginnen exhibited a more pessimistic attitude, as Langkau-Alex shows. Its leader, Walter Löwenheim, assumed that the resistance movement against Hitler inside Germany was doomed. He proposed to suspend most activities inside the country until such time as conditions were more favourable for an anticipated socialist revolution. Most of the Neu Beginnen members rejected this idea and forced him to retire as leader. While parts of Neu Beginnen sympathized with the idea of a United Front, and even with a Popular Front, the group's new spokesman, Richard Löwenthal, was firmly opposed to the idea if it had to be realized under KPD guidance. He envisaged a communist-dominated front soon falling under Stalin's personal tutelage. The issue of a Popular Front overshadowed all other problems dividing the exiled social democrats. The majority of the SPD executive remained opposed to any official agreement with the KPD and characterized the Popular Front idea as a hypocritical manoeuvre to split the ranks of the social democrats. Most SPD leaders insisted that the two elements -communist and social democratic -of the German labour movement were incompatible. However, in October 1935 the leading social democrat, Victor Schiff, decided to contact Willy Münzenberg, the KPD's master networker in Paris, who was regarded as more open-minded than the average communist party official. The two of them issued a preliminary appeal (reprinted in vol. 3, p. 14) for the political activities of the KPD and the SPD to be coordinated. In February 1936 a Popular Front Committee was set up in Paris under the chairmanship of the writer Heinrich Mann. He had taken part in the 1935 International Congress of Writers, which supported the idea of a Popular Front against fascism. A skilled orator and writer, even in French, Mann was highly effective. Regular meetings of the Popular Front Committee were held at the Hotel Lutetia. The committee included social democrats, communists, representatives of splinter groups, and liberal intellectuals. Among the leading figures who helped Mann were Münzenberg, Rudolf Breitscheid, a leading SPD functionary, and the liberal journalists Leopold Schwarzschild and Georg Bernhard. The committee issued a newsletter, Deutsche Informationen. In the second volume of her trilogy, Langkau-Alex shows how the participants sought to draw up a programme that would unite the opposition against Hitler. The electoral victory of the French Popular Front and the installation of a left-wing government under Léon Blum in May 1936 raised hopes among German refugees. However, while the Left was celebrating its triumph in France, the Spanish Civil War was breaking out. Thousands of communists, social democrats, independent Marxists, and liberals went to the aid of the Spanish Republic, whose very existence was being threatened by fascism.
While backing the communist volunteers in the International Brigades, the Soviet Union also persecuted -through GPU officers in the background -anarchists, Trotskyites, and KPO members behind the front lines of the war against Franco's forces. These events overshadowed and virtually paralysed the work of the Popular Front Committee. Walter Ulbricht, the key figure among Paris's German communists, made senseless demands, defamed individual members of the committee, and launched a witch-hunt against the supposedly ever-present ''Trotskyites''. These practices alienated virtually all the other committee members (vol. 2, pp. 200-205). By the following summer, the committee's activities had run aground. Its newsletter Deutsche Informationen was now published under the auspices of the KPD journalist Bruno Frei. Max Braun and Georg Bernhard published a paper entitled Deutsche Informationen vereinigt mit Deutsche Mitteilungen that criticized the communists and their fellow travellers (vol. 2, p. 425). The break was largely the unwanted result of Ulbricht's work, but even more so of the Stalinist ''purges'' in the Soviet Union and Spain. Despite several attempts to revive the committee, it ceased operations in January 1938. The merits of Langkau-Alex's work are not confined to her detailed narrative of the controversies within the German exile communities. In these three volumes she also analyses these debates within the context of international diplomatic developments, the changing political situation, especially in France, and the relationship between the Communist and the Labour and Socialist Internationals. In short, these three volumes may well become the standard work on the subject for many years to come. Further research should clarify to what extent the underground movement in Germany, but also the Gestapo's apparatus, were able to monitor the work of the Popular Front Committee in Paris.

The daughter of the anarchists Federico Urales (the pseudonym of Juan Montseny) and Soledad Gustavo (Teresa Mañé), Federica soon stood out in the libertarian propaganda media. From the pages of the Revista Blanca, published by her parents, and other publications of the Spanish anarchist movement, she began at a very young age to write short novels in which the protagonists, humiliated and offended in Dostoyevskian style, personified the author's ideological principles and concerns. In time, these stories were joined by articles of opinion in Redención, Solidaridad Obrera, and El luchador. From these platforms she took part, with an incisive and often virulent style, in the struggles that during the Second Republic became radicalized within the libertarian movement between the more unionist sectors, which supported the growth and strengthening of the National Confederation of Labour (CNT), and the more anarchist sectors, which defended the utopia of libertarian communism. Her support was for the latter; her attacks, against Ángel Pestaña and other ''reformists''. The popular reaction against the attempted coup d'état by General Sanjurjo strengthened her faith in the spontaneity of the people, a category that she chose as the revolutionary subject above the peasants or the working-class masses. Closing the Republican period, she defended, in the Congress that the CNT held in May 1936 in Zaragoza, the tougher, more orthodox, positions, proposing for approval that they should overcome the obstacles of the union's classical decision-making procedures (Lozano, pp. 165-166).
These same procedures, which in ''normalized'' periods were never perfect, came under even greater strain with the onset of the Civil War. Federica, a recognized public speaker and propagandist, obtained increasingly high positions on the committees until she was appointed as a minister, together with three companions. The few months she spent in the position were a compendium of contradictions: from the defence of classical anarchist isolationism she changed to moderation and unity against fascism; the call to revolution became a message to call off the struggle against internal reaction in May 1937; a woman always against possibilism let herself be carried along by the circumstances, without being able to take charge. With her departure from the Republican government and the defeat in the Civil War, she took refuge in the press bodies of the anarchist exile, leaving the position on the committees to her companion, Germinal Esgleas. Both watched over the legality of the CNT in France, the organization which gave their existence meaning. This immobilism clashed with the conspiratorial work that young people proposed in the 1960s as a tactic to attack the Franco regime. For more than three decades, Montseny and Esgleas held important positions in the organization, either in the press, or on committees or secretariats. Their long terms of office were unprecedented in libertarian circles, as was the fact that they were paid. Irene Lozano writes the complex biography of this controversial figure in a fictionalized style, which provides the text with fluency and agility, but which conditions a specific use of the sources. For example, she reproduces rather too freely the romantic images found in memoirs. Partial reconstructions exaggerate the skills of the character in what is also an exaggerated context, with pictures such as that of a vibrant Barcelona where in 1917 numerous ''assemblies, meetings, gatherings and rallies'' were held (p. 52), or that of committees which decided on the political fate of the country, controlling the electoral abstention or participation of their members (p. 158). 1 This use is more restrained after the initial chapters, giving the study greater soundness in criticizing or correcting these exaggerations. Without leaving the sources, the incorporation of recent research which has expanded on some of the texts used by Lozano on the Second Republic, the internal repression against the CNT on the Republican side, and the development of the Civil War itself, would have been welcome. These failings are made good by the use of interviews of around forty people and an exhaustive trawl through the archives in Spain, accompanied by access to two important unpublished collections: that of Helenio Molina and, above all, that of Juan Sans Sicart, deputy of the Esgleas-Montseny couple in exile. This is an extensive, interesting, and to a large extent unknown period that Irene Lozano skilfully reveals to us. Exile, however, takes up much less space in Susana Tavera's study. This brevity comes at the cost of precision, for example, in the declarations on the militants who remained in the country (p. 266) or in the description of the disavowals prior to the split of the confederation in 1945 (p. 269). Apart from these passages, Tavera's biography is that of a historian with a broad knowledge of the context of Primo de Rivera and the Republic, and includes a very wise use of local history studies to construct genealogies of situations and people.
She thus describes the life of the parents and other heroes of Federica, links the creation of the Revista Blanca to similar enterprises of the time (pp. 54-63) and studies in depth the militant infra-litterature and anarchist publicism (ch. 2). She also skilfully dissects the complexity of the anarchist ''affinity groups'' (pp. 117 and ff.) in which Federica will move throughout her life, from the family phalanstery of the early years to the continuance of these closed circles in exile. In addition to the difference of perspective, questions of style and editing separate these two books. Lozano's style is easier to read but conceals the visibility of the author and of the problems encountered on reconstructing the biography. In Tavera's text, denser but also more meticulous, the author appears frequently to acknowledge the insufficiency of the data and the absence of certainty when required. As for the editing, paradoxically, the sources used by Tavera are more difficult to follow and to verify than in Lozano's work. The notes are at the end of the book, the format of the bibliography is confusing and, moreover, it lacks a clear and adequate index. Finally, a few words on perspectives. Lozano's work is more essentialist, as she focuses on the subject, highlighting her exceptional nature in relation to the context. Tavera describes this context with more nuances and details thanks, in part, to the use of the type of genre. This perspective increases our knowledge of the historical and social complexity of a figure who confronted a patriarchal political and militant structure, with well-defined roles and limits. However, the ''woman'' variable occasionally monopolizes the explanation of some actions in which other factors intervene, such as power struggles within the movement or personal ambition. Two examples are her time in the ministry and her relations with the Mujeres Libres (Free Women) anarchist organization. In the former, according to her biography, gender arguments won against her ideological resistance and led her to accept a position that she did not want as a militant. In the latter, her militant loyalty led her to be absent from the media and actions of Mujeres Libres, because this organization was competing with her union and endangered the unity of the anarchist movement. In both situations Federica's ambitions to go down in history together with her childhood heroes (Francisco Ferrer y Guardia, Teresa Claramunt, her own father) are forgotten. An intense desire for historical importance which broadens the possibilities on interpreting the above-mentioned situations: her position as the first female minister would give her a place in history; Mujeres Libres was competing with the CNT, but also with her in her role as a militant woman. The books by Tavera and Lozano are excellent descriptive works and are the richest and most exhaustive biographies of Federica Montseny to date. Our knowledge of this figure can continue to go deeper into her most personal conflicts and experiences, exploring her inner decision-making processes and her struggles for recognition, both on the public and on the private plane: how she handles her condition as a woman and how her intervention modifies the political network, but also how this intervention creates problems in the private circle (affinity groups and family roles). This work could begin by examining the private correspondence with close friends and companions, a good part of which is still in unpublished collections. 
This task is always difficult, and in the case of Federica Montseny may be impossible, for she was a woman who was sure that she was going to go down in history and that history would judge her. It is not too much to suppose that Federica jealously guarded her most intimate thoughts, only letting the social figure, the militant Federica, speak.

The history of Ouidah is tightly intertwined with that of the Atlantic slave trade. The transformation of African societies as a result of their incorporation into the global market economy of the Atlantic is hotly debated, and for decades has been a field of growing interest among historians. Robin Law's recent book builds on that tradition to examine how Ouidah emerged and grew as Atlantic commerce expanded. Ouidah developed first as part of the polity of Hueda from the 1660s onward, a period that coincided with the beginnings of European trade in the Bight of Benin (also known as the Slave Coast), and later (from 1727) as part of Dahomey, until it was conquered by the French in 1892. According to Law, Ouidah was an important nerve centre of the slave trade, surpassed only by Luanda in terms of slave exports. In addition to the considerable human toll of the slave trade, Ouidah's contributions to Atlantic history are immense, and include a place name in Jamaica, a bird genus, and the worship of its principal deity, the goddess Ezili, in Haiti. Commemorations of Ouidah's role in Atlantic history take place on several continents and are multiform, ranging from museum exhibitions and historical novels to monuments and television programmes. Law focuses on Ouidah's merchant community and the rise and decline of this urban settlement. He guides us through a series of changes in the organization and operation of Atlantic commerce, the impact and magnitude of which can be appreciated locally. The classic historical landmarks of these global processes include the illegitimate slave trade, legitimate trade (marked by the development of palm oil and nuts), and the imposition of colonial government. Local manifestations and responses to all these processes highlight variability and offer possibilities of comparison with other settings on the West African coast. According to Law, most studies of West African coastal communities tend to target whole states or city-states and offer only a general and diffuse perspective that does not permit one to unravel the development and functioning of urban life (p. 5). Ouidah's experience in the Atlantic system shows similarities but also fundamental differences from other settings in coastal West Africa. While the concept of a ''middle-men'' community applied to Douala by Austen and Derrick works for Ouidah, Law opines that concepts such as ''enclave-entrepôt'' used for Elmina by Feinberg, or neutral ''port of trade'' by Karl Polanyi for Ouidah, are not supported by empirical evidence in his case study (p. 6). Like other middle-men communities in West Africa, Ouidah organized overseas trade, mediated between its port and hinterland, and consequently experienced demographic growth. But it was also subject to major political and social changes, and a difficult transition from slave exports to agricultural exports. Unlike many other coastal settlements involved in the Atlantic system, the European presence in Ouidah was successfully controlled and managed, first by Hueda, and later by Dahomey.
Part of this strategy involved confinement of European settlements, free trade, and the prohibition of conflicts between European nations (p. 123). This allowed Dahomey to play off the Europeans against each other and to limit their ability to interfere in local affairs. 1 In addition to these measures, the Dahomian monarchy appointed powerful state officials and established a military garrison in Ouidah. Dahomey owed much of its success in slave-trade operations to its martial values as well as its sophisticated administration. 2 Law convincingly demonstrates that Ouidah's merchant community was able, despite state control, to accumulate a tremendous amount of wealth out of the slave and palm-oil trades. Ouidah grew rapidly as an urban settlement, succeeding Savi, the former capital of the Hueda kingdom, after 1727. The town entered a period of decline following the imposition of French colonial rule, which diverted the trade eastward toward Cotonou and Porto-Novo. The urban and economic history of Ouidah was profoundly marked by the changing patterns in Atlantic commerce. Ouidah's commercial success made it a strategic site coveted by Dahomey, whose power base was located 100 kilometres inland but which was in dire need of coastal outlets to exchange its slaves for European luxuries. Law's book is a critical examination of the complex and uneasy relations between the Dahomian monarchy, a growing group of private merchants in Ouidah, and European representatives. Although it included local traders, Ouidah merchant's community was reinforced in the nineteenth century by the arrival of Brazilian and Afro-Brazilian immigrants, including former slaves who returned to play a prominent role in the operation of the Atlantic system as businessmen, but also as vectors of social and cultural novelties. The book is organized in eight chronological chapters, which retrace the historical trajectory and patterns of trade in Ouidah from the mid-seventeenth century to the French conquest in 1892. Chapter 1 explores the history of pre-European settlement, which is related only in contradictory oral accounts. Ouidah is located four kilometres inland at some distance from a complex of lagoons, and it was neither a ''lagoon-side'' nor an ''Atlantic port''. Initially, Ouidah developed as a farming, fishing, and salt production settlement, whose growth paralleled the expansion of European commerce in the Bight of Benin. European cultural influences were limited as their settlements were segregated from the African quarters; they were also under the gaze of Dahomey's military garrison and the powerful state official in charge of relations with the Europeans, the Yovogan. In the next three chapters, on the Dahomian conquest, Dahomian Ouidah, and the operation of the Atlantic slave trade, Law examines the imposition of Dahomian rule over Ouidah. The unsuccessful resistance of Hueda and Popo was followed by the consolidation of Dahomian administration, whose martial ethos was counterbalanced by the community of private merchants. From a mere storage and transit site for slaves and goods under Hueda, Ouidah grew to become an important commercial and urban centre with a heterogeneous population, expanding its political influence over its neighbours. Law explores the changing patterns of trade, including the adoption of the ounce, notes of credit, the arrival of canoe-men from the Gold Coast, and the expansion of business transactions in private residences. 
He also provides useful insights on the treatment of slaves, the local memories of the slave trade, and how the small but wealthy class of merchants made a fortune from it. Chapters 5 and 6 deal with the local impact of major global changes in the Atlantic economy, including the banning of the slave trade, and the development of the palm-oil trade. This period was marked by the abandonment of European forts, the banning of the slave trade, and the arrival of unofficial agents, including the Brazilian Francisco Felix de Souza, who became a prominent figure in Ouidah's history. A transatlantic community, including Brazilians, Afro-Brazilians, and returned African ex-slaves, developed in Ouidah at this time (p. 187). This community was distinct due to its Christian religion as well as dress styles, craft skills, architecture, and food habits. By the mid-nineteenth century the palm trade was expanding, supplementing the slave trade but also posing new challenges to the Ouidah merchant community as conflicts with the Dahomian monarchy became endemic. This period was marked by the destruction of de Souza power, the emergence of a new generation of traders, the depreciation in the value of cowry shells, increased use of cash in business transactions, the extension of cultivation lands around Ouidah, greater demand for slave labour in the local economy, and accentuated European interventions in Dahomian affairs. The last two chapters, ''Dissension and Decline'', and ''From Dahomian to French Rule'', explore the decline of Ouidah both as a trading centre and as a town in the second half of the nineteenth century. This book is an excellent case study on the formation, growth, and decline of Ouidah's Atlantic merchant community, and of the town itself. However, its title is misleading, as it announces a social history of Ouidah's populations while actually dealing only with the history of its merchant community. This community had wealth, and wealth conveys power. Traditionally marginalized groups, including slaves, are omitted. Although Law draws on oral traditions, European written sources constitute his primary evidence. Yet, these not only dictate perspectives, they also favour European agency, too often charged with Eurocentrism and racism, which inheres in debates on the Atlantic impact, the slave trade, and memories of it in the present -certain aspects of which Law seems ill at ease with (pp. 12-13). The scope of this work is also limited by the lack of consideration of African cultural change as a result of interaction with Europeans, as well as by the dearth of analysis of the internal social dynamics between the different identities and social forces within Ouidah itself. This makes us believe that the European cultural impact on Dahomey's Hueda and Fon cultures was, unlike Elmina, rather limited. 3 Overall, this is a very informative book. It provides a detailed analysis of the formation and growth of Ouidah's trader community and its relations with Dahomey in the era of Atlantic commerce. It also represents a major blow to Karl Polanyi's thesis and an important nuance for perspectives that too often overgeneralize the nature, impact, and consequences of Atlantic commerce in Western Africa. According to the publisher's description, Guns and Guerilla Girls is the first ''comprehensive analysis of the role of specifically women as guerilla fighters'' in the Zimbabwean liberation war of the mid-1960s through 1980. This is not correct. 
In 2000, a Zimbabwean historian, Josephine Nhongo-Simbanegavi, published a ground-breaking survey of the disparities between the documentary record and the nationalist rhetoric on the treatment of women guerillas. 1 Nhongo-Simbanegavi's book only appears as an entry in the bibliography of Guerilla Girls; one wishes that the two studies could have dovetailed to a greater extent. Presumably the fact that they do not is simply due to the difficulties and delays of publishing. Guns and Guerilla Girls was written by an Australian Africanist, and is based on a year of fieldwork conducted in Zimbabwe in 1996-1997. The obvious issues of the author's position as a white woman and as a foreigner are exhaustively dealt with in the first chapter. Lyons's stance in dealing with these difficult issues is to insist that if her book projects the voices of the women fighters themselves in an era of social silence about their plight, then the privileging of her own authorial voice becomes a moot point. This strikes me as incongruous for two reasons. First, Zimbabwean historiography has been relatively free of tensions between expatriate and indigenous historians -generally there has been friendly and supportive collaboration in many areas of feminist and nationalist history production. Secondly, despite the claims about the exceptionalism of this book, in the end its primary research material is treated quite straightforwardly. The terrain of legitimacy and perspective of oral history projects in women's history has been quite well-mapped, for Zimbabwe and elsewhere, so Lyons's earnest analysis of this matter in her own work seems overdone. One of the strengths of this book is that it emanates from outside the specialist community ranged along the USA-Oxford University-University of Zimbabwe historiographical axis. As such it brings a new voice to a generally competent survey of the intersections of women's history and military violence in Zimbabwe. As such a survey, however, with two exceptions, the book has little to offer to the specialist. A reader seeking an in-depth monograph on gender and the Zimbabwean struggle will not find it here. This is exemplified by the way that Guerilla Girls treats the word gender as if it were synonymous with women. One can certainly legitimately write a history of women and armed struggle but it is important not to regard this as a fully gendered history. For example, on p. 93, Lyons writes, ''It was during the armed struggle that the traditional gender roles between men and women became increasingly blurred. When whole communities are involved in a war, when they are submerged in the depths of turmoil and crisis, often without choice, there is seldom time for gendered distinctions to be made.'' An assertion like this contradicts much of the contemporary understanding of gender and violence: it is surely at such times that gendered distinctions are not only made, but forged; and in fact can become the raw fuel, if not the raison d'être, of war itself. Guerilla Girls is divided into four parts. The first, ''Feminism, Nationalism and the Struggle for Independence'', reviews the literature pertaining to Lyons's research methodology and the various rhetorical stances that have been fashioned in different political eras either to explain or explain away women's anti-colonial militance. The second section, ''A Woman's Place is in the Struggle'', rehearses the by now familiar tales of the involvement of African women in anti-colonial violence from the 1890s to the 1980s.
A welcome aspect of this section discusses the work of white women in the Rhodesian war effort. Their roles are compared and contrasted with those of black Zimbabwean women. This section, then, does begin to sketch the preliminary outlines of a racially gendered portrait of the war. This is an important contribution to the as-yet fledgling historiography of race, gender, and violence in southern Africa. Halfway through the book, the third section takes up the subject of the book's title: the tale of ''guerilla girls'' in the 1967-1980 war. Like Nhongo-Simbanegavi, Lyons points out that the 1970s rhetoric of Zimbabwe's ''new fighting women'' was produced for the frontline states and other international patrons but was only rarely matched by any reality of equality between fighting men and women in the guerilla armies. In fact, women inside the country and behind the lines continued to bear the paired burdens of unacknowledged labour and the dangers of being stereotyped as either guerilla supporters or Rhodesian collaborators. The fourth section offers an interpretation of the way the histories of fighting and supporting women in the liberation struggle have been represented in the Zimbabwean media and popular culture. The chapter gives a detailed account of the production and local reception of the feature film, Flame, in 1996. Lyons's account, based on her first-hand observations of the film's trajectory during her fieldwork year, will interest historians of African film and of popular culture. The defining moment of Flame is the depiction of the rape of a Zimbabwean woman guerilla by a male army colleague. Breaking the social silence on this issue has ensured the film's notoriety and staying power, as it was the first time that the projection of rhetorical certainty of the probity of the guerilla men was publicly questioned. Lyons provides interesting details on how local controversies around the threatened censorship of the film developed, and of audience reactions to it when shown in Zimbabwe. This section will interest specialists in Southern African film and media studies and of the complexities of public/popular representations of women's rights and violence. Telling the story of Flame also enables Lyons to present other contemporary articulations on the mistreatment of women guerillas at the hands of their male colleagues. This sets the scene for the threatened confrontation in 1997 between senior women in the former army of the ruling party and their struggle comrades over allegations of rape, followed by threats of naming and shaming, and of demands for compensation by the women. The logical market niche for this book is first-or second-year university courses where students are being introduced to some of the empirical studies and debates of recent Zimbabwean historiography. On the whole, Guerilla Girls is a reasonable choice for pairing with others in undergraduate-level study of the Zimbabwean liberation struggle. Brazilian labor laws certainly pose a number of perplexing questions. How, for example, has a corporatist system of evident fascist inspiration, implanted between 1931 and 1943, persisted into the twenty-first century? After all, most such arrangements elsewhere did not long survive the fall of the dictatorships that implanted them, but the Brazilian version has passed through two ''democratic transitions'' and a wide variety of political regimes with its essential features still intact. 
John French's question is similar: ''What is it about the Brazilian labor law system that simultaneously produce [sic] deep bitterness and cynicism on the part of working-class labor activists as well as an unprecedented hopefulness and utopian militancy?''. This is the first book-length treatment of the history of Brazilian labor law in English and presents a number of important contributions. French's central argument, which accompanies some recent Brazilian scholarship, is that labor law became an important field for struggle as workers sought to oblige the state to enforce its own laws in the face of political hostility and employer intransigence. 1 Or, as French nicely puts it: ''In the end, the labor laws became 'real' in Brazilian workplaces only to the extent that workers struggled to make the law as imaginary ideal into a practical future reality.'' The origins of these measures remain controversial. French thinks that they derive from a variety of sources, and it must be admitted that those of us who see their inspiration as fascist, with few if any qualifications, have yet to locate a smoking gun that would convince skeptics. Nevertheless, all the key elements of the Brazilian trade-union measures appeared initially in the Italian law of 1926, ''On the Juridical Disciplining of Collective Labor Relations''. More interestingly, as Zeev Sternhell and others have shown, important elements of fascist doctrine originated with the political Left. Many of those who drafted the original legislation in Brazil had been active socialists before joining the Vargas government, and French regards this as an argument against any purely fascist inspiration for the Brazilian laws. On the other hand, many interwar socialists defended such corporatist solutions as a way of using state power to control the chaos of the capitalist market and the waste caused by the class struggle. In fact, we know relatively little about the internal disputes behind the Vargas regime's labor policies. 2 Certainly the government encountered difficulties in imposing its system of state-controlled unions, and French's affirmation that factory workers supported the system probably needs qualification. 3 There were some reverses, delays, and changes of direction, but by 1943 the regime codified its various measures in the monumental Consolidação das Leis do Trabalho (CLT), whose essential features remain in force to the present. Drowning in Laws focuses primarily on the labor court system, designed to reduce both collective and individual disputes to judicial decision-making procedures. French, citing the judgments of a number of militants, labor leaders, and other observers, reaches very critical conclusions regarding the operation of the labor courts. The Vargas project, however, involved a wide variety of social welfare programs as well. The official unions administered some of these services, although public health measures as well as retirement and survivor pensions operated through other parts of the state apparatus. While these programs undoubtedly suffered from numerous defects, they brought major concrete and symbolic benefits for the first time to a substantial part of the Brazilian population. French treats briefly and skeptically some of these social programs. He is particularly critical of the operations of the Pension and Retirement Institute for Industrial Employees (IAPI).
However, this Institute may not have been as badly administered as French and his sources claim, and its public housing plan, while including some sinister aspirations for control over workers, provided real improvements for those included in the program. 4 In any case, it is hard to understand the wide appeal and political longevity of the Vargas tradition without detailed attention to the social welfare measures of the regime. Even the labor court system may have functioned somewhat more effectively than its numerous critics (French included) are inclined to recognize. A study of the city of Juiz de Fora shows that workers secured judgments against abuses by major textile firms even during World War II, when the government had suspended much of the CLT in order to increase production. 5 Research carried out in Rio de Janeiro in the mid-1990s discovered that the public had more confidence in the labor courts (6.47 on a scale of 1 to 10) than in the regular judiciary system (5.0). Moreover, among those who had direct experience of the labor courts, the score rose slightly (6.71), while confidence fell in the case of those who had used the regular courts (4.46). Nor did those interviewed regard the system as wholly biased. Of the 1,551 respondents, 39.7 per cent thought the labor courts treated employees more rigorously, but 27.6 per cent replied that employers were treated more rigorously and 26.5 per cent considered that the two were dealt with equally. Perhaps not surprisingly, 55.6 per cent of the employers surveyed thought that the labor courts treated employers more rigorously than employees. While almost half the respondents criticized the labor justice system as slow, 28.8 per cent cited as positive the fact that ''common people have great chances of winning their cases''. (The comparable statistic for the regular courts was 15.5 per cent.) 6 While hardly an unqualified endorsement, such results suggest that, despite all its notorious shortcomings, the labor court system has enjoyed some real legitimacy in the eyes of those directly affected by it. One of the many virtues of Drowning in Laws is that it documents the close relationship between the labor laws and police repression. The problem is to understand why the Brazilian state has directed such intense violence against what has been, for most of its history, a relatively fragile labor movement. 7 French attributes much of the repression to the persistence of attitudes and practices developed during slavery, abolished only in 1888. While political culture probably has much to do with extensive repression, it is far from clear that the level of violence in Brazil exceeded that of countries in Latin America and elsewhere that had little or no experience with slavery. Since capitalist industrialization has proceeded historically under a variety of legal arrangements, the question arises as to what difference the specific features of Brazilian labor law have made for the country's political and economic development. Drowning in Laws provides some elements for an answer. Clearly, by helping to ensure a relatively tractable trade-union movement, the measures provided an inestimable service for employers, though one with complex ramifications. In political terms, social and labor rights have depended less than in many countries on open struggles or on legislative victories led by political parties.
Since the Vargas period, much of the Brazilian population has come to regard such rights as central to their notion of citizenship and as an obligation of the State. As Angela Castro Gomes notes, this has not necessarily contributed to the advancement of democracy in Brazil. 8 Industrialists probably benefited from the way the CLT reduced competition among firms by standardizing such matters as working hours, child labor, vacations, and factory conditions. Whether or not the law aided industrialists by explicitly formulating terms for labor relations or by providing significant predictability is difficult to say. In any case, the CLT seems clearly Fordist in its intentions and may have helped the formation of a market for consumer goods. Perhaps surprisingly, the standard of living of the typical working-class household in São Paulo appears to have improved between the mid-1930s and the mid-1970s. 9 In addition, while the law has hardly fulfilled the aspirations of its founders for class harmony, it may have reduced the impact of strike action. Strikes remained illegal for long periods, and even when not formally banned, their incidence was relatively low, since the labor courts handled most disputes, although there have been some periods of considerable strike activity (1945-1946 and 1978-1980 in particular). The labor laws have few open defenders these days. Even so, recent governments, while highly critical, have been unable or unwilling to modify the laws significantly. Many trade unionists and employers, despite their criticisms, remain ambivalent about the abolition of the labor laws since the political and economic consequences seem difficult to predict. The laws provide important guarantees, financial and otherwise, for the unions, at the same time that they restrict their autonomy. Many employers are similarly unenthusiastic about the risks and uncertainties of direct collective negotiations. It seems quite possible that Brazilian workers will continue drowning in laws for some time to come.
Synthetic biology, security and governance

The twenty-first century has witnessed an increasing confluence of rapidly advancing science and its embodiment in practical technologies, an extensive global diffusion of the knowledge and capabilities associated with those developments, and a seemingly unending shift in the international security environment. The scope and intensity of these interactions in the life sciences have generated concern about security risks stemming from possible misuse. This lecture focuses on one of the key emerging life science technologies of concern, gene synthesis, and considers how the new risks and challenges it poses for governance can best be managed.

Introduction

The regulation of unconventional weapons -chemical, biological, radiological and nuclear -has traditionally taken an 'artefact-centric' approach by seeking to control the materials, methods and products involved in misuse. 1 This approach is, however, particularly ill-suited to the life sciences, where the technologies are less about hardware, equipment and tools, and more about people, processes and know-how. Dual-use life science technologies are increasingly diffuse, globalized and multidisciplinary and are often based on intangible information rather than on specialized materials and equipment. This changes the definition of the problem from a material- and equipment-based threat that can be eliminated to a knowledge-based risk that must be managed. If what people know is more important than what people have, then the crucial factor becomes the choice that people will make about how they use the knowledge they have, and this changes fundamentally the kinds of measures to which policymakers must give attention (Moodie, 2012). This lecture considers the emerging field of synthetic biology and focuses on one of its key enabling technologies: the ability to synthesize strands of DNA from off-the-shelf chemicals and assemble them into genes and microbial genomes. When combined with improved capabilities for the design and assembly of genetic circuits that perform specific tasks, synthetic genomics has the potential for revolutionary advances. At the same time, it could permit the recreation of dangerous viruses from scratch, as well as genetic modifications designed to enhance the virulence and military utility of pathogens. The potential misuse of gene synthesis to recreate deadly viruses for biological warfare or terrorism would require the integration of three processes: the automated synthesis of DNA segments, the assembly of those segments into a viral genome, and the production and weaponization of the synthetic virus. Each of these steps differs with respect to the maturity of the technologies involved, the ease with which it could be performed by non-experts and the associated threat. Even with access to synthetic DNA, assembling the DNA segments into a synthetic virus and converting the virus into a deliverable weapon would pose significant technical hurdles. This lecture reviews the security concerns related to DNA synthesis technology and advances a holistic governance approach, encompassing hard law, soft law and informal law, to limit the risk of misuse.

Overview of the Technology

The synthesis of viral genomes

DNA molecules consist of four fundamental building blocks: the nucleotide bases adenine (A), thymine (T), guanine (G) and cytosine (C), which can be linked together in any sequence to form a linear chain that encodes genetic information.
A DNA molecule may consist of a single strand of nucleotide bases along a sugar backbone or two mirror-image strands that pair up to form a double helix, with adenine (A) always complementary to thymine (T) and guanine (G) complementary to cytosine (C). A second type of nucleic acid called RNA differs from DNA in the structure of its sugar backbone and the fact that one of the four nucleotide bases is uracil (U), which replaces thymine as the complementary base for adenine. An infectious virus consists of a long strand of single-stranded or double-stranded DNA or RNA, encased in a protein shell. There are at least three ways to synthesize a viral genome de novo (from scratch). The first and most straightforward approach is to order the entire viral genome from a commercial gene-synthesis company by entering the DNA sequence on the company's Website. (A leading commercial supplier, Blue Heron Biotechnology in Bothell, WA, has synthesized DNA molecules up to 52 000 base pairs long.) The genomic sequence would be synthesized in a specialized facility using proprietary technology that is not available for purchase, packaged in a living bacterial cell and shipped back to the customer. The second option would be to order oligonucleotides (single-stranded DNA molecules fewer than 100 nucleotides in length) from one or more providers and then stitch them together in the correct order to create an entire viral genome. The advantage of this approach is that one can obtain more accurate DNA sequences, avoid purchasing expensive equipment and outsource the necessary technical expertise. The third option would be to synthesize oligonucleotides with a standard desktop DNA synthesizer and then assemble the short fragments into a genome. This approach would require acquiring a DNA synthesizer (purchased or custom-built) and a relatively small set of chemicals. Although the chemical synthesis of oligonucleotides up to 120 base pairs is now routine, accurately synthesizing DNA sequences greater than 180 base pairs remains somewhat of an art. For this reason, the de novo synthesis of most viruses is still more difficult than stealing a sample from a laboratory or isolating the agent from nature (Epstein, 2008). It is just a matter of time, however, before technological advances further reduce costs and the frequency of errors, making genome synthesis readily affordable and accessible (National Academies of Sciences, 2006).
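To make the base-pairing rules and the oligonucleotide route just described more concrete, the short Python sketch below computes the complementary strand of a DNA sequence and splits a longer sequence into overlapping fragments of the kind a provider might supply. It is purely illustrative: the function names, fragment lengths and example sequence are hypothetical, and real gene-synthesis workflows rely on specialised design software and laboratory protocols rather than anything this simple.

```python
# Purely illustrative sketch (hypothetical names and toy parameters): it mirrors the
# Watson-Crick pairing rules and the idea of ordering short, overlapping
# oligonucleotides that are later stitched together, as described in the text.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}  # A pairs with T, G pairs with C


def reverse_complement(seq):
    """Return the mirror-image strand that would pair with `seq` in a double helix."""
    return "".join(COMPLEMENT[base] for base in reversed(seq.upper()))


def split_into_oligos(seq, oligo_len=60, overlap=20):
    """Break a long sequence into short overlapping fragments (toy version).

    The overlaps are what let the fragments be reassembled in the correct order;
    commercial providers typically supply oligos well under 200 bases long.
    """
    step = oligo_len - overlap
    return [seq[i:i + oligo_len] for i in range(0, len(seq) - overlap, step)]


if __name__ == "__main__":
    fragment = "ATGGCTAGCTTGACGTACGT"        # a made-up 20-base sequence, not a real gene
    print(reverse_complement(fragment))       # -> ACGTACGTCAAGCTAGCCAT
    print(split_into_oligos(fragment, oligo_len=8, overlap=4))
    # -> ['ATGGCTAG', 'CTAGCTTG', 'CTTGACGT', 'ACGTACGT']
```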
A brief history of synthetic genomics

The field of synthetic genomics dates back to 1979, when the first gene was synthesized by chemical means (Khorana, 1979). The Indian-American chemist Har Gobind Khorana and 17 co-workers at the Massachusetts Institute of Technology took several years to produce a small gene made up of 207 DNA nucleotide base pairs. In the early 1980s, two technological developments facilitated the synthesis of DNA constructs: the invention of the automated DNA synthesizer and the polymerase chain reaction, which can copy any DNA sequence many million-fold. By the end of the 1980s, a DNA sequence of 2100 base pairs had been synthesized chemically (Mandecki et al, 1990). In 2002, the first functional virus was synthesized from scratch: poliovirus, whose genome is a single-stranded RNA molecule about 7500 nucleotide base pairs long (Cello et al, 2002). Over a period of several months, Eckard Wimmer and his co-workers at the State University of New York at Stony Brook assembled the poliovirus genome from customized oligonucleotides, which they had ordered from a commercial supplier. When placed in a cell-free extract, the viral genome then directed the synthesis of infectious virus particles. The following year, Hamilton Smith and his colleagues at the J. Craig Venter Institute in Maryland published a description of the synthesis of a bacteriophage, a virus that infects bacteria, called φX174. Although this virus contains only 5386 DNA base pairs (fewer than poliovirus), the new technique greatly improved the speed of DNA synthesis. Compared with the more than a year that it took the Wimmer group to synthesize poliovirus, Smith and his colleagues made a precise, fully functional copy of the φX174 bacteriophage in only 2 weeks (Smith et al, 2003). Since then, the pace of progress has been truly remarkable. In 2004, DNA sequences 14 600 and 32 000 nucleotides long were synthesized (Kodumai et al, 2004; Tian et al, 2004). In 2005, researchers at the US Centers for Disease Control and Prevention used sequence data derived from the frozen or paraffin-fixed cells of victims to reconstruct the genome of the 'Spanish' strain of influenza virus, which was responsible for the flu pandemic of 1918-1919 that killed tens of millions of people worldwide; the rationale for resurrecting this extinct virus was to gain insights into why it was so virulent. In late 2006, scientists resurrected a 'viral fossil', a human retrovirus that had been incorporated into the human genome around 5 million years ago (Enserink, 2006). In 2008, a bat virus related to the causative agent of human SARS was recreated in the laboratory (Skilton, 2008). That same year, the J. Craig Venter Institute synthesized an abridged version of the genome of the bacterium Mycoplasma genitalium, consisting of 583 000 DNA base pairs (Gibson et al, 2008). In May 2010, scientists at the Venter Institute announced the synthesis of the entire genome of the bacterium Mycoplasma mycoides, consisting of more than 1 million DNA base pairs (Gibson et al, 2010; Pennisi, 2010). The total synthesis of a bacterial genome from chemical building blocks was a major milestone in the use of DNA synthesis techniques to create more complex and functional products.

A methodological shift

Synthesizing a genome from scratch is a significant methodological shift from recombinant DNA technology, which involves the cutting and splicing of pre-existing genetic material. Because chemical synthesis can create any conceivable DNA sequence, it allows for greater efficiency and versatility in existing areas of research, while opening new paths of inquiry and innovation that were previously constrained.

Potential for Misuse

Only a few viral pathogens have any real military utility. Traditional effectiveness criteria for antipersonnel agents are infectivity (the ability to infect humans reliably and cause disease), virulence (the severity of the resulting illness), persistence (the length of time the pathogen remains infectious after being released into the environment) and stability when dispersed as an aerosol cloud. Early US developers of biological weapons preferred veterinary diseases such as anthrax, tularemia and Venezuelan equine encephalitis, which are not contagious in humans, because they made a biological attack more controllable.
The Soviet Union, in contrast, weaponized contagious diseases such as pneumonic plague and smallpox for strategic attacks against distant targets, in the belief that the resulting epidemic would not boomerang against the Soviet population. The choice of pathogen also depends on the intended use, such as whether the aim is to kill or incapacitate, contaminate terrain for long periods or trigger a major epidemic. Of the pathogenic viruses that can be found in nature, some are easier to isolate than others. Filoviruses such as Marburg and Ebola have animal reservoirs that are unknown, poorly understood or accessible only during active outbreaks. As a result, isolating these viruses from a natural source would require skill, good timing and the ability to transport the virus safely from the site of an outbreak. Because it is not easy to isolate natural strains with the desired characteristics, most pathogens developed in the past as biological weapons were either deliberately bred or genetically modified. The increased accessibility and affordability of DNA synthesis techniques could eventually make it easier for would-be bioterrorists to acquire dangerous viral pathogens, particularly those that are restricted to a few high-security labs (such as the smallpox virus), are difficult to isolate from nature (such as Ebola and Marburg viruses) or have become extinct (such as the Spanish influenza virus). In theory, DNA synthesis techniques might also permit the creation of bioengineered agents more deadly and communicable than those that exist in nature, but in fact this scenario appears unlikely. As Tucker and Zilinskas (2006, p. 38) note: To create such an artificial pathogen, a capable synthetic biologist would need to assemble complexes of genes that, working in union, enable a microbe to infect a human host and cause illness and death. Designing the organism to be contagious, or capable of spreading from person to person, would be even more difficult. A synthetic pathogen would also have to be equipped with mechanisms to block the immunological defenses of the host, characteristics that natural pathogens have acquired over eons of evolution. Given these daunting technical obstacles, the threat of a synthetic 'super-pathogen' appears exaggerated, at least for the foreseeable future. Accordingly, the most immediate risk of misuse associated with DNA synthesis technology is the recreation of known viral pathogens rather than the creation of entirely new ones. (As bacterial genomes are generally far larger than viral genomes, synthesizing them is more difficult and time-consuming.) Although the primary threat of misuse comes from state-level biological warfare programs, two possible scenarios involving individuals provide cause for concern. The first scenario involves a 'lone operator', such as a highly trained molecular biologist who is motivated to do harm by ideology or personal grievance. The second scenario involves a 'biohacker' who does not necessarily have malicious intent but seeks to create bioengineered organisms out of curiosity or to demonstrate technical prowess, a common motivation of many designers of computer viruses. As synthetic biology training becomes increasingly available to students at the college and even high-school levels, a 'hacker culture' may emerge, increasing the risk of reckless or malevolent experimentation (Tucker and Zilinskas, 2006, pp. 40-42). 
Ease of misuse In assessing the potential for misuse of DNA synthesis, it is important to examine the role of tacit knowledge in synthesizing a pathogen at the laboratory bench. The construction of a pathogenic virus by assembling pieces of synthetic DNA requires extensive training in basic molecular-biology techniques, such as ligation and cloning, including hands-on experience that is not 'reducible to recipes, equipment, and infrastructure' (Vogel, 2006, p. 676). This requirement for tacit knowledge is what the US National Science Advisory Board for Biosecurity (NSABB) meant when it noted that '[t]he technology for synthesizing DNA is readily accessible, straightforward and a fundamental tool used in current biological research. In contrast, the science of constructing and expressing viruses in the laboratory is more complex and somewhat of an art. It is the laboratory procedures downstream from the actual synthesis of DNA that are the limiting steps in recovering viruses from genetic material' (NSABB, 2006, p. 4). The World at Risk report, released in December 2008 by the US Commission on the Prevention of WMD Proliferation and Terrorism, recommended that efforts to prevent bioterrorism focus less on the risk of terrorists becoming biologists and more on the risk of biologists becoming terrorists (WMD Commission, 2008). The report failed to emphasize, however, that not all biologists are of concern. Instead, the onus is on those who have worked in state-sponsored biological weapons programs and acquired both expertise and experience in weaponizing pathogens. Accessibility of the technology Synthesizing a virus and converting it into an effective biological weapon would require overcoming several technical hurdles. First, the de novo synthesis of an infectious viral genome requires an accurate genetic sequence. Although DNA or RNA sequences are available for many pathogenic viruses, the quality of the sequence data varies. Genomes published in publicly available databases often contain errors, some of which may be completely disabling whereas others would attenuate the virulence of a synthetic virus. In addition, some published sequences were not derived from natural viral strains but rather from cultures that had spent many generations in the lab and lost virulence through attenuating mutations. A second difficulty with the synthesis of a highly pathogenic virus is ensuring infectivity. For some viruses, such as poliovirus, the genetic material is directly infectious, so introducing it into a susceptible cell results in the production of complete virus particles. For other viruses, such as the causative agents of influenza and smallpox, the viral genome itself is not infectious and requires additional components (such as enzymes involved in replicating the genetic material) whose function must be replaced. A third technical hurdle relates to the characteristics of the viral genome. Viruses with large genomes are harder to synthesize than those with small genomes. In addition, RNA viruses with one positive strand are easier to construct than RNA viruses with one negative strand, which in turn are easier to synthesize than double-stranded DNA viruses. Thus, poliovirus is relatively easy to synthesize because it has a small genome made of positive-stranded RNA, whereas the smallpox virus is hard to synthesize because it has a very large genome made up of double-stranded DNA.
Synthesizing the Marburg and Ebola viruses would be moderately difficult: although their genomes are relatively small, they are not directly infectious and enabling them to produce virus particles would be challenging (WMD Commission, 2008). Despite these hurdles, the risk of misuse of DNA synthesis is expected to increase over time. One analyst has claimed that 10 years from now, it may be easier to synthesize almost any pathogenic virus than to obtain it by other means (WMD Commission, 2008, p. 15). Nevertheless, even the successful synthesis of a virulent virus would not yield an effective biological weapon (Frerichs et al, 2004;Zilinskas, 2006). Once an appropriate viral pathogen has been synthesized, it would first have to be cultivated. Viruses are significantly harder to mass-produce than bacteria because they replicate only in living cells. One low-tech option would be to grow virus in fertilized eggs, but to avoid contamination the eggs would have to be specially ordered -not an easy task without attracting attention. Cultivating infectious virus is also extremely hazardous to the perpetrators and to those living nearby. Disseminating biological agents effectively involves even greater technical hurdles. Whereas persistent chemical-warfare agents such as sulfur mustard and VX nerve gas are readily absorbed through the intact skin, bacteria and viruses cannot enter the body by that route unless the skin has been broken. Thus, biological agents must usually be ingested or inhaled to cause infection. To expose large numbers of people through the gastrointestinal tract, a possible means of delivery is the contamination of food or drinking water, yet neither of these scenarios would be easy to accomplish. Large urban reservoirs are usually unguarded, but unless the terrorists dumped in a massive quantity of biological agent, the dilution factor would be so great that no healthy person drinking the water would receive an infectious dose (Tucker, 2000). Moreover, modern sanitary techniques such as chlorination and filtration are designed to kill pathogens from natural sources and would probably be equally effective against a deliberately released agent. The only way to inflict mass casualties with a biological agent is by disseminating it as an airborne aerosol: an invisible cloud of infectious droplets or particles so tiny that they remain suspended for long periods and can be inhaled by large numbers of people. A concentrated aerosol, released into the atmosphere of a densely populated urban area, could potentially infect many thousands of victims. After an incubation period of a few days (depending on the type of agent and the inhaled dose), the exposed population would experience an outbreak of incapacitating or fatal illness. Nevertheless, the aerosol delivery of a biological agent entails major technical hurdles. To infect through the lungs, the infectious particles must be between 1 and 5 microns (millionths of a meter) in diameter. Generating an aerosol cloud with the particle size and concentration needed to cover a large area would require a sophisticated delivery system, such as an airborne sprayer. There is also a trade-off between the ease of production and the effectiveness of dissemination. The easiest way to produce microbial agents is in liquid form, yet when a slurry is sprayed into the air, it forms heavy droplets that fall to the ground so that only a small percentage of the agent is aerosolized. 
In contrast, if the microbes are dried to a solid cake and milled into a fine powder, they become far easier to aerosolize, but the drying and milling process is technically difficult. Even if aerosolization could be achieved, the effective delivery of biological agents in the open air would depend on the prevailing atmospheric and wind conditions, creating additional uncertainties. Only under highly stable atmospheric conditions will an aerosol cloud remain close to the ground where it can be inhaled rather than being rapidly dispersed. Moreover, most microorganisms are sensitive to ultraviolet radiation and cannot survive more than 30 min in bright sunlight, limiting effective military use to nighttime attacks. The one major exception to this rule is anthrax bacteria, which can form spores with a tough protective coating that enables them to survive for several hours in sunlight. Terrorists could, of course, stage a biological attack inside an enclosed space such as a building, a subway station, a shopping mall or a sports arena, but even here the technical hurdles would be by no means trivial. Finally, even if a synthetic virus was disseminated successfully in aerosol form, the outcome of the attack would depend on factors such as the basic health of the people who were exposed and the speed with which the public health authorities and medical professionals detected the outbreak and moved to contain it. A prompt response with effective medical countermeasures, such as the administration of antiviral drugs combined with vaccination, might significantly blunt the impact of an attack. In addition, simple, proven methods such as the isolation and home care of infected individuals, the wearing of face masks, frequent hand washing and the avoidance of hospitals where transmission rates are high have been effective at slowing the spread of epidemics. In sum, the technical challenges involved in carrying out a mass-casualty biological attack are formidable. Imminence and magnitude of risk Although the de novo synthesis of viral pathogens is relatively difficult today, rapid improvements in the cost, speed and accuracy of DNA synthesis mean that the risk of misuse of the technology will increase over time, although by how much remains a matter of debate. For the next 5 years, the greatest risk will involve the synthesis of a small number of highly pathogenic viruses that are currently extinct or otherwise difficult to obtain. Access to stocks of the smallpox virus and the Spanish influenza virus is tightly controlled: samples of the former are stored at two authorized repositories in the United States and Russia, while samples of the latter exist only in a few laboratories. Synthesizing the smallpox virus would be difficult because its genome is one of the largest of any virus and is not directly infectious. Although the genome of the 1918 influenza virus is relatively small and has been reconstructed and published, constructing the virus from scratch would be moderately difficult because the genome is not directly infectious (Garfinkel et al, 2007). Contrary to worst-case planning scenarios, in which the aerosol dispersal of military-grade agents causes hundreds of thousands of deaths, only two bioterrorist attacks in the United States are known to have caused actual casualties.
Both incidents involved the use of crude delivery methods: the deliberate contamination of food with Salmonella bacteria by the Rajneeshee cult in Oregon in 1984, and the mailing of powdered anthrax spores through the postal system in 2001. Such low-tech attacks are likely to remain the most common form of bioterrorism. They are potentially capable of inflicting at most tens to hundreds of fatalities, within the destructive range of high-explosive bombs, but not the mass death predicted by many worst-case scenarios (Tucker, 2000). Oversight Current governance measures Much can be done at the national or regional level to manage the risk of misuse of DNA synthesis. The fact that only about 50 companies worldwide currently possess the advanced know-how and technical infrastructure needed to produce gene-length DNA molecules offers a window of opportunity to introduce governance measures. In Europe, concerns about genome synthesis have focused on biosafety, the nature and integrity of life, equity and intellectual property, and public confidence and engagement, rather than on security and deliberate misuse (Schmidt, 2006; Feakes, 2008/2009; Lentzos, 2009). Typical of this approach is the European Commission's assessment that the most pressing need is 'to examine whether existing safety regulations for the management of engineered microorganisms provide adequate protection against inadvertent release of "synthetic" pathogens. In particular, who is responsible for ascertaining and quantifying risks, and for implementing any clean-up measures that might be undertaken?' (European Commission, 2005). Two European countries, the United Kingdom and the Netherlands, stand out as having considered the biosecurity aspects of synthetic genomics in some detail, and both have concluded that the current regulatory frameworks are adequate to address the risk of misuse. The United States has been far more aggressive in addressing the security dimensions of gene synthesis. In November 2009, the Department of Health and Human Services (DHHS) published a draft 'Screening Framework Guidance for Synthetic Double-Stranded DNA Providers' in the Federal Register, and the finalized guidelines were published a year later, in October 2010 (US DHHS, 2009). These guidelines call for subjecting all requests for synthetic double-stranded DNA to a process of customer and sequence screening. Upon receiving a DNA synthesis order, the supplier should review the information provided by the customer to verify its accuracy and check for 'red flags' suggestive of illicit activity. If the information provided raises concerns, the supplier should ask the customer for additional information. Screening the requested DNA sequence to identify any sequences derived from or encoding Select Agents is also recommended. If the customer or the sequence raises concerns, providers are urged to clarify the intended end-use. In cases where follow-up screening does not resolve the concern, providers are encouraged to seek advice from designated government officials. The US guidance document also recommends that providers retain electronic copies of customer orders for at least 8 years, the duration of the statute of limitations for prosecution. Although adherence to the screening framework is considered voluntary, the guidance reminds providers of their legal obligations under existing export control regulations.
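To make the two-step screening process described in the guidance more concrete, the following Python sketch outlines one possible order-screening workflow under stated assumptions: the database entry, the order fields and the exact k-mer comparison are hypothetical simplifications introduced here for illustration, whereas real providers rely on curated databases of sequences of concern, BLAST-style similarity searches and expert human review.

```python
# Minimal, hypothetical sketch of a two-step screening workflow for synthetic
# DNA orders (customer screening followed by sequence screening). The naive
# exact 30-mer lookup below is only a stand-in for the similarity searches
# used in practice, and CONCERN_DB is an invented placeholder, not a real
# Select Agent database.

CONCERN_DB = {
    "example_sequence_of_concern": "ATGACCGGTTACGATCGTACGGATCCTTAGACGT",
}

K = 30  # window length used for the naive sequence comparison


def kmers(seq, k=K):
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}


def screen_customer(order):
    """Flag obvious 'red flags' in the customer record (illustrative only)."""
    flags = []
    if not order.get("institution"):
        flags.append("no verifiable institutional affiliation")
    if order.get("delivery_address", "").lower().startswith("p.o. box"):
        flags.append("delivery to a P.O. box")
    return flags


def screen_sequence(sequence):
    """Report any sequence of concern sharing an exact k-mer with the order."""
    order_kmers = kmers(sequence)
    return [name for name, ref in CONCERN_DB.items()
            if order_kmers & kmers(ref)]


def screen_order(order):
    hits = screen_sequence(order["sequence"])
    flags = screen_customer(order)
    if hits or flags:
        # In practice, hits are reviewed by human experts and, if unresolved,
        # referred to designated government officials before the order ships.
        return {"status": "hold_for_review", "sequence_hits": hits,
                "customer_flags": flags}
    return {"status": "cleared", "sequence_hits": [], "customer_flags": []}


if __name__ == "__main__":
    demo_order = {
        "customer": "Example Lab",
        "institution": "Example University",
        "delivery_address": "12 Example Street",
        "sequence": "GGG" + CONCERN_DB["example_sequence_of_concern"] + "CCC",
    }
    print(screen_order(demo_order))  # -> hold_for_review, with one sequence hit
```

The point of the sketch is the decision structure (automated screen first, human follow-up only on hits or red flags), not the matching algorithm itself, which any provider would replace with a proper alignment-based search.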
Recognizing the security concerns associated with synthetic DNA, a number of gene-synthesis companies have already begun screening customers and orders on their own initiative (International Association of Synthetic Biology (IASB), 2008). The IASB, a consortium of mainly German companies, launched its 'Code of Conduct for Best Practices in Gene Synthesis' on 3 November 2009 (Lok, 2009). Like the US government guidance document, the Code of Conduct recommends an initial screen to confirm the customer's bona fides, followed by an automated screen of the sequence order using a computer program to search for similarities between gene sequences. Any hits from the automated screen are then assessed by human experts. If the hits are judged to be real and not false positives, follow-up screening is done to verify the legitimacy of the customer before the order is filled (IASB, 2008). Shortly before the IASB Code of Conduct was launched, two companies that had initially been involved in the process dropped out and established their own group, the International Gene Synthesis Consortium (IGSC). This body includes five of the world's leading gene-synthesis companies and claims to represent 80 per cent of the industry (Check Hayden, 2009). Because of its large market share, the IGSC asserts it has the experience to develop workable screening measures and has put forward a 'Harmonized Screening Protocol' to rival the IASB Code of Conduct (Bhattacharjee, 2009). As a result, gene-synthesis companies have been left to decide whether to adopt one of the three competing standards, to devise their own by mixing and matching various elements or to ignore the process altogether. A new governance framework Previous surveys on the effectiveness of voluntary self-governance in the biotechnology industry have highlighted inconsistencies in the way the regimes are implemented (Lentzos, 2006). For example, biotechnology companies vary greatly in how they use Institutional Biosafety Committees (IBCs) to review recombinant-DNA research, including the structure of the committees, the frequency of meetings, the quality of minutes produced, and whether or not the committees approve individual projects or groups of projects. The IBCs also differ in how they interpret their purpose and responsibilities (Lentzos, 2006). Similarly, most providers of synthetic DNA are sensitive to security concerns and would probably agree to implement some sort of screening practices if they are not doing so already, but it is unclear what the minimum standards should be. Who decides if the DNA sequence database used for screening purposes is adequate? Is it sufficient to retain order records in the form of broad statistics, or must the details of each individual order be kept? Is 5 years long enough to retain records, rather than 8? One way to settle such questions is to establish a set of minimum screening standards through statutory legislation rather than voluntary guidance. In devising a governance framework for the de novo synthesis of viral genomes, it is useful to think of regulation as a process that operates through three different mechanisms to influence both formal and informal behavior (Corneliussen, 2001; Lentzos, 2008). 'Hard law' regulates companies by explicitly imposing certain practices through statutory legislation, 'soft law' regulates companies by standardizing particular practices and 'informal law' regulates companies by designating particular practices as necessary for success.
Compliance with the three forms of regulation confers organizational legitimacy on companies and helps to ensure their survival. The most effective regulatory frameworks include all three kinds of law reinforcing each other. Much of the discussion on the regulation of gene synthesis has focused on ensuring that the burgeoning gene-synthesis industry does not bear unnecessary burdens. Yet regulatory law can benefit suppliers if it increases public confidence in the technology. This advantage is particularly relevant in the biotechnology field because private biotech companies ultimately depend on social support for the creation of new markets. Moreover, a regulatory regime that leads companies to act in a responsible manner (and to be seen as doing so) may actually be more profitable than a less restrictive regime that generates constant controversy and hostile campaigning. Indeed, Michael Porter has argued that strict environmental regulations, rather than shifting external costs onto companies and burdening them relative to competitors in countries with less stringent regulations, can result in a 'win-win' situation in which the companies' private costs are reduced along with the external costs they impose on the environment (Porter and van der Linde, 1995). Porter concludes, 'Strict environmental regulations do not inevitably hinder competitive advantage against foreign rivals, indeed, they often enhance it'. Thus, the synthetic DNA industry could potentially benefit from a regulatory regime that carefully balances stringency with legitimacy, although this solution would require companies to accept a certain regulatory burden. Arguing for statutory legislation is not meant to imply that voluntary measures have no merit. Self-governance may provide incentives for companies to behave in a responsible way. The reward for adopting screening practices, for example, is inclusion in the 'club' of companies that are seen as reputable and 'doing the right thing', sending a positive signal to customers and investors. In this way, successful companies can help to regulate others by designating screening practices as necessary for economic success. If, however, the screening guidelines are not generally adhered to, then self-governance may discourage other companies from implementing them, especially when there are costs involved. This is a situation where the force of law can be particularly persuasive. Indeed, the gene-synthesis industry has recognized the problem of non-compliance with voluntary guidelines. A workshop report from the IASB notes, 'Ultimately, the definition of standards and the enforcement of compliance with these is a government task' (IASB, 2008, p. 14). Statutory legislation is also important for managing rogue companies. Commenting on the IASB's early efforts to develop a code of conduct, a Nature Editorial argued that they were 'laudable first steps' but that synthetic DNA providers 'still need independent oversight' in the form of statutory legislation. There have been, and will probably continue to be, companies that are not interested in cooperating with any industry group, and that are happy to operate in the unregulated grey area. The ultimate hope is that customers will put economic pressure on those non-compliers to fall in line, or else lose all but the most disreputable business. But that is just a hope. As the recent meltdowns on Wall Street have indicated, industry self-policing can sometimes fail dramatically. 
When bad business practices can have grave effects for the public, regulators should be firm and proactive. (Nature Editorial, 2008) Another approach is professionalization, which lies between self-governance and statutory measures. In most jurisdictions, all professional practitioners are licensed and belong to an association established by law, which sets the standards of practice for its members in order to align them with the public good. The officers of the association are elected by the members and are expected to be advocates for the profession. This combination of a legislated mandate and collegial self-governance provides accountability for the profession as a whole and for its individual practitioners. Weir and Selgelid argue that the professionalization of synthetic biology would establish educational standards for its members and define normative standards of practice, with the aim of ensuring competence and preventing misconduct. By combining self-governance and legally-authorized governance, this approach avoids the polarization that has characterized much of the debate over the regulation of synthetic biology (Weir and Selgelid, 2009). Conclusion DNA synthesis is a powerful enabling technology that has many beneficial applications but also entails a significant risk of misuse. An optimal strategy to limit this risk would entail applying the three modes of governance (hard law, soft law and informal law) to DNA synthesis so that (i) national governments regulate companies by imposing a baseline of minimum security measures that all providers of synthetic DNA must adopt; (ii) the DNA synthesis community reinforces the statutory legislation through a professional code of conduct that regulates gene-synthesis companies across borders and encourages universal adherence despite differing national assessments of the risk of misuse; and (iii) role-model companies, such as commercial suppliers that have adopted the IASB or IGSC protocols, regulate other companies by designating screening practices as necessary for economic success, much as ISO accreditation and other non-statutory regimes have become accepted as requirements to operate in other fields.
A Salt Overly Sensitive Pathway Member from Brassica juncea BjSOS3 Can Functionally Complement ΔAtsos3 in Arabidopsis Background: The Salt Overly Sensitive (SOS) pathway is a well-known pathway in arabidopsis, essential for the maintenance of ion homeostasis and thus conferring salt stress tolerance. In arabidopsis, the Ca2+-activated SOS3 interacts with SOS2, which further activates SOS1, a Na+/H+ antiporter responsible for removing toxic sodium ions from the cells. In the present study, we have shown that the three components of the SOS pathway, the BjSOS1, BjSOS2 and BjSOS3 genes, exhibit differential expression patterns in response to salinity and ABA stress in contrasting cultivars of Brassica. It was also noticed that constitutive expression of all three SOS genes is higher in the tolerant cultivar B. juncea as compared to the sensitive B. nigra. In silico interaction of BjSOS2 and BjSOS3 has been reported recently, and here we demonstrate in vivo interaction of these two proteins in onion epidermal peel cells. Further, overexpression of BjSOS3 in the corresponding arabidopsis mutant ΔAtsos3 was able to rescue the mutant phenotype and confer higher tolerance towards salinity stress at the seedling stage. Conclusion: Taken together, these findings demonstrate that the B. juncea SOS3 (BjSOS3) protein is a functional ortholog of its arabidopsis counterpart and thus show a strong functional conservation of the SOS pathway responsible for salt stress signalling between arabidopsis and Brassica species. INTRODUCTION Among the various abiotic stresses, plant growth and development are significantly affected by soil salinity [1]. High salt concentration affects the plant mainly through osmotic and ionic stress [2]. Plants develop various defense mechanisms to maintain appropriate ion homeostasis by actively excluding the ions from the cell or partitioning them into the vacuole [3,4]. Among these ions, the sodium ion (Na+) is the primary source of toxicity in cells, which causes plant growth inhibition. Higher Na+ leads to low concentrations of K+ in the cell (because of its ability to compete with K+), thereby affecting the activity of many essential enzymes [5]. Plant cells maintain ion homeostasis by maintaining an appropriate K+/Na+ ratio in the cytoplasm, to provide optimal conditions for biochemical and metabolic activities [6,7]. The SOS (Salt Overly Sensitive) pathway, which is responsible for the maintenance of Na+ and K+ concentrations in a cell, comprises three proteins, namely SOS1, SOS2 and SOS3 [6,8]. It has been reported earlier that the activation of the Na+/H+ antiporter AtSOS1 by salt stress is controlled via phosphorylation by the AtSOS3 and AtSOS2 proteins [9]. The carboxy-terminal regulatory domain of AtSOS2 interacts with AtSOS3 through the FISL motif [10,11]. SOS3 is a calcium-binding protein and belongs to the EF-hand type family of proteins in arabidopsis. This unique family of proteins shows similarity with the animal β-subunit of calcineurin and animal neuronal calcium sensors [12,13]. Co-expression of the SOS2 and SOS3 proteins along with SOS1 has shown higher tolerance to salt stress in a sodium-transport-deficient yeast mutant than expression of SOS2 or SOS3 proteins independently, thereby indicating the role of the SOS2-SOS3 complex in the activation of SOS1 [11]. SOS2-SOS3 interaction in the SOS pathway has also been shown by sos3/sos2 double-mutant analysis in arabidopsis [14].
It has also been observed that sos1, sos2 and sos3 mutants are hypersensitive to Na+ and Li+ ions, thus confirming the importance of these proteins in salinity tolerance in plants [8]. Further, transgenic arabidopsis plants overexpressing different SOS proteins, individually or in combination, have shown increased tolerance to salinity stress [15,16]. Brassica is an important oilseed crop, which diverged from arabidopsis around 14.5-20.4 million years ago from a common ancestor [17]. In the family Brassicaceae, comparative genetic mapping has revealed co-linear chromosome segments [18,19], with linkage arrangements having been reported between arabidopsis and B. oleracea [20]. Rana et al. [21] have reported that the Brassica genome has duplicated, or possibly triplicated, relative to the corresponding homologous segments of arabidopsis. The genomes of various Brassica species are significantly larger than that of arabidopsis (125 Mb) and range from 529-696 Mb for the diploids and 1068-1284 Mb for the polyploids [22]. It is also presumed that a few novel gene interactions might have evolved, potentially by sub-functionalization and/or neofunctionalization of paralogs [23]. Recently, we have reported the transcriptome-based genome assembly of B. juncea var. CS52, a salt-tolerant variety [24]. One of the key findings of this study is that B. juncea has higher transcript abundance of various stress-related genes from different metabolic pathways and is able to tolerate salinity stress through efficient ROS-scavenging machinery. However, the details of how this homeostasis is maintained are yet to be discovered at the molecular level. In an earlier study, we have shown that SOS pathway-related genes show a strong correlation with salinity tolerance in Brassica species [25,26]. We have also shown the direct effect of the Ca2+ chelator EGTA on the stress inducibility of BjSOS3, thus establishing its calcium-sensing nature. Recently, in silico interactions of BjSOS2 and BjSOS3 of Brassica juncea have also been shown [27]. In the present study, based on BiFC analysis, we have shown in vivo interaction of BjSOS2 with BjSOS3 by co-expressing these proteins in onion peel epidermal cells. We have also studied qRT-PCR-based transcript abundance of BjSOS1, BjSOS2 and BjSOS3 in contrasting genotypes of Brassica under salinity stress and ABA treatment. Most importantly, we have shown that the BjSOS3 allele isolated from B. juncea could functionally complement the ΔAtsos3 mutant, thereby showing the structural and functional conservation of the SOS pathway across plant genera. Plant Growth Seeds of B. juncea var. CS52 and B. nigra obtained from ICAR-CSSRI, Karnal, India were thoroughly washed with de-ionized water and germinated in a hydroponic system filled with half-strength Hoagland's medium. The hydroponic setup was kept in the plant growth chamber for 48 h in the dark, then exposed to light and kept under control conditions (25±2°C, 12 h light/dark cycles). Brassica seedlings were grown for nine days in hydroponics with continuous air bubbling and renewal of the nutrient media every two days. Stress Treatment, RNA Isolation and cDNA Synthesis For salinity and ABA stress treatment, hydroponically grown nine-day-old seedlings of Brassica spp. were transferred to ½ Hoagland media containing 200 mM NaCl or 100 µM ABA, respectively.
Shoots of Brassica seedlings (200 mg), in triplicate, were harvested at 0 h, 8 h and 24 h of treatment and total RNA was extracted using Tri reagent (Life Technologies, Rockville, USA) as described by Kumar et al. [26]. Total RNA was analysed for its integrity and purity before proceeding to cDNA synthesis. Using a cDNA synthesis kit (Fermentas Life Sciences, Burlington, Canada), cDNA was synthesised from 2 µg of total RNA from each sample, as described by the manufacturer. qRT-PCR-based Expression Analysis Primers for the BjSOS1, BjSOS2 and BjSOS3 genes were designed with Primer Express 3.0 software (Applied Biosystems, California, USA) using default parameters. To ensure high specificity, we selected the 3'UTR region of these genes for primer design. BLASTn using the KOME and NCBI databases confirmed the uniqueness of each primer pair for amplifying the selected genes. The PCR mixture contained 5 µl of cDNA (10 times diluted), 10 µl of 2× SYBR Green PCR Master Mix (Applied Biosystems, California, USA) and 4 nM of each gene-specific primer in a final volume of 20 µl. The real-time PCR was performed employing the StepOne Real-Time PCR System (Applied Biosystems, California, USA). All the PCR reactions were performed under the following conditions: 10 min at 95°C and 40 cycles of 15 s at 95°C and 1 min at 58°C, in 96-well optical reaction plates. The specificity of amplification was tested by dissociation curve analysis and agarose gel electrophoresis. To minimize any error, two biological samples with three technical replicates each were taken for the expression analysis of each gene. The expression of each gene in different RNA samples was normalized to the expression of the internal control (actin gene from Brassica; BjAct). Fold change in transcript abundance for each gene in different samples was calculated relative to its expression in B. nigra control seedlings using the 2^-ΔΔCt method [24] (an illustrative sketch of this calculation is provided at the end of this article). For statistical significance, Student's t-test was performed using STATISTICA Data Miner software (Version 7, StatSoft, Oklahoma, USA). Results are represented as mean ± standard error (SE). Cloning of BjSOS3 in Plant Transformation Vector and Raising of Transgenic Arabidopsis To raise arabidopsis plants overexpressing BjSOS3, the full-length BjSOS3 gene was cloned at the NcoI and SpeI sites of pCAMBIA1304 under the control of the 35S CaMV promoter. The positive clone was confirmed by colony PCR as well as restriction digestion. For the complementation assay, the construct carrying 35S CaMV-BjSOS3 was transformed into Agrobacterium strain GV3101 and transformation of the arabidopsis mutant line ΔAtsos3 (CS3869, procured from ABRC) was carried out (Fig. S2). Putative transgenic plants (T1) harboring BjSOS3 were screened on Murashige and Skoog (MS) agar medium containing 30 mg/l hygromycin. To reconfirm the stable insertion of the gene in the plant genome, tissue PCR was performed on these lines using gene-specific primers. Seeds from homozygous plants (which tested positive in tissue PCR for the transgene) were further selected (T2) and analyzed for transgene expression. Seeds from homozygous T2 plants were harvested and T3 seedlings were used for the complementation assay. Complementation Assay for Salinity and ABA Response For the complementation assay, T3 arabidopsis seeds were sterilized and vernalized for four days at 4°C. Seeds were plated on MS agar media and kept in the plant tissue culture room for 2 days at 22±2°C for germination.
After germination, four-day-old seedlings were transferred to MS media or MS media supplemented with NaCl (100 or 200 mM) and a root bending assay was performed as described by Zhu et al. [8]. After 5 days, seedling growth and survival percentage were compared among the BjSOS3-complemented lines, the ΔAtsos3 mutant and wild-type plants (WT). For the ABA response, four-day-old seedlings were transferred to MS media or MS media supplemented with 25 µM ABA and photographed after 7 days. Three independent experiments were performed and results are represented as mean ± standard error (n = 15). Plant Growth Analysis and Measurement of Na+ and K+ Content For root growth and total biomass accumulation analysis under salt stress, four-day-old seedlings were transferred onto MS plates containing 0 or 100 mM NaCl, and after 8 days, root growth and total biomass observations were compared. For estimation of sodium (Na+) and potassium (K+), seedlings were harvested from the plates, briefly rinsed with distilled water, and dried at 65°C for 24 h. The seedlings were weighed and digested with HNO3, and K+ and Na+ concentrations were determined with a flame photometer as described by Xu et al. [28]. Three independent experiments were performed for each analysis and data are shown as mean ± SE. Statistical Analysis For assessing the statistical significance of the data, Student's t-test was performed using STATISTICA Data Miner software (Version 7, StatSoft, Oklahoma, USA). BjSOS1, BjSOS2 and BjSOS3 Transcripts are Induced in Response to Salinity and ABA Stress in Contrasting Genotypes of Brassica In order to analyze the constitutive and salinity-induced transcript abundance for SOS components, Brassica seedlings were subjected to 200 mM NaCl or 100 µM ABA for early (8 h) or late (24 h) durations. Phenotypically, no significant differences were observed after 8 h (early) of salinity or ABA stress in the contrasting Brassica genotypes. However, at the late (24 h) duration of salinity stress, leaf rolling and seedling drooping were observed in the B. nigra (salt-sensitive) genotype (Fig. 1a). Similarly, at the late (24 h) duration of ABA stress, B. juncea seedlings appeared healthier and more robust than B. nigra seedlings (Fig. 1a). Further, transcript levels for BjSOS1, BjSOS2 and BjSOS3 were analyzed in shoot tissues by qRT-PCR. It was observed that constitutive expression of BjSOS1, BjSOS2 and BjSOS3 was higher in the salt-tolerant cultivar B. juncea as compared to the salt-sensitive cultivar B. nigra. BjSOS1 expression in B. nigra was found to be induced in response to salinity stress. Under ABA stress, BjSOS1 expression was induced up to 3-fold compared with the control in B. nigra. In B. juncea, constitutive expression of BjSOS1 was relatively higher than in B. nigra. Upon NaCl and ABA treatment, the expression pattern was similar to that in B. nigra (Fig. 1b). Constitutive expression of BjSOS2 in B. nigra was relatively lower than in B. juncea, but was induced upon salinity and ABA treatment. However, no significant induction in BjSOS2 expression was observed under ABA treatment in the latter (Fig. 1c). Transcript abundance for BjSOS3 was found to be induced by ~2-fold in response to both salinity and ABA in B. nigra (Fig. 1d). In response to salinity stress, B. juncea also showed almost 2.5-fold induction. In summary, contrasting genotypes of Brassica spp.
showed differential regulation of the BjSOS genes in response to NaCl and ABA stress, with the tolerant genotype maintaining higher transcript levels under both early and late durations of the various stress applications, as compared to the sensitive genotype. BjSOS3 Interacts with BjSOS2 In Vivo For investigating the predicted protein-protein interactions, a bimolecular fluorescence complementation (BiFC) assay was performed. For this purpose, we employed the pSAT series of vectors (pSAT-1A-neYFP-N1 and pSAT-1A-ceYFP-N1), wherein the two putative interacting proteins are tagged with the N-terminal or C-terminal enhanced YFP fragment, respectively. Both full-length BjSOS3 and BjSOS2 were cloned at the EcoRI and BamHI sites of the pSAT vectors pSAT-1A-neYFP-N1 and pSAT-1A-ceYFP-N1, respectively. Fig. (1). Growth of seedlings of contrasting Brassica genotypes in response to high salinity and ABA, and transcript abundance for various SOS pathway members. Nine-day-old hydroponically grown Brassica seedlings were subjected to salinity (200 mM NaCl) or ABA (100 µM). qRT-PCR was performed using the cDNA prepared from total RNA isolated from shoots of the seedlings subjected to salinity stress (200 mM NaCl) or ABA (100 µM) for 8 h and 24 h time periods. (a). Representative photograph of seedlings of the Brassica genotypes after 24 h of salinity (200 mM NaCl) or ABA (100 µM). Relative expression of (b) BjSOS1, (c) BjSOS2 and (d) BjSOS3. Bar graphs were plotted between stress duration (X-axis) and -ΔΔCt values (log2 fold change) (Y-axis). Gene expression data were normalised with the plant reference gene 'actin' as an internal control. Relative expression of genes was plotted against the expression of the B. nigra control. The values represented are the mean of two biological and three technical replicates; standard error is shown above the bar. For statistical significance, Student's t-test was performed and asterisks above the graph indicate significant differences from their respective control (Con) at P ≤ 0.05. BjSOS3 Could Rescue the ΔAtsos3 Mutant of Arabidopsis Under Salinity Stress Arabidopsis seeds with the AtSOS3 gene knocked out (mutant stock CS3869) were procured from ABRC and used for the complementation assays. The full-length BjSOS3 gene cloned in the plant expression vector pCAMBIA1304 at the NcoI and SpeI sites was used for transformation of the ΔAtsos3 mutant (Fig. S2). Root bending assays were performed on T3 homozygous seedlings, and phenotypes of the BjSOS3-complemented lines, the ΔAtsos3 mutant and WT arabidopsis seedlings were compared. Morphological observation showed that the WT, the BjSOS3-complemented lines as well as the ΔAtsos3 mutant seedlings grew well and appeared healthy on control Murashige-Skoog (MS) medium without NaCl supplement (Fig. 3a). In contrast, on the media containing 100 mM NaCl, the growth of the ΔAtsos3 mutant was substantially compromised while the complemented lines grew very well (Fig. 3b). On plates with MS agar media containing 200 mM NaCl, the growth of the ΔAtsos3 mutant was drastically compromised and chlorophyll bleaching was observed. However, seedlings of the ΔAtsos3 mutant complemented with BjSOS3 showed higher tolerance to salinity and survived on MS agar media supplemented with 200 mM NaCl (Fig. 3c). It was also found that on the plates containing MS agar media supplemented with 0 or 100 mM NaCl, all the seedlings of the WT, the ΔAtsos3 mutant as well as the ΔAtsos3 mutant complemented with BjSOS3 could survive for 5 days.
However, in the presence of higher salinity (200 mM NaCl), the ΔAtsos3 lines showed a significant reduction in survival, where only 6% of seedlings could survive, in contrast to 80% survival for the WT seedlings. Interestingly, seedlings of the ΔAtsos3 mutant complemented with BjSOS3 showed a response similar to that of the WT, where an 80% survival rate was recorded under similar stress conditions. The assay was further extended to the analysis of root bending, which is a quick way to determine the relative tolerance of arabidopsis seedlings towards salinity stress. On plates containing the normal MS media, all the genotypes showed significant growth in the primary root after the plates were kept in an inverted position, thereby indicating the healthy status of these seedlings. In contrast, on saline growth media (100 mM NaCl), the roots of the ΔAtsos3 mutant could not grow beyond 1 mm in the inverted position, while the WT seedlings showed growth of up to 4 mm in the primary root after the plates were kept in an inverted position (Fig. 3d). This observation clearly shows that 100 mM NaCl is detrimental to the growth of WT arabidopsis seedlings, but it is more so for the ΔAtsos3 mutant seedlings. Interestingly, the ΔAtsos3 mutant lines complemented with the BjSOS3 gene showed a response similar to that of the WT seedlings, where the primary root could grow up to 4 mm in length (Fig. 3d). Under severe salinity stress, i.e. on media containing 200 mM NaCl, none of the seedlings could grow their roots after the plates were kept in the inverted position, thereby indicating that these conditions are not physiological for arabidopsis seedlings. Thus, these results conclusively prove the requirement of the SOS3 protein for survival under salinity stress conditions. These results also showed that AtSOS3 and BjSOS3 are functionally conserved. The ΔAtsos3 Mutant Complemented with BjSOS3 Shows Mutant Phenotype Reversion by Maintaining Ion Homeostasis Under Salinity Stress For root growth and total biomass accumulation analysis under salt stress, we transferred four-day-old seedlings of the WT, the ΔAtsos3 mutant and the ΔAtsos3 mutant complemented with BjSOS3 onto MS plates containing 0 or 100 mM NaCl, and observed their growth parameters for 8 days. Under control conditions, the wild-type plants, the ΔAtsos3 mutant and the BjSOS3-complemented lines displayed a relatively uniform growth phenotype. However, when treated with 100 mM NaCl, growth of the ΔAtsos3 mutant was drastically compromised, as compared to the BjSOS3-complemented lines. Thus, expression of BjSOS3 in the ΔAtsos3 mutant plants substantially alleviates the growth retardation imposed by moderate salt stress, hence rescuing the mutant phenotype. Analysis of growth parameters such as root length and fresh weight of seedlings showed that all the rescued mutants generally displayed a near-wild-type level of salt tolerance, although to different extents in different lines (Fig. 4a). Total fresh weight of seedlings of ΔAtsos3 transformed with BjSOS3 was found to be higher than that of the ΔAtsos3 mutant, but comparable to the WT (Fig. 4b). These data show that the Brassica SOS3 protein can substitute for the corresponding arabidopsis counterpart. Maintenance of K+/Na+ homeostasis is an important requirement for salt tolerance. Fig. (4c) shows that the K+ content in the ΔAtsos3 plants complemented with BjSOS3 was equal to that of the mutant and WT under control conditions.
However, under salinity stress, the seedlings of ΔAtsos3 transformed with BjSOS3 accumulated the same level of K+ as the WT plants, but more K+ than the ΔAtsos3 plants. Since the outcome of the functional SOS pathway operative in plants is efficient ion homeostasis, we decided to check the levels of Na+ in these plants in response to salinity stress. These results indicated that the ΔAtsos3 mutant accumulated high levels of Na+ under salinity stress. On the other hand, Na+ contents in the ΔAtsos3 plants complemented with BjSOS3 were not different from those of the wild-type plants (Fig. 4d), thereby proving the functional conservation of SOS3 across species. Fig. (3). Complementation of the ΔAtsos3 mutant with the BjSOS3 gene. Four-day-old seedlings were transferred to MS media supplemented with (a) 0, (b) 100 or (c) 200 mM NaCl. Plates were placed upside down to allow seedlings to grow in an inverted position for 5 days and pictures were taken. (d). Measurement of root growth after bending of the ΔAtsos3 mutant complemented with BjSOS3, the ΔAtsos3 mutant and WT seedlings shown above. Error bars represent the standard deviation (n = 15). For statistical significance, Student's t-test was performed and asterisks above the graph indicate significant differences from WT at P ≤ 0.05. WT, wild type; BjSOS3C12 and BjSOS3C17 are two representative lines of the ΔAtsos3 mutant complemented with BjSOS3; ΔAtsos3, arabidopsis sos3 mutant. The ΔAtsos3 Mutant Complemented with BjSOS3 Shows Enhanced Tolerance to ABA To see the role of BjSOS3 in the ABA response, BjSOS3-complemented arabidopsis mutants were transferred to MS agar plates supplemented with 25 µM ABA and the phenotype of these seedlings was observed. BjSOS3 transcript accumulation in the complemented arabidopsis lines BjSOS3C12 and BjSOS3C17 at the seedling stage showed constitutive expression of the BjSOS3 gene under control conditions (Fig. 5a). WT and the ΔAtsos3 mutant arabidopsis did not show any transcript corresponding to BjSOS3 under similar growth conditions. Expression of the native arabidopsis AtSOS1 and AtSOS2 in the WT, the BjSOS3-complemented lines and the ΔAtsos3 mutant was also observed using qRT-PCR (Figs. 5b and 5c). Fig. (4). Growth and ion accumulation in seedlings of the ΔAtsos3 mutant complemented with BjSOS3 under salinity. Four-day-old seedlings were transferred to MS agar plates supplemented with 0 or 100 mM NaCl and allowed to grow for 8 days before the observations were taken. (a). Primary root length. (b). Fresh weight of each seedling. (c). K+ content in seedlings. (d). Na+ content in seedlings. For primary root length per seedling and fresh weight per seedling, results are the average from three independent replicates. For K+ and Na+ estimation, three independent experiments were performed and results are mean ± SE. Error bars represent the standard deviation (n = 15). For statistical significance, Student's t-test was performed and asterisks above the graph indicate significant differences from WT at P ≤ 0.05. WT, wild type; BjSOS3C12 and BjSOS3C17 are two representative lines of the ΔAtsos3 mutant complemented with BjSOS3; ΔAtsos3, arabidopsis sos3 mutant. No significant difference was observed among the ΔAtsos3 mutant, complemented mutant and WT arabidopsis seedlings as far as the transcript abundance of AtSOS1 and AtSOS2 is concerned. Four-day-old seedlings of the ΔAtsos3 mutant complemented with BjSOS3 were transferred to MS agar media and observed for the next 7 days.
These seedlings showed no phenotypic difference as compared to the ΔAtsos3 mutant and WT seedlings (Fig. 5d). Analysis of root growth indicated no significant difference between the complemented and mutant arabidopsis seedlings. However, leaf bleaching and root growth inhibition were observed in the ΔAtsos3 mutant grown on MS media supplemented with 25 µM ABA (Fig. 5e). Under ABA stress, BjSOS3-complemented ΔAtsos3 mutant seedlings regained root growth, as compared to the WT or the ΔAtsos3 mutant seedlings (Fig. 5f). DISCUSSION AND CONCLUSION Salinity is known to be a major impediment to achieving the potential yield of a crop plant. Additionally, soil salinization of cultivable agricultural land at an alarming rate is of great concern for fulfilling the food requirements of the expanding world population. Genetic diversity in the plant kingdom provides a rich resource for selecting salinity-tolerant genotypes [29]. It is more useful to search for tolerant plants within the same genotype as compared to another genotype [25]. Additionally, the study of diploidy and polyploidy within genera is another convenient strategy for screening tolerant genotypes [30,31]. In most cases, polyploid plants are reported to possess higher salinity tolerance than their respective diploids [25,32,33]. The SOS (Salt Overly Sensitive) pathway, comprising three major proteins, namely SOS1, SOS2 and SOS3, is an important known mechanism involved in the maintenance of Na+ and K+ homeostasis in plants [6,8]. In terms of gene sequences, this pathway has been shown to be highly conserved among various plant species [26]. Fig. (5). Transcript abundance of SOS members in BjSOS3-complemented transgenic arabidopsis and their ABA stress response. Transgenic arabidopsis seedlings were grown on MS agar plates and samples were harvested for expression analysis. qRT-PCR was performed using the cDNA prepared from total RNA isolated from whole seedlings. Relative expression of (a) BjSOS3, (b) AtSOS1 and (c) AtSOS2. Bar graphs were plotted between stress duration (X-axis) and -ΔCt values (log2 scale) (Y-axis). Gene expression data were normalised with the plant reference gene 'actin' as an internal control. The values represented are the mean of two biological and three technical replicates; standard error is shown above the bar. For statistical significance, Student's t-test was performed and asterisks above the graph indicate significant differences from WT at P ≤ 0.05. Four-day-old seedlings were transferred to MS media supplemented with (d) 0 or (e) 25 µM ABA. After 7 days of growth, pictures were taken and (f) root length was measured. Error bars represent the standard deviation (n = 15). For statistical significance, Student's t-test was performed and asterisks above the graph indicate significant differences from WT at P ≤ 0.05. However, it remains to be seen if the members of the family are functionally conserved among plant species. In the present work, we have made an attempt to dissect out such functional conservation between two members of the Brassicaceae family, i.e. Arabidopsis and Brassica. Gene expression analysis has revealed that the constitutive transcript abundance of BjSOS1, BjSOS2 and BjSOS3 was higher in the B. juncea (salt-tolerant) as compared to the B. nigra (salt-sensitive) cultivar (Fig. 1). Under salinity stress, these three SOS genes, BjSOS1, BjSOS2 and BjSOS3, showed differential regulation in the contrasting genotypes of Brassica species.
The expression dynamics presented in this paper are consistent with the previous study in Brassica seedlings under salinity stress [26,27]. However, these results are based on qRT-PCR and hence are more reliable. Under ABA stress, expression of the SOS genes increased with increasing stress duration in both contrasting genotypes. This analysis showed that these SOS genes are also induced by ABA. Several reports suggest the co-expression of many stress-responsive genes under both salinity and ABA [34]. Shi et al. [15] have shown that one of the salt-stress-responsive SOS members, SOS5, showed increased expression under 100 µM ABA treatment in arabidopsis. In the SOS pathway, SOS2 interacts with SOS3, and subsequently the SOS2-SOS3 complex phosphorylates the Na+/H+ antiporter SOS1 [9] to stimulate its Na+/H+ exchange activity at the plasma membrane [11,35]. Using in silico tools, Kushwaha et al. [26] have shown that BjSOS2 interacts with BjSOS3. We carried out the interaction study by co-expressing BjSOS2 and BjSOS3 in onion peel epidermal cells and found that they interact with each other in vivo (Fig. 2). In a similar study, it was observed that PtSOS2 and PtSOS3 of Populus showed in vivo interaction in yeast and plant cells [36]. In-depth studies of the SOS pathway have established that SOS3, a calcium sensor with four EF-hand domains and an N-myristoylation signal peptide, senses the salinity-induced increase of intracellular calcium [12,37,38] and then activates SOS2, a Ser/Thr protein kinase [10,14,39,40]. Further, we have complemented the ΔAtsos3 mutant in arabidopsis with BjSOS3 and performed root-bending assays. Root-bending assays have shown that the BjSOS3-complemented lines as well as the ΔAtsos3 mutant seedlings grew relatively well and appeared healthy on MS agar media. However, on MS agar plates supplemented with 100 or 200 mM NaCl, the BjSOS3-complemented lines showed higher survival and more root growth after bending as compared to the ΔAtsos3 mutant (Fig. 3). The root-bending assay is a powerful method to measure the salinity sensitivity of arabidopsis sos mutants [7,8]. Shi et al. [9] complemented the sos5 arabidopsis mutant with the SOS5 gene isolated from the WT and observed similar functional regain and increased tolerance to salinity in root-bending experiments. We have further performed a salinity stress tolerance assay on MS agar plates supplemented with 100 mM NaCl, which showed higher plant vigor and K+ content in BjSOS3-complemented arabidopsis plants as compared to the ΔAtsos3 mutant. The complemented lines accumulated considerably less Na+ as compared to the ΔAtsos3 mutant (Fig. 4). These results clearly showed that BjSOS3 of Brassica juncea var. CS52 is capable of functionally complementing the ΔAtsos3 in arabidopsis. Our findings are in corroboration with the findings of many other researchers showing lower survival rates and reduced growth, with lower K+ content and higher Na+ accumulation, in the ΔAtsos3 mutant in Arabidopsis [8,9,41]. SOS3 is the first protein in the SOS pathway, and we investigated the expression of the other downstream SOS members, i.e. SOS1 and SOS2, in the ΔAtsos3 mutant under control conditions. A relatively lower constitutive transcript abundance of SOS1 was observed in the ΔAtsos3 mutant. However, almost the same level of SOS1 expression in the WT and the BjSOS3-complemented line was observed under control conditions (Fig. 5).
Our data are corroborated by similar findings showing very low constitutive expression of SOS1 in the shoot and root of the sos3-1 mutant under control conditions [9]. However, induction of SOS1 expression was reported under salinity stress in root tissue, whereas expression was unchanged in the shoots. SOS2 expression in the ΔAtsos3 mutant was equal in magnitude to that in the WT and complemented transgenic lines. In support of our expression analysis, a complementation study of SOS2 in the arabidopsis sos3-1 mutant has revealed low expression levels of SOS2 in the arabidopsis sos3-1 mutant and higher accumulation in complemented arabidopsis seedlings under control conditions [40]. Regaining of the original phenotype in the ΔAtsos3 mutant after complementation with BjSOS3 under ABA stress showed the functional compatibility of the Brassica SOS3 protein with the SOS machinery in arabidopsis. In Aeluropus lagopoides, salinity and ABA, individually or in combination or with the signaling molecule Ca2+, lead to higher transcript accumulation of SOS pathway genes, which supports our stress tolerance assay in arabidopsis seedlings [42]. Regaining of near-original phenotypes in BjSOS3-complemented arabidopsis may also be linked to the divergence of arabidopsis and Brassica from a common ancestor [17]. Supporting the divergence, comparative genetic mapping has also affirmed co-linear chromosome segments [18,19] in the family Brassicaceae and linkage arrangements between arabidopsis and Brassica oleracea [20]. In Populus, PtSOS3 complements the ΔAtsos3 mutant and restores the SOS pathway in arabidopsis [36]. Based on these studies, it can be concluded that SOS pathway components are functionally conserved in Brassica species and arabidopsis. We hope that this study will open up new avenues for understanding the salinity tolerance mechanisms operative in diverse plant species, as mediated via the SOS pathway. ETHICS APPROVAL AND CONSENT TO PARTICIPATE Not applicable. HUMAN AND ANIMAL RIGHTS No animals/humans were used for the studies that are the basis of this research. CONSENT FOR PUBLICATION Not applicable.
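As referenced in the qRT-PCR methods above, the relative expression values in this article were derived with the 2^-ΔΔCt method. The following Python sketch illustrates that calculation; the Ct values are hypothetical and serve only to show the arithmetic, with BjAct as the internal control and the untreated B. nigra sample as the calibrator, as in the study.

```python
# Minimal sketch of the 2^-ΔΔCt relative-quantification method. All Ct values
# below are hypothetical illustrations, not measured data from this study.

def delta_ct(ct_target, ct_reference):
    """ΔCt: target-gene Ct normalized to the internal control (e.g. actin)."""
    return ct_target - ct_reference


def fold_change(ct_target_sample, ct_actin_sample,
                ct_target_calibrator, ct_actin_calibrator):
    """Relative expression = 2^-ΔΔCt, where ΔΔCt = ΔCt(sample) - ΔCt(calibrator)."""
    ddct = (delta_ct(ct_target_sample, ct_actin_sample)
            - delta_ct(ct_target_calibrator, ct_actin_calibrator))
    return 2 ** (-ddct)


if __name__ == "__main__":
    # Hypothetical Ct values for a target gene and actin in a salt-treated
    # sample versus the untreated calibrator sample.
    fc = fold_change(ct_target_sample=24.1, ct_actin_sample=18.0,
                     ct_target_calibrator=26.5, ct_actin_calibrator=18.2)
    print(f"Relative expression (2^-ddCt): {fc:.2f}")  # about 4.6-fold here
```

Note that -ΔΔCt itself is the log2 fold change, which is why the bar graphs in the figures are plotted on that scale.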
Network Integration Analysis and Immune Infiltration Analysis Reveal Potential Biomarkers for Primary Open-Angle Glaucoma Primary open-angle glaucoma (POAG) is a progressive optic neuropathy and its damage to vision is irreversible. Therefore, early diagnosis assisted by biomarkers is essential. Although there have been multiple studies on the identification of POAG biomarkers, few studies systematically revealed the transcriptome dysregulation mechanism of POAG from the perspective of pre- and post-transcription of genes. Here, we have collected multiple sets of POAG aqueous humor (AH) tissue transcription profiles covering long non-coding RNA (lncRNA), mRNA and microRNA (miRNA). Through differential expression analysis, we identified thousands of significantly differentially expressed genes (DEGs) between the AH tissue of POAG and non-glaucoma. Further, the DEGs were used to construct a competing endogenous RNA (ceRNA) regulatory network, and 1,653 qualified lncRNA-miRNA-mRNA regulatory units were identified. Two ceRNA regulatory subnets were identified based on the random walk algorithm and revealed to be involved in the regulation of multiple complex diseases. At the pre-transcriptional regulation level, a transcriptional regulatory network was constructed and three transcription factors (FOS, ATF4, and RELB) were identified to regulate the expression of multiple genes and participate in the regulation of T cells. Moreover, we revealed the immune desert status of AH tissue in POAG patients based on immune infiltration analysis and identified that a specific AL590666.2-hsa-miR-339-5p-UROD axis can be used as a biomarker of POAG. Taken together, the identification of regulatory mechanisms and biomarkers will contribute to the individualized diagnosis and treatment of POAG. INTRODUCTION Glaucoma is the main cause of irreversible blindness, and it includes several subtypes such as primary, secondary, angle-closure glaucoma and open-angle glaucoma (Weinreb and Khaw, 2004; Youngblood et al., 2019). Among them, primary open-angle glaucoma (POAG) is the most common. The clinical manifestations of POAG include optic nerve damage and loss of retinal ganglion cells, and high blood pressure and increased intraocular pressure are risk factors for POAG. Since the symptoms of POAG appear at a relatively late stage and the blindness caused by it is irreversible, early diagnosis is necessary (Weinreb et al., 2014). The identification of biomarkers is helpful for the early diagnosis of POAG patients (Kokotas et al., 2012). Although there have been many studies on the identification of POAG biomarkers, limitations of the data have constrained the experimental results. For example, although Liu et al. found that hsa-miR-210-3p can be used as a biomarker of POAG from peripheral blood, the transcriptome also includes mRNA and long non-coding RNA (lncRNA), and that study did not reveal the pathogenesis of hsa-miR-210-3p. With the progress of scientific research, the function of non-coding RNA has been unveiled. LncRNA and miRNA, as the two main types of non-coding RNA, have been shown to play an important role in chromatin reprogramming (Anastasiadou et al., 2018) and in the regulation of gene transcription through the competing endogenous RNA (ceRNA) regulatory mechanism (Salmena et al., 2011). In the ceRNA network, mRNA and lncRNA act as miRNA sponges to participate in ceRNA regulation determined by miRNA response elements (MREs) (Zhang et al., 2021).
The ceRNA regulatory mechanism plays a role at the post-transcriptional level and is an important means of explaining the dysregulation of transcript expression in diseases. For POAG, the construction of a ceRNA regulatory network will assist in revealing its pathogenesis. Over the past decade, the immune microenvironment has been a hot area of biological research, which includes immune infiltration, antigen presentation, immune cell exhaustion and immune cell communication. The immune microenvironment is composed of a variety of immune cells, such as T cells, B cells and macrophages. Previous studies have suggested that the neuroinflammatory response in POAG patients is caused by a defective immune response (Vernazza et al., 2020). For example, M1 polarization of macrophages enhances the antigen presentation ability and tissue inflammatory response (Yunna et al., 2020). Therefore, it is necessary to characterize the immune landscape of POAG in order to reveal its neuroinflammatory response mechanism. In this study, we collected multiple sets of transcription profiles of aqueous humor (AH) tissues from POAG patients. Through the integrated analysis of the ceRNA competition network and the transcriptional regulatory network, we revealed the mechanism of the transcriptome dysregulation of POAG and the physiological functions that it affects. Immune infiltration analysis revealed the immune landscape of POAG. Additionally, potential biomarkers of POAG were identified based on machine learning algorithms. Data Acquisition and Pre-Processing The mRNA and long non-coding RNA (lncRNA) expression profiles of primary open-angle glaucoma (POAG) were downloaded from the Gene Expression Omnibus (GEO) database (accession number: GSE101727 (Xie et al., 2019), platform: GPL21827, Agilent-079487 Arraystar Human LncRNA microarray V4, Table 1). The raw annotation of GPL21827 provided only probe sequences and not gene symbols. Therefore, we mapped the sequences of GPL21827 probes to the human genome annotation file release GRCh37 in GENCODE (Harrow et al., 2012) using the R package "Rsubread" (Liao et al., 2019). Next, the average of standardized signal intensities was used to indicate mRNA/lncRNA expression intensity when multiple probes were mapped to the same mRNA/lncRNA. The miRNA expression profile was also obtained from the GEO database (accession number: GSE105269 (Drewry et al., 2018), platform: GPL24158, NanoString nCounter Human v3 miRNA Assay). The normalized miRNA expression matrix was used directly for the analyses (Table 1). Constructing the POAG-Associated Competing Endogenous RNA Network To identify the POAG-associated ceRNA relationships, we first recognized the candidate targets of DEmiRNAs based on the experimentally validated miRNA interaction relationships in lncACTdb v2.0 (http://www.bio-bigdata.net/LncACTdb/), miRTarBase v2020 (http://miRTarBase.cuhk.edu.cn/) (Chou et al., 2018), and starBase v3.0 (https://starbase.sysu.edu.cn/) (Li et al., 2014). Next, only DEmiRNA-DEmRNA/DElncRNA pairs with opposite expression trends in AH were retained (down-regulated miRNAs with up-regulated mRNAs/lncRNAs, or up-regulated miRNAs with down-regulated mRNAs/lncRNAs). Furthermore, we calculated the Spearman correlation coefficient (rho) between the expression levels of DElncRNAs and DEmRNAs. The raw p-values (P_r) were adjusted for multiple hypotheses using a permutation method. 
For each mRNA, the expression values were held fixed, and 1,000 random lncRNAs were used to perform the same Spearman's correlation test, generating a set of 1,000 permutation p-values (P_p). Finally, an empirical p-value (P_e) was computed using the following formula (which introduces a pseudo-count of 1): $P_e = \frac{\operatorname{num}(P_p \le P_r) + 1}{1001}$ (1). The mRNA-lncRNA pairs with rho > 0.6 and P_e < 0.01 were treated as the co-expressed mRNA-lncRNA pairs. Finally, we constructed the ceRNA triplet relationships in POAG by integrating the miRNA-mRNA/lncRNA pairs and the co-expressed mRNA-lncRNA pairs (Wang et al., 2015). The ceRNA network was visualized using Cytoscape (Shannon et al., 2003). Network-Based Prioritization of POAG-Related ceRNA Relationships To identify the hub nodes in our ceRNA network, we employed random walk with restart (Köhler et al., 2008). The POAG-related genes contained in DisGeNET (Piñero et al., 2020) were considered as the seed genes. The random walk was performed on the ceRNA network with a restart probability of 0.7 using the random walk function in the R package RWOAG (Köhler et al., 2008). The 30 nodes with the highest visitation probabilities were treated as the hub nodes of the network. The ceRNA triplets consisting of hub nodes were considered as the critical ceRNA relationships. Construction of Transcriptional Regulatory Network First, the immunosuppressive-related genes were collected from DisGeNET (Piñero et al., 2017) (http://www.disgenet.org) and HisgAtlas v1.0 (Liu et al., 2017) (http://biokb.ncpsb.org/HisgAtlas/). In addition, we searched for the keyword "immunosuppressive agents" in the DrugBank (Wishart et al., 2018) database (https://www.drugbank.ca/) and obtained 311 immunosuppressive-related drugs. In total, 1,332 immunosuppressant-related genes were obtained from the above three databases. Next, the immunosuppressive-related genes among the differentially expressed protein-coding genes (PCGs) were extracted for the construction of the transcriptional regulatory network. Moreover, the regulation data of transcription factors (TFs) and target genes for human were downloaded from the TRRUST v2.0 (Han et al., 2018) (https://www.grnpedia.org/trrust/) and ORTI (Vafaee et al., 2016) databases (http://orti.sydney.edu.au/about.html). The TF-target gene relationship pairs related to the immunosuppressive-related DEmRNAs were extracted. Further, the Spearman's correlation coefficient (rho) between the genes of each pair was calculated, and the cutoffs of the p-value and rho were set to 0.05 and 0.5. Then, we constructed the TF-target network using Cytoscape software. We then analyzed the topological properties of the network and extracted the top 3 genes by degree as key driving factors. Functional Enrichment Analysis To annotate the potential biological functions of differentially expressed genes and ceRNA triplets, we performed functional enrichment analysis on the mRNAs using Metascape (http://metascape.org/gp/index.html) (Zhou et al., 2019). For the mRNA list, pathway and process enrichment analysis was carried out with the following ontology sources: KEGG Pathway, GO Biological Processes, Reactome Gene Sets, and Canonical Pathways. Immune Infiltration Analysis The pre-processed expression matrix of PCGs for the GSE101727 series was used for immune infiltration analysis using CIBERSORT (Newman et al., 2015). CIBERSORT is a method to characterize the cellular composition of complex tissues from gene expression profiles. 
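The co-expression screening and the empirical p-value in Eq. (1) can be illustrated with a minimal Python sketch; it is not the authors' code, and the array names and shapes are assumptions chosen only for illustration.

```python
import numpy as np
from scipy.stats import spearmanr

def empirical_pvalue(mrna_expr, lncrna_expr, lncrna_matrix, n_perm=1000, seed=0):
    """Permutation-adjusted empirical p-value for one mRNA-lncRNA pair.

    mrna_expr, lncrna_expr : 1-D arrays of expression values across samples.
    lncrna_matrix          : 2-D array (n_lncRNAs x n_samples) used as the
                             pool of random lncRNAs (hypothetical input).
    """
    rho, p_raw = spearmanr(mrna_expr, lncrna_expr)

    rng = np.random.default_rng(seed)
    idx = rng.choice(lncrna_matrix.shape[0], size=n_perm, replace=True)
    # Repeat the same test against n_perm random lncRNAs, keeping the mRNA fixed.
    p_perm = np.array([spearmanr(mrna_expr, lncrna_matrix[i])[1] for i in idx])

    # Eq. (1): pseudo-count of 1 in numerator and denominator (1000 + 1 = 1001).
    p_emp = (np.sum(p_perm <= p_raw) + 1) / (n_perm + 1)
    return rho, p_emp

# A pair would be kept as co-expressed if rho > 0.6 and p_emp < 0.01.
```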
Statistical Analysis The ROC curves were generated using the R package pROC. The gene set enrichment analysis used Fisher's exact test. All statistical analyses were performed using R (v 3.6.2). Differential Expression Analysis Depicts the Transcriptional Features of POAG In the regulation of gene expression, transcription is an initial step and one of the most critical steps (Prieto and McStay, 2005). To explore the changes in gene expression of POAG patients at the transcriptional level, the limma algorithm was used to identify genes that were significantly differentially expressed in the AH of POAG compared to non-glaucoma samples. For the GSE101727 series, 789 mRNAs were recognized to be down-regulated and 1,487 mRNAs were recognized to be up-regulated in AH (Figure 1A). In addition to the expression of protein-coding genes (PCGs), the expression of non-coding genes, which have been proven to play an important role in cellular activities (Bridges et al., 2021), was also our focus. Further, there were 576 down-regulated and 614 up-regulated lncRNAs in AH (Figure 1B), which were identified in the GSE101727 series. Differentially expressed genes were used as features to distinguish non-glaucoma and POAG samples. We found that non-glaucoma samples and POAG samples can be distinguished and that there are significant differences between the two groups (Figure 1C), indicating that the identified DEmRNAs and DElncRNAs can be used as a signature of POAG patients. In the ceRNA competition mechanism, miRNA is a crucial part (Smillie et al., 2018). Therefore, we specifically collected the GSE105269 series data to identify DEmiRNAs in the AH of POAG. We found that 8 miRNAs (3 down-regulated and 5 up-regulated) were significantly differentially expressed in the AH of POAG (Figure 1D). Moreover, it is necessary to explore which physiological mechanisms these differentially expressed genes affect. The PCGs up-regulated in the AH of POAG were used for functional enrichment analysis with the Metascape tool. We found that the up-regulated PCGs are significantly enriched in protein synthesis and immune regulation (Figure 1E). Among them, the top enriched function is regulation of the expression of SLITs and ROBOs, which has been shown to be involved in the migration and positioning of neuronal precursor cells and the growth of neuronal axons (Tong et al., 2019). All these indicate that the neurons and immune microenvironment of AH in POAG patients have been altered. The ceRNA Network Reveals the Mechanism of Gene Expression Variations The ceRNA regulatory mechanism plays an important role in the post-transcriptional regulation of genes. Using the ceRNA network to reveal the regulatory mechanisms of differentially expressed genes in AH tissue is conducive to understanding the pathogenesis of POAG. Through the screening of genes involved in ceRNA regulation (see methods), we have identified 4 miRNAs that can bind to 13 lncRNAs and regulate their expression. Furthermore, the 4 miRNAs can regulate the expression of 333 mRNAs and then constitute 1,653 lncRNA-miRNA-mRNA regulatory units (Figure 2A). 
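Gene-set enrichment of this kind is often assessed with Fisher's exact test, as noted in the Statistical Analysis section above. The sketch below shows the corresponding 2x2 contingency test in Python; the pathway size, background size and overlap are hypothetical placeholders rather than values from this study, and the published analysis itself was run through Metascape.

```python
from scipy.stats import fisher_exact

# Hypothetical counts: overlap between a DEG list and one pathway gene set,
# against a background of all genes measured on the array.
n_background = 20000          # all annotated genes (placeholder)
n_degs = 1487                 # up-regulated PCGs (count taken from the text)
n_pathway = 300               # genes in the candidate pathway (placeholder)
n_overlap = 45                # DEGs that fall in the pathway (placeholder)

table = [
    [n_overlap,          n_pathway - n_overlap],
    [n_degs - n_overlap, n_background - n_degs - n_pathway + n_overlap],
]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, enrichment p = {p_value:.2e}")
```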
Through functional enrichment analysis, we found that the genes involved in ceRNA regulation are significantly enriched in cellular responses to stress, regulation of mRNA metabolic process and mRNA catabolic process (Figure 2B), indicating that the ceRNA mechanism in the AH tissue of POAG could affect specific physiological mechanisms by regulating gene expression. Further, the hub nodes in the ceRNA network were identified by the random walk with restart algorithm (see methods). The ceRNA triplets composed of hub nodes constitute two ceRNA subnets. The ceRNA subnet_1 consists of 7 lncRNAs, 2 miRNAs and 10 mRNAs (Figure 2C). Among them, hsa-miR-21-5p has been proven to play an important role in ceRNA regulation in multiple complex diseases (Xiong et al., 2020; Figure 2D). We found that FBN1, regulated by hsa-miR-339-5p, is the causative gene of a variety of genetic diseases, including fibrillinopathies such as Marfan syndrome (Sakai et al., 2016). Genetic polymorphisms of HSPA1B are closely related to disorders of neuroregulation (Bosnjak Kuharic et al., 2020). Taken together, these suggest that the ceRNA regulatory mechanism plays an important role in the AH tissue of POAG. Key Factors Driving the Progress of POAG TFs regulate the initiation and intensity of transcription of specific genes and are important driving factors in life activities (Lambert et al., 2018). To identify the driving factors that play an important role in POAG, a transcriptional regulatory network was constructed based on differentially expressed PCGs in the GSE101727 series. By combining previous research data and the correlation analysis of gene expression, 181 TF-target gene units were identified and the transcriptional regulatory network was constructed (Figure 3A). The network contained 73 TFs and 116 target genes. Further, functional enrichment analysis was used to explore the physiological mechanisms involved in this transcriptional regulatory network. We found that the genes in this transcriptional regulatory network are significantly enriched in the regulation of myeloid cell differentiation and cell proliferation (Figure 3B), indicating that the specific expression of TFs drives the expression of target genes and affects the immune microenvironment of POAG. Moreover, we identified the top 3 TFs by degree (FOS, ATF4, and RELB) as key drivers in the transcriptional regulatory network (Figures 3C-E). FOS and RELB were significantly down-regulated and ATF4 was significantly up-regulated (Figure 3D) in the AH of POAG. Studies have shown that RELB plays a key role in the development of T cells and controls the proliferation of T cells, indicating that changes in RELB expression may be related to variations of the immune microenvironment in POAG. Besides, we found that CDKN1A has the highest correlation with RELB at the transcript level (Figure 3F), and CDKN1A encodes a cyclin-dependent kinase inhibitor (El-Deiry, 2016), indicating that RELB may control the proliferation of T cells by regulating the expression of CDKN1A. All these indicate that there are several driving factors that play an important role in the changes in the physiological mechanisms of POAG. Immune Infiltration Characteristics of POAG The dynamics of the immune microenvironment are an important feature of the occurrence and development of diseases (Makowski et al., 2020). Identifying the immune characteristics helps enrich the exploration of the pathogenesis of POAG. 
Therefore, the CIBERSORT tool was used to calculate the immune cell composition of each AH sample from POAG and non-glaucoma individuals through a deconvolution algorithm. After preprocessing the immune cell fraction matrices, the consensus clustering algorithm was used to assess the distances between samples, including AH of POAG and non-glaucoma samples. We found that POAG and non-glaucoma individuals can be distinguished by immune cell components and that POAG individuals are in an immune desert state (Figure 4A). Further, statistical tests were used to analyze the differences in immune cell components between the AH of POAG and non-glaucoma samples. From the perspective of the fold changes of immune cell components, the CD8+ T cell, CD4+ memory T cell, monocyte, M1 macrophage and dendritic cell components of POAG individuals are significantly different from those of non-glaucoma individuals (Figure 4B). Besides, the Wilcoxon rank sum test was used to test the significance of differences in immune cell components between the two groups. We found that monocyte, γδ T cell, Treg, CD8+ T cell and memory B cell components are significantly different between POAG and non-glaucoma individuals (Figure 4C). Monocytes, as a type of myeloid cell, play an important role in antigen presentation, and their fraction was significantly down-regulated in POAG, which may be an important hallmark of POAG patients. POAG is a neurodegenerative disease and neuroinflammation occurs during its pathogenesis (Weinreb and Khaw, 2004; Evangelho et al., 2019), which may be related to the lack of Tregs. Taken together, these suggest that the AH of POAG is in an immune desert state and that the significant down-regulation of specific immune cell components can be used as a marker of POAG. Biomarkers of POAG The identification of biomarkers of POAG is helpful for its clinical diagnosis and treatment. Therefore, we collected the important genes in the ceRNA regulatory subnets and the driving factors identified above. For the two ceRNA regulatory subnets, the genes in each lncRNA-miRNA-mRNA unit were used as features to distinguish POAG from non-glaucoma individuals and the ROC curve was used to evaluate the stability of each feature. The AL590666.2-hsa-miR-339-5p-UROD axis was recognized to be able to stably distinguish between POAG and non-glaucoma individuals. Among them, UROD has the highest AUC value of 0.98 compared to 0.77 for AL590666.2 and 0.78 for hsa-miR-339-5p (Figures 5A-C). Uroporphyrinogen decarboxylase, encoded by UROD, is an important element in hemoglobin synthesis, and UROD is significantly up-regulated in POAG (Figure 5D). Studies have shown that patients with POAG have red blood cell backlog and high plasma specific viscosity (Wang et al., 2004; Xu et al., 2020), indicating that the upregulation of UROD may be an important cause of blood deformation in POAG patients. Further, we found that AL590666.2 and UROD have a strong correlation (Figure 5E), suggesting that AL590666.2 may be an important biomarker of POAG. For the top three TFs identified above, the AUC values of ATF4, FOS, and RELB were 0.91, 0.91, and 0.74, respectively (Figure 5F). All these suggest that these genes can be used as biomarkers of POAG for clinical diagnosis and treatment. 
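The single-gene ROC evaluation described above can be reproduced in outline with scikit-learn. This is an illustrative sketch only: the labels and expression values below are hypothetical placeholders rather than data from this study, and the published analysis used the R package pROC.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical example: expression of one candidate biomarker (e.g. UROD)
# across POAG (1) and non-glaucoma (0) samples.
labels = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
expression = np.array([8.2, 7.9, 8.5, 7.4, 8.0, 6.1, 6.5, 5.9, 6.8, 6.3])

auc = roc_auc_score(labels, expression)           # area under the ROC curve
fpr, tpr, thresholds = roc_curve(labels, expression)
print(f"AUC = {auc:.2f}")  # values near 1 indicate a stable discriminator
```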
DISCUSSION In this work, we have integrated multiple sets of transcript data (mRNA, lncRNA, miRNA) and revealed important functional subnets and driving factors of POAG through ceRNA competition network and transcriptional regulatory network analysis. Through statistical testing, thousands of genes (DEmRNAs, DElncRNAs, DEmiRNAs) differentially expressed in POAG's AH tissue have been identified. We have identified 1,653 lncRNA-miRNA-mRNA regulatory units and two functional subnets in AH tissue, which will help reveal the pathogenesis of POAG. Further, a transcriptional regulatory network was constructed based on differentially expressed genes and 3 TFs were recognized to play an important role in the transcriptome disorder of POAG's AH tissue. We have used the CIBERSORT tool and the transcription profile of AH tissue to reveal the immune landscape of POAG. We found that the components of immune cells in the AH tissue of POAG were globally down-regulated compared to non-glaucoma samples. Additionally, a ceRNA regulatory axis (AL590666.2-hsa-miR-339-5p-UROD) and 3 TFs (ATF4, FOS, and RELB) have been identified as potential biomarkers for POAG patients. POAG is the most common form of glaucoma, a disease of the optic nerve that causes irreversible blindness (Li et al., 2016). The identification of POAG biomarkers is the focus of many researchers' efforts. For example, hsa-miR-210-3p in peripheral blood was proposed as a biomarker of POAG based on a miRNA expression profile. However, polygenic dysregulation and interactions are the inherent causes of POAG. We have revealed the regulatory relationships of dysregulated genes and the pathogenesis of POAG through multi-network integration analysis. hsa-miR-21-5p and FBN1 were the key genes in the ceRNA regulatory subnets that we identified, and they play an important role in multiple complex diseases. For our identified driving factor ATF4, it has been confirmed in previous studies that it can cause glaucoma by promoting ER client protein load (Kasetti et al., 2020) and regulating trabecular meshwork remodeling (Zhao et al., 2020). In addition to common transcriptome analysis, Liu et al. identified F-box protein (FBOX) and vaccinia-associated kinase 2 (VRK2) that may interact with tumor protein p53 (TP53) to regulate apoptosis and play a negative role in POAG from the perspective of genetic lineage (Liu et al., 2012). Among them, FBOX and TP53 were target genes of the key TFs ATF4 and FOS identified in this work. Moreover, several potential biomarkers of POAG were revealed through integrated network analysis in this work. Among the identified protein-coding genes that are significantly differentially expressed in the AH tissue of POAG, RELB and CDKN1A formed an important transcriptional regulatory unit. RELB has been confirmed in previous studies to regulate the proliferation of T cells, and the expression of its target gene CDKN1A is closely related to the cell cycle (El-Deiry, 2016). Therefore, the down-regulation of CDKN1A expression mediated by RELB may be an important reason for the decreased level of T cell infiltration. Red blood cell backlog and high plasma specific viscosity are important physiological manifestations of POAG. This trait may be related to the up-regulation of UROD expression. 
CONCLUSION In conclusion, we integrated multi-network analysis to identify important functional subnets and driving factors, which will help advance research on the pathogenesis of POAG. Immune infiltration analysis and feature recognition revealed the immune desert state of AH tissue and identified biomarkers of POAG. Taken together, our research provides theoretical guidance for the clinical diagnosis and treatment of POAG. DATA AVAILABILITY STATEMENT Publicly available datasets were analyzed in this study. These data can be found here: GSE101727 and GSE105269 from the Gene Expression Omnibus (GEO) database. AUTHORS' CONTRIBUTIONS HS, XC, and LW conceived and designed the experiments. LW and TY performed analysis and wrote the manuscript. TY and XZ collected the data. All authors read and approved the final manuscript.
Validity and reliability of an online self-report 24-h dietary recall method (Intake24): a doubly labelled water study and repeated-measures analysis Online self-reported 24-h dietary recall systems promise increased feasibility of dietary assessment. Comparison against interviewer-led recalls established their convergent validity; however, reliability and criterion-validity information is lacking. The validity of energy intakes (EI) reported using Intake24, an online 24-h recall system, was assessed against concurrent measurement of total energy expenditure (TEE) using doubly labelled water in ninety-eight UK adults (40–65 years). Accuracy and precision of EI were assessed using correlation and Bland–Altman analysis. Test–retest reliability of energy and nutrient intakes was assessed using data from three further UK studies where participants (11–88 years) completed Intake24 at least four times; reliability was assessed using intra-class correlations (ICC). Compared with TEE, participants under-reported EI by 25 % (95 % limits of agreement −73 % to +68 %) in the first recall, 22 % (−61 % to +41 %) for the average of the first two, and 25 % (−60 % to +28 %) for the first three recalls. Correlations between EI and TEE were 0·31 (first), 0·47 (first two) and 0·39 (first three recalls), respectively. ICC for a single recall was 0·35 for EI and ranged from 0·31 for Fe to 0·43 for non-milk extrinsic sugars (NMES). Considering pairs of recalls (first two v. third and fourth recalls), ICC was 0·52 for EI and ranged from 0·37 for fat to 0·63 for NMES. EI reported with Intake24 was moderately correlated with objectively measured TEE and underestimated on average to the same extent as seen with interviewer-led 24-h recalls and estimated weight food diaries. Online 24-h recall systems may offer low-cost, low-burden alternatives for collecting dietary information. Information on dietary intakes of individuals and populations is important in determining diet-disease associations, identifying deficiencies and excesses of nutrients, and evaluating the impact of interventions. The majority of methods for assessing the diet of individuals involve an interview with trained personnel, manual coding of foods, calculation of portion sizes, and matching to food composition tables. Therefore, such methods tend to be costly and time-consuming. With traditional methods, such as the weighed food diary, issues of compliance and under-reporting of habitual energy intake (EI) (1,2), participant selection bias and recording bias (3) are a significant concern. Recent advances in technology and the ubiquity of Internet access in many countries have led to the development of web-based systems for collecting information on dietary intake remotely. These include online dietary 24-h recall systems (4,5), online food diaries (6) and online FFQ (7). These systems can be completed at a time and place convenient to the participant, without the need for a trained interviewer, and this may reduce the respondent burden and reduce barriers to participation. Intake24 is an online dietary recall system (https://intake24.co.uk/) which can be completed by participants remotely. Originally designed for use by people aged 11-24 years, it was subsequently extended for the general adult population and tested with people aged 11-88 years (8). The system is based on the multiple-pass 24-h recall (9) and contains a database of over 2500 foods linked to food composition codes (10). 
Versions are available for the UK, Portugal, Denmark, New Zealand and the United Arab Emirates, with versions for India and Australia under development. A series of food photographs are used for portion size estimation. These have previously been criterion-validated in a feeding study and also evaluated for convergent validity against weighed food diaries with children aged 18 months to 16 years and their parents (11,12) . Intake24 was developed through four cycles of user-testing and feedback (13) . Convergent validity testing of Intake24 against interviewer-led 24-h recalls found that the two methods yielded comparable estimates (14) ; however, the instrument has not yet been criterion-validated against objective measures of energy, nor has reliability been examined. The doubly labelled water (DLW) method is considered the reference standard to estimate free-living total energy expenditure (TEE) (15,16) ; one of its uses has been to validate dietary EI instruments. The underlying assumption is that if participants are in energy balance, then over a period of time, total EI should be equivalent to TEE (17) . These comparisons have led to the observation that underestimation of food intakes is a common problem in dietary surveys (3,18) . To establish the validity and reliability of the system for use in UK adults, we aimed to: (1) assess the validity of self-reported EI using Intake24 in a cohort of adults against concurrent objective measures of energy expenditure using DLW; and (2) test the reliability of estimates of energy and key nutrients using pooled data from studies for participants completing four or more recalls. Methods Validation of Intake24 reported energy intake against doubly labelled water measured energy expenditure Study population and recruitment. We recruited fifty men and fifty women across three age categories (40-49 years; 50-59 years; 60-65 years) across a wide range of BMI from the Fenland Study, an ongoing population-based study in the Cambridgeshire area, UK (19) . A sample size of 100 participants was recruited on a first-come, first-served basis when fulfilling age/sex/BMI category eligibility and asked to attend two clinic visits. This size of sample allows estimation of the 95 % CI about ยฑ0ยท34s (where s is the standard deviation of the differences between measurements by the two methods) (20) . Travel expenses were paid but participants did not otherwise receive any monetary incentive for taking part. Data collection for this component of the study was carried out between November 2015 and September 2016. See Supplementary Fig. S1 for the participant flow chart for the validation study. The present study was conducted according to the guidelines laid down in the Declaration of Helsinki. Ethical approval for the study was obtained from Cambridge University Human Biology Research Ethics Committee (reference no. HBREC/2015.16) and all participants provided written informed consent. Doubly labelled water administration. Participants attended their first clinic visit with a (baseline) urine sample collected at least 1 d prior to their visit (sample bottles provided in their appointment letter). During this visit, a second baseline (fasting) urine sample was collected. Participants were then asked to drink a body weight-specific dose of DLW (deuterium oxide-18; D 2 18 O) and collect daily (post-dose) urine samples for the next 9-10 d. The dose used was 174 mg/kg H 2 18 O and 70 mg/kg 2 H 2 O. 
(Oxygen 18 was supplied by Sercon Ltd; deuterium was supplied by Goss Scientific Instruments Ltd, the UK distributor for Cambridge Isotope Laboratories.) The method of Schoeller was followed which fixes the space ratio to a value of 1ยท03 16 . Participants were provided with labelled sampling bottles and a recording sheet and were instructed to collect one urine sample every day, at a similar time of day, at any time apart from the first void of the day. Participants were asked to record the date and time of each sample and keep the samples refrigerated until returning them at the second clinic visit following the free-living observation period. A final post-dose urine sample was obtained during the second clinic visit. All participants provided enough pre-and post-dose samples for calculation of TEE (see below). Height (cm) and weight (kg) were measured using standardised anthropometric procedures and BMI was calculated (kg/m 2 ). Intake24 administration. Participants were asked to complete Intake24 at least twice and ideally on three occasions during the DLW measurement period but the days on which to complete the recall were not specified. At the first clinic visit, each participant was issued with a unique username and password and provided with the URL (i.e. web address) with which they could access Intake24. If the participant had not completed at least two instances of Intake24 during the measurement period, or did not have Internet access, they were asked to complete Intake24 at the second clinic visit. Two participants did not complete Intake24 remotely. One of whom provided dietary data on paper at the second visit; these two individuals were excluded from this analysis. Doubly labelled water sample analysis. Urine samples were analysed, in duplicate, for 18 O enrichment using the CO 2 equilibration method of Roether (21) . Briefly, 0ยท5 ml of sample was transferred into 12 ml vials (Labco Ltd), flush-filled with 5 % CO 2 in N 2 gas and equilibrated overnight whilst agitated on rotators (Stuart, Bibby Scientific). Headspace of the samples was then analysed using a continuous flow isotope ratio mass spectrometer (AP2003; Analytical Precision Ltd). For 2 H enrichment, 0ยท4 ml of sample was flush-filled with H 2 gas and equilibrated over 6 h in the presence of a platinum catalyst. Headspace of the samples was then analysed using a dual-inlet isotope ratio mass spectrometer (Isoprime; GV Instruments). All samples were measured alongside secondary reference standards previously calibrated against the primary international standards Vienna-Standard Mean Ocean Water (vSMOW) and Vienna-Standard Light Antarctic Precipitate (vSLAP) (International Atomic Energy Agency). Sample enrichments were corrected for interference according to Craig (22) and expressed relative to vSMOW. Analytical precisions are better than ยฑ0ยท62 % for ฮด 18 O and ยฑ0ยท5 % for ฮด 2 H. Please see Supplementary Calculation S1 for full details. Data analysis. The method of Bland & Altman (20) was used to examine accuracy (mean bias) and precision (root mean square error and 95 % limits of agreement) of reported EI by Intake24 against TEE measured using DLW. The ratio of reported daily EI (based on the first 24-h recall, the mean of the first two 24-h recalls and the mean of the first three 24-h recalls) to energy expenditure was calculated. As the data were not normally distributed, they were log-transformed. 
We define absolute validity by the log-ratio (log(EI/TEE)), where a negative log-ratio represents under-reporting and a positive log-ratio indicates over-reporting of EI. The ratio of the arithmetic mean is also presented along with the geometric mean to allow comparison with previous studies. We also examined the correlation between reported EI and energy expenditure to quantify ability of the instrument to rank individuals. In addition, we examined the role of intraindividual intake variation in these correlation coefficients using data for participants who had reported at least 3 d, as described by Rimm et al. (23) . To assess whether the validity of Intake24 depended on demographic characteristics, we applied a mixed-effects model to account for multiple observations per individual, in which the dependent variable was the log-ratio, and the covariates included age, sex, height (in cm) and BMI. Assessment of Intake24 reliability Study population and recruitment. The repeatability of measures of EI and key nutrients was examined using datasets from three previous surveys. These were a comparison of Intake24 against interviewer-led recalls in 11-to 24-year-olds (survey 1; n 129) (14) , comparison of Intake24 against interviewer-led recalls in adults aged 24-68 years (survey 2; n 46) and a field test of Intake24 in the Scottish Health Survey population with people aged 11-88 years old (survey 3; n 133) (8) . See Supplementary Fig. S2 for the participant flow chart for the repeatability study. Only the initial mode of contact differed between the surveys. For survey 1, 11-to 16-year-olds were recruited from secondary schools in Dundee and Newcastle upon Tyne. The 17-to 24-year-olds were recruited by a recruitment agency who approached potential participants in the street. For survey 2, posters and leaflets were displayed in locations around Newcastle including the University campus, local shops, fitness centres and childcare facilities. Recruitment for survey 3 was conducted in collaboration with ScotCen Social Research; 1000 participants who had previously taken part in the Scottish Health Survey were sent an introductory letter and followed up by telephone. All participants were required to give written consent (written assent and parental consent were obtained for those under the age of 18 years) before participating in the research. Ethical approval for these surveys was granted by Newcastle University's Faculty of Medical Sciences Ethics Committee (reference no. 00706/2013, survey 1; no. 01018/2016, survey 2; no. 00875/2015, survey 3). Intake24 administration. For all three surveys included in the reliability analyses, participants were asked to complete Intake24 on 4 d over a 10 d period, including both week and weekend days. Participants were not aware of their scheduled days in advance but were sent an email on the day of each recall with the URL and log-in details asking them to complete a recall for the previous day's food intake. Data analysis. For individuals completing four or more recalls using Intake24, we assessed both test-retest reliability of a single recall and reliability of a single-repeat recall; the latter was done by comparing the average of the first two recalls (pair 1) against the average for the following two recalls (pair 2). For both methods, intra-class correlation coefficients (ICC) and their 95 % CI were calculated using a two-way mixed-effects model for absolute agreement; this included evaluation of the influence of age and sex on reliability. 
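As a minimal illustration of the agreement analysis described above (geometric mean bias and 95 % limits of agreement on the log scale), the following Python sketch computes these quantities from paired EI and TEE values. The arrays are hypothetical placeholders rather than study data; the published analyses were run in dedicated statistical software.

```python
import numpy as np

# Hypothetical paired values (kJ/d): reported energy intake and DLW-measured
# total energy expenditure for the same participants.
ei  = np.array([8500., 9200., 7100., 10400., 6900., 11800., 9600., 8800.])
tee = np.array([11200., 11900., 10100., 12500., 9800., 12300., 11500., 11700.])

log_ratio = np.log(ei / tee)          # negative values indicate under-reporting
bias = log_ratio.mean()
sd = log_ratio.std(ddof=1)

geometric_mean_ratio = np.exp(bias)   # e.g. 0.75 would mean 25 % under-reporting
lower, upper = np.exp(bias - 2 * sd), np.exp(bias + 2 * sd)  # 95 % limits of agreement

print(f"EI/TEE geometric mean = {geometric_mean_ratio:.2f} "
      f"(95 % limits of agreement {lower:.2f} to {upper:.2f})")
```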
Reliability is classified as poor, moderate, good or excellent based on the CI of the ICC as recommended by Koo & Li (24) . In addition, the method of Bland & Altman was used to assess agreement (with 95 % limits) between the first and second recall, and between the average of first two recalls and the average of following two recalls. The ratios of reported intakes were calculated for energy and key nutrients. As the data were not normally distributed they were log-transformed. The values presented are the ratios of the geometric mean. All analyses were conducted with SPSS (SPSS Statistics for Windows, version 22.0; IBM Corp.) or R (v3.4.3) (25) statistical software packages. P values were considered statistically significant at the ฮฑ = 0ยท05 level. Results Validation of Intake24 against doubly labelled water measures of total energy expenditure A total of ninety-eight participants (fifty women and forty-eight men) completed at least one 24 h recall (Table 1). Demographic data for participants completing two recalls and three recalls varied only slightly. Participants ranged in age from 40 to 65 years, and had a mean BMI of 26ยท6 (range 20-37) kg/m 2 , with no significant change in weight over the recording period. The mean weight change for the participants was +0ยท09 (SD 0ยท80) kg, eight participants lost between 1 and 1ยท5 kg while twelve participants gained between 1 and 2ยท2 kg. The remaining seventy-five participants had a weight difference of less than 1 kg between the beginning and the end of the data collection period. The mean of EI from three recalls and DLW-based TEE were 9240 (SD 4008ยท5) and 11 670 (SD 2279ยท8) kJ/d, respectively, indicating under-estimation of self-reported EI by 25 % and almost twofold greater variation of self-reported EI in the population (SD 4008ยท5 v. 2279ยท8 kJ/d). Although reporting accuracy of the population averages did not appear to change markedly with increasing number of days recalled, the precision, as evidenced by the width of the limits of agreement (Fig. 1), improved with number of recalls. The Bland-Altman plots (Fig. 1) show a range of underestimation and over-estimation of EI reported using Intake24 amongst the individuals in this validation study. There is some evidence of systematic bias with an increased tendency to under-report at lower levels of intake and to overreport at higher levels of intake when EI based on a single recall is considered ( Fig. 1(a)), but this pattern is no longer apparent when the mean of three recalls is used (Fig. 1(c)). The mixed-effects model indicated no significant pattern for under-or over-reporting across BMI and sex (Table 2 and Supplementary Table S1). Age was positively associated with EI/TEE, indicating that older participants tended to underreport to a lesser extent. On average people of 40 years of age were found to under-report their EI by 42ยท6 % whereas in people of 60 years of age EI was under-reported by 18ยท7 %. Scatterplots showing EI against TEE, on both original and log scales, are provided in Supplementary Fig. S3. The correlations were 0ยท31, 0ยท42 and 0ยท31 for the first, first two and first three recalls (Table 2), respectively, and generally stronger in normal-weight individuals (0ยท69 for first two recalls) than in over-weight (0ยท23) and obese (0ยท17). 
The deattenuated correlation coefficients after log-transformation are 0·31 for the first recall, 0·47 for the first two recalls and 0·39 for the first three recalls, showing slight improvement after accounting for intra-individual variation. Assessment of Intake24 reliability As data for the reliability analysis are pooled from several separate studies, the numbers of participants in each age and sex group were not balanced (Table 3). For most nutrients, considering the mean of a pair of recalls increased the reliability compared with a single recall administration. Pairs of two recalls produced similar population averages for energy and the macronutrients, as evidenced by mean ratios ranging from 0·99 to 1·10. Slightly poorer reliability was seen for non-milk extrinsic sugars (NMES), alcohol and vitamin C. The limits of agreement were wider for those nutrients for which intakes tend to vary more day to day. The very large limits of agreement for alcohol were due to the fact that most recall days did not include any alcohol and most participants drank on only one of the four recall days, if at all (Table 4). Summaries of each nutrient for each age group and for all participants are given in Supplementary Table S2. The columns 'lower' and 'upper' refer to the lower and upper quartiles, respectively, of the corresponding nutrient. Supplementary Table S3 shows how agreement varied by age group. The pairs of 24 h recalls gave values within 10 % of each other for energy and macronutrients, with the exception of alcohol where there was an 8 % difference for the 11- to 16-year-old group ranging up to a 76 % difference for the 16- to 24-year-old group. ICC showed poor to moderate agreement in nutrient intakes. ICC for repeatability of a single recall are lower than for two recalls considered together, indicating large intra-individual variation. For example, for reported EI, ICC for a single recall was 0·347 and this increased to 0·516 when the repeatability of 2 d of recall was considered (Table 4). Alcohol is not included in the ICC analysis due to the large numbers of non-consumers; 86 % of the study population did not consume alcohol during the recording period. Splitting the data by age group had little influence on the ICC. Supplementary Tables S4 and S5 report the ICC by age group, for a single recall and paired recalls, respectively, according to the model with sex as the only covariate, with 95 % CI, for each nutrient and each age group. ICC show poor to moderate agreement between single recalls for the majority of nutrients, with slight improvement when paired recalls are considered. Repeatability was best in the 65 years and over age group where ICC for intakes of NMES and Fe were moderate to excellent (0·863 and 0·857, respectively) and good to excellent (0·526 and 0·530, respectively), for paired recalls and single recalls, respectively. Discussion In comparison with TEE measured in a cohort of UK adults aged 40-65 years, over 10-14 d using DLW, self-reported EI by Intake24 was underestimated by 25 % on average. (Table 2 notes: TEE, total energy expenditure; REI, reported energy intake; EI, energy intake; data are nested with respect to number of recalls (first recall results for everyone, first two recall results for everyone with at least two recalls, and so on); the ratio is the reported mean daily energy intake divided by the TEE measured by doubly labelled water, with 1 indicating exact agreement, <1 underestimation and >1 overestimation; limits derived from ±2 SD of log-transformed ratios; P = 0·11 for the association of BMI with the ratio of reported EI to TEE, P = 0·91 for sex and P = 0·003 for age.)
The level of under-reporting was similar for men and women but was found to vary significantly with age, with older people tending to under-report to a lesser extent. Comparing the reported EI from a single recall with that from 2 or 3 d of recall, accuracy did not improve markedly with an increased number of days; however, the precision of estimates did improve, particularly with the second recall. Although accuracy of reporting improved with age, the intra-individual variation in underestimation was constant across age groups. There was some evidence of systematic bias, with an increased tendency to under-report at lower levels of intake and to over-report at higher levels of intake, when reported EI from a single recall is considered, but this pattern disappeared when the mean of three recalls was used. This may be indicative of the day-to-day variation in EI and the need to collect data on multiple days. Under-reporting of habitual EI may be due to under-reporting of food intakes during the recording period, under-eating during the recording period or a combination of the two. This has been examined using covert observation of individuals recording their food intake, where a reduction in EI (under-eating) of 8 % in men and 3 % in women and under-reporting of 9 % by men and 12 % by women combined to give an under-estimate of around 15 % of EI (26). Average levels of under-reporting of total EI using Intake24 were similar to traditional dietary assessment methods implemented in surveys in the UK, the USA and elsewhere. The UK National Diet and Nutrition Survey Rolling Programme collects dietary data using a 4-d estimated weight food diary with interview. EI reported using this method was validated in a sub-sample of the population aged 4 years and over (n 371) against TEE assessed using DLW. EI was underestimated on average across all age groups. The lowest levels of under-reporting were seen in the 4- to 10-year-old group where EI was under-estimated by 22 % on average. For participants aged 16 years and over, mean under-estimates of EI ranged from 25 to 36 % (27). Lopes et al. (28) compared EI estimated by three interviewer-administered 24-h dietary recalls with TEE measured by DLW in eighty-three adults aged 20-60 years in Brazil. They found EI to be under-estimated by 23 % in men and by 40 % in women. A pooled analysis of five validation studies comparing 24-h dietary recalls with TEE measured by DLW found EI to be under-reported by 15 % on average, ranging from an under-estimate of 28 to 6 % for individual studies (29). Few studies have reported the validity of EI assessed using online dietary systems against DLW-measured TEE. Reported EI based on six 24-h recalls completed using the web-based system DietDay was validated against DLW in 233 adults aged 21-69 years and EI was found to be under-reported by 10 % on average (30). Comparison of a 4-d web-based food record with DLW-measured TEE in forty middle-aged adults in Sweden found that men under-reported EI by 24 % on average whereas women under-reported by 16 % (31). EI reported using the online dietary recall system ASA24® (5) was compared against DLW-measured TEE in older adults (mean age 62 years for women and 64 years for men) (32). 
The average under-estimation of EI was 17 % in men (n 485) and 15 % in women (n 472), comparable with the 18·7 % under-estimation in our sample of 60-year-olds. Validation of EI against TEE using DLW assumes that participants are in energy balance. TEE is measured over a relatively short period, often 10-14 d, and the weight change from a 500 kcal/d (2092 kJ/d) deficit over this period would only be around 500 g. Given the proportion of the population who are over-weight or obese, many participants may be making efforts to reduce their EI. Therefore, an EI:TEE ratio lower than 1·0 could represent accurate EI estimates to some extent. In a study of 627 adults aged 50-70 years, the reproducibility of a single dietary recall reported using ASA24® was low, with ICC for energy and protein of 0·28 and 0·25, respectively (33), slightly lower than the repeatability of a single recall using Intake24 (0·347 and 0·344). This difference may be due to the longer time between recalls in the ASA24® study where recalls were repeated at 3-month intervals. Assessing the reliability of measures of EI and nutrients via repeated 24-h recalls is complicated by the genuine day-to-day variation in individual food intakes. A way to address this is to collapse results of multiple recalls into pairs and test their reliability. Results of the Bland-Altman analysis showed good agreement between recalls 1 and 2 and recalls 3 and 4 for energy and macronutrients, but greater variability for alcohol and NMES. This may reflect greater day-to-day variability in intake of these nutrients, indicating that more than two recalls are required for accurate estimation of usual intakes for some nutrients (34). Better reliability of intakes reported using Intake24 was observed in men and those aged 64 years and over, possibly suggesting less day-to-day variability in these individuals' diets. Study limitations and strengths We have conducted a detailed analysis of misreporting of EI including how this varies by age, sex and BMI; however, the findings are not generalisable to people of ethnic minority groups, or from different socio-economic backgrounds, for whom the extent of misreporting may differ. Although the sample size for the DLW validation of Intake24 is relatively large for such studies, there was no consistency in the range of week and weekend days for which participants completed their recalls. The small number of days recalled is also a limitation but may reflect common choices in study designs. Energy expenditure was assessed over a 10-14 d period; for logistic reasons participants were free to choose the days to complete Intake24 and so may have avoided completing recalls on days they considered their intake to be unhealthy or too complicated to report, which would be likely to be days that EI was high. TEE estimated using DLW requires a number of assumptions and inferences. These include that the individual is weight stable and that the levels of background isotope intake remain constant (35,36). In this study, two pre-dose samples were taken which will reduce the associated error assigned to the variation in natural abundance (36). 
Furthermore, as DLW only measures CO 2 production and not directly O 2 consumption, some knowledge of the energy equivalent of CO 2 is needed for TEE estimation. This can be highly variable and macronutrient dependent. In the absence of a measurement of the respiratory quotient (RQ) which allows determining macronutrient oxidation, RQ was assumed to be 0ยท85, it being the average RQ of a standard Western diet. The RQ based on the dietary intake reported by our participants was 0ยท849 on average (SD 0ยท021) and given the known issues with under-reporting of food intakes rather than make any assumptions around the nutritional composition of 'missing foods' the fixed RQ was used. In total, the error associated with our calculations (2ยท04 ยฑ 0ยท76 %) is well within the 2-8 % error deemed acceptable using the DLW method (37) . Assessment of the repeatability of any short-term measure of dietary intake is complicated by the true day-to-day variation in individual intake. In our study we did not directly determine how much of the variation would be due to measurement error or true variability of food intakes. What the reliability results do indicate is the degree to which 2 d of recall is sufficient to obtain an estimate of habitual intake for a particular nutrient. At the population level, reported intakes of energy and macronutrients from one pair of non-consecutive 24 h recalls were within 10 % of those reported in a further pair of recalls completed by the same individual. At the individual level, however, there is much greater variation as evidenced by the wide limits of agreement and low ICC. However, week and weekend days were not balanced across the recall pairs, and therefore reliability estimates may be slightly attenuated for this reason. The repeatability analysis is conducted on pooled data from three studies that covered different age groups; while this is a strength in terms of generalisability of results, this also increases between-individual variation by which pooled ICC may be overestimated. Conclusions Under-reporting of EI is a consistent finding when using dietary assessment methods which rely on self-reports of food and drink intake. We report that EI reported using Intake24 were under-estimated by around 25 % compared with TEE and were only weakly correlated. From the reliability study, our findings indicate that 2 d of recalls using Intake24 are sufficient for the assessment of habitual intake of energy and macronutrients at the group level. More days are likely to be required for food components where day-to-day variation is greater, especially alcohol. The underestimation of EI using Intake24 is comparable with more intensive methods of data collection such as interviewer-led 24 h recalls or estimated food diaries. As data are collected remotely, without the need for trained interviewers, and participants can complete recalls at a time convenient for them, the system offers a reduced cost and burden alternative for collecting dietary intake information. Future work should focus on whether the validity of self-reported methods such as Intake24 can be improved by combining these with image capture-based methods (38)(39)(40) and/or mathematical modelling (41,42) . Supplementary material The supplementary material for this article can be found at https://doi.org/10.1017/jns.2019.20.
Characteristics of Gaseous/Liquid Hydrocarbon Adsorption Based on Numerical Simulation and Experimental Testing Hydrocarbon vapor adsorption experiments (HVAs) are one of the most prevalent methods used to evaluate the proportion of adsorbed state oil, critical in understanding the recoverable resources of shale oil. HVAs have some limitations, which cannot be directly used to evaluate the proportion of adsorbed state oil. The proportion of adsorbed state oil from HVA is always smaller than that in shale oil reservoirs, which is caused by the difference in adsorption characteristics of liquid and gaseous hydrocarbons. The results of HVA need to be corrected. In this paper, HVA was conducted with kaolinite, an important component of shale. A new method is reported here to evaluate the proportion of adsorbed state oil. Molecular dynamics simulations (MDs) of gaseous/liquid hydrocarbons with the same temperature and pressure as the HVAs were used as a reference to reveal the errors in the HVAs evaluation from the molecular scale. We determine the amount of free state of hydrocarbons by HVAs, and then calculate the proportion of adsorbed state oil by the liquid hydrocarbon MD simulation under the same conditions. The results show that gaseous hydrocarbons adsorptions are monolayer at low relative pressures and bilayer at high relative pressures. The liquid hydrocarbons adsorption is multilayer adsorption. The adsorption capacity of liquid hydrocarbons is over 2.7 times higher than gaseous hydrocarbons. The new method will be more effective and accurate to evaluate the proportion of adsorbed state oil. Introduction Shale oil is crude oil impregnated into the layers of shale rock, silt, and impermeable mudstone [1]. It will, therefore, exist either in an adsorbed or free mobile state [2]. Adsorbed oil is found on the surface of organic matter and minerals, while free oil is mainly found in the center of pores and fractures, and is not affected by any restraints, existing in a complete bulk liquid state [3]. The free oil does not experience the effects of surfaces, remaining highly mobile and can be recoverable through natural elastic energy from fracturing [4]. The low-porosity and low-permeability of shales, combined with the high density and viscosity of the oil, means poor flow, and, as a result, low recovery of oil [5]. Therefore, the effectiveness of shale oil extraction is not dependent on the total amount of shale oil, but rather on the amount of moveable fraction. Theoretically, the maximum amount of movable oil is equivalent of the amount in the free state. Hence, the evaluation and prediction of proportion of adsorbed vs. free oil states is critical. Currently, six methods were used to Although the interaction of hydrocarbon molecules with pores and surfaces coated with kerogen-like materials has been more extensively studied, conceptually shale consists of two parts: organic matter (kerogen) and inorganic matter (minerals). The inorganic part of shale mainly contains quartz, calcite, feldspar, and clay minerals. Each kind of mineral makes up a certain volume fraction of a lacustrine/marine shale and plays an important role in shale systems through presenting intra-and interparticle pore networks that may hold hydrocarbons. Studying the interface of inorganic pores with oil is challenging. Kaolinite often forms surface coatings in the inorganic pores of shale reservoirs, as well as forming pore filling aggregates and presents interparticle and intraparticle pore surfaces [20]. 
Kaolinite has unique physicochemical properties. Unlike smectite clays, it is non-swelling, but has both a hydrophobic siloxane surface and a hydrophilic aluminol surface. Studying the adsorption of hydrocarbons on kaolinite surfaces will further assist in the understanding of the adsorption characteristics on both polar and non-polar surfaces. Therefore, kaolinite was chosen as the object of study. This paper seeks to evaluate the proportion of adsorbed state shale oil by HVA and MD. In Section 2.1, the information on the samples and the process of HVA testing are introduced. In Section 2.2, the workflows of the MD simulations of gaseous/liquid hydrocarbons are shown. In Section 3.1, the adsorption characteristics of gaseous and liquid hydrocarbons are compared. In Section 3.2, the results of the HVA experiments are interpreted at the molecular level. A correction method for HVA is proposed for the evaluation of the proportion of adsorbed state shale oil. The kaolinite samples were milled into powder particles through 40-60 mesh (250-425 µm) by an agate mortar. The low-temperature nitrogen adsorption/desorption (LT-N2A/D) isotherms were measured over relative pressures ranging from approximately 10^-5 to 0.995 using an Autosorb-iQ-Station-1 instrument at 77 K to obtain the pore size distributions and specific surface areas of the kaolinite samples [29,30]. Figure 1 shows the pore volume distribution determined by the Barrett-Joyner-Halenda (BJH) method [31] (with total pore volume of 0.136 cm3/g) and the surface area distribution determined by BJH with a peak of 12.27 m2/g, which is in agreement with the reported 13.1 m2/g determined by BET [32]. Hydrocarbon Vapor Adsorption Methodology The hydrocarbon vapor adsorption experiments were performed with the 3H-2000 PW multi-station weight method vapor adsorption instrument [33,34] (from Bayside Instrument Technology Co., Ltd., Beijing, China). The instrument includes the evacuation system, constant temperature system, measurement chamber, liquid distillation, and purification system (see schematic in Figure 2). High purity sorbent extraction.
We connected tube A, containing n-pentane, and tube B to the liquid distillation and purification system. We then connected tube A to the evacuation system and kept heating distillation tube A to remove the low-boiling-point impurities. After removing the low-boiling-point impurities, we connected tube A to tube B, heated tube A to evaporate the adsorbate to a vapor state, and then condensed it in tube B under a liquid nitrogen environment. Distillation tube A, which still contained a small amount of adsorbate reagent (mostly high-boiling-point impurities), was then replaced with a clean distillation tube A. Finally, by repeated distillation between tubes A and B, a high-purity adsorbate was obtained in distillation tube A.

Removal of gas impurities from the sample and the device. After loading the sample into the sample tube, the sample tube was heated to 110 °C. The adsorbates such as air, water, and hydrocarbons initially present in the pores of the device and the sample were removed by the evacuation system.

Buoyancy correction by the helium method. At 313 K, helium at different pressures is passed into the test chamber. Assuming helium to be a non-adsorbing gas, the observed change in the sample mass as a function of pressure, P, can be attributed solely to buoyancy [35], where V_c is the collective volume, m_m is the measured mass, R is the gas constant, M is the molar mass of helium, Z is the compressibility factor, ρ_g is the change in helium density, and m_a^ex is the excess adsorption amount.

Measurement of the adsorption isotherm. The temperature of the experiment was set to 313 K by the thermostat control system. The pressure was then increased gradually, and the weight change of the sample before and after adsorption at each relative pressure P/P0 was measured with a microbalance. The isothermal adsorption curves were obtained by recording these data.

Data Analysis
The saturation vapor pressure (P0) of n-pentane is calculated with the Clausius-Clapeyron equation, which gives the vapor pressure of a liquid at different temperatures when the enthalpy of vaporization and the vapor pressure at a reference temperature are known. In two-point form, if the vapor pressure is P1 at temperature T1 and P2 at temperature T2, then ln(P2/P1) = -(L/R)(1/T2 - 1/T1), where L is the specific latent heat (enthalpy of vaporization) and R is the gas constant.

Molecular Structures
Kaolinite has very few isomorphic substitutions; therefore, for this study the unit cell was taken as Al2Si2O5(OH)4 (Figure 3a). The atomic positions of the initial crystal were taken from the American Mineralogist Crystal Structure Database [36], with cell dimensions of 0.5148 nm × 0.892 nm × 0.63 nm and angles of 90°, 100°, and 90° [37]. The kaolinite mineral slab was composed of three stacked layers, each periodic in the xy-plane and comprising 12 × 7 unit cells. The total mineral model is therefore made up of 252 (12 × 7 × 3) unit cells, creating a slab with dimensions of 6.178 nm × 6.244 nm × 1.89 nm (Figure 3b). In the simulation setup, the kaolinite slab is placed in the region 0 < z < 1.9 nm, which fluctuates slightly during the simulation. The pore region above the mineral surface is set to 8 nm, resulting in a simulation box of 6.178 nm × 6.244 nm × 9.9 nm.
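As a concrete illustration of the Data Analysis step above, the following minimal Python sketch applies the two-point Clausius-Clapeyron form to n-pentane at the experimental temperature. The reference point (normal boiling point) and enthalpy of vaporization are illustrative literature-style values assumed here, not values taken from the paper.

```python
import math

# Two-point Clausius-Clapeyron: ln(P2/P1) = -(L/R) * (1/T2 - 1/T1)
R = 8.314                    # gas constant, J/(mol K)
L = 25.8e3                   # assumed enthalpy of vaporization of n-pentane, J/mol (illustrative)
T1, P1 = 309.2, 101.325e3    # assumed reference point: normal boiling point of n-pentane (K, Pa)
T2 = 313.0                   # experimental temperature, K

P2 = P1 * math.exp(-(L / R) * (1.0 / T2 - 1.0 / T1))
print(f"Estimated saturation vapor pressure P0 of n-pentane at {T2} K: {P2 / 1e3:.1f} kPa")
```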
This slit-pore setup is representative of the nanoscale pores identified in shale reservoirs, where pores of 8-20 nm occupy a considerable proportion (21.5-47.9%) [20,38]. As in the experiment, n-pentane (n-C5H12) was used to perform both the gaseous and the liquid hydrocarbon adsorption simulations. In the gaseous hydrocarbon adsorption simulations, the model was loaded with 10, 25, 50, 100, and 200 molecules of n-pentane at a fixed pore volume, in order to represent different relative pressures of the system. For the liquid hydrocarbon adsorption simulations, 2000 molecules of n-pentane were loaded into the pore, and the volume was then allowed to adjust so as to maintain a pressure of 100 bar (the calculation process is described in [39]); the density of 2000 n-pentane molecules in the box equals the density of n-pentane at 100 bar. The Packmol program [40] was used to insert the n-pentane molecules into the kaolinite pore model. The initial model is shown in Figure 3.

Force Field Parameters
Kaolinite was modelled with the ClayFF force field [41] and n-pentane with the CHARMM36 force field [42], with parameters assigned by CGenFF [43]. Both force fields use the Lorentz-Berthelot mixing rules for the non-bonded interactions. Previous studies have confirmed the reliability of using these two force fields together [4], with results that are consistent not only with ab initio molecular simulations but also with X-ray diffraction experiments [44].

Simulation Protocol
The simulations were carried out using the GROMACS 4.6.7 engine [45]. First, every system was energy minimized using the steepest descent algorithm, with the convergence criterion that the maximum force on any atom be less than 100 kJ/mol/nm. All simulations were performed using particle mesh Ewald (PME) electrostatics, and the van der Waals cut-off was set to 1.4 nm. The simulation time step was 1 fs, and all bonds involving hydrogen were constrained.
The temperature of the MD simulations was kept at the experimental value of 313 K using a velocity-rescale thermostat with a temperature coupling constant of 0.1 ps. The gas adsorption simulations were carried out in the canonical (NVT) ensemble, so that the volume of the interlayer remained constant; the systems were equilibrated for 0.5 ns, after which a production run of 30 ns was performed. The liquid adsorption simulations were performed in the isothermal-isobaric (NPT) ensemble. The pressure of 100 bar was applied semi-isotropically, decoupling the xy-plane from the pore space in the z-direction, and was coupled with a time constant of 1 ps using the Berendsen barostat. After 0.5 ns of equilibration, a production run of 30 ns was performed.

Data Analysis
The trajectory from the last 10 ns of the production run was used for analysis. The linear mass density, ρ(z), was computed using the GROMACS tools [46]. The density is defined as a mass, m, per volume; for a linear density, it is sampled in 1000 equal windows along the z-coordinate. Each window has a volume, V_window, determined by the window length, ΔZ, and the surface area, A_surf, in the xy-direction. A_surf can be calculated jointly with the open-source wavefunction software Multiwfn and the molecular visualization software VMD [47,48].
The density profile is calculated as an average over the 10 ns trajectory; the linear density of the pentane system is shown in Figure 4. The adsorption capacity, C_Ads, is defined as the adsorbed mass, m_ads, per unit area, A_surf (here the kaolinite surface area): C_Ads = m_ads / A_surf. The adsorbed mass is calculated from the linear mass density of the adsorbed layers, ρ_Ads, spanning the length L between z = L1 and z = L2, i.e., m_ads = A_surf ∫_{L1}^{L2} ρ_Ads(z) dz. The free-phase density, ρ_gas, is determined as the average of the linear density over the gaseous region, G, spanning from z = G1 to z = G2. The adsorption density profile of n-pentane in the gaseous adsorption model can thus be obtained from Equations (3) and (4) (Figure 4), and the average free-phase density can be determined. The relative pressure corresponding to the free-phase density of n-pentane at 313 K can be found in the NIST database (Figure 5); from this, the relative pressures can be calculated for the models loaded with various numbers of molecules.

Comparison of Adsorption Characteristics of Gaseous and Liquid Hydrocarbons
The adsorption densities of the n-pentane systems can be obtained from the partial density profiles, as shown in Figure 6. Table 2 gives the obtained values for the densities of the adsorbed layers and the bulk, together with the associated relative pressures of the systems.
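To make the density-profile analysis described above concrete, here is a minimal NumPy sketch (an illustration, not the authors' actual script). It bins atom z-coordinates into 1000 windows to obtain ρ(z), integrates ρ(z) over an assumed adsorbed-layer interval [L1, L2] to get the adsorption capacity per unit area, and averages it over an assumed gas region [G1, G2] for the free-phase density. The coordinate/mass arrays and the interval bounds are placeholders.

```python
import numpy as np

def linear_density(z, masses, box_z, a_surf, n_windows=1000):
    """Mass density profile rho(z) in kg/m^3 from atom z-coordinates (nm) and masses (g/mol)."""
    NA = 6.02214076e23
    dz = box_z / n_windows                                   # window length, nm
    hist, edges = np.histogram(z, bins=n_windows, range=(0.0, box_z),
                               weights=masses / NA * 1e-3)   # kg of mass falling in each window
    v_window = a_surf * dz * 1e-27                           # window volume, nm^3 -> m^3
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist / v_window

# --- illustrative use with placeholder data ---------------------------------
rng = np.random.default_rng(0)
box_z, a_surf = 9.9, 6.178 * 6.244        # nm and nm^2, from the model geometry
z = rng.uniform(1.9, box_z, size=5000)    # fake pentane atom z-coordinates, nm
masses = np.full_like(z, 72.15 / 17.0)    # crude per-atom share of the pentane molar mass, g/mol

centers, rho = linear_density(z, masses, box_z, a_surf)

L1, L2 = 1.9, 2.6                         # assumed adsorbed-layer bounds, nm
G1, G2 = 5.0, 9.0                         # assumed free (gas) region bounds, nm
dz = centers[1] - centers[0]
c_ads = rho[(centers >= L1) & (centers <= L2)].sum() * dz * 1e-3   # kg/m^3 * nm -> mg/m^2
rho_gas = rho[(centers >= G1) & (centers <= G2)].mean()            # kg/m^3
print(f"C_Ads ~ {c_ads:.3f} mg/m^2, free-phase density ~ {rho_gas:.1f} kg/m^3")
```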
It can be seen in Figure 6 that gaseous hydrocarbon first adsorbs as a monolayer on both surfaces, and with increasing relative pressure the peak density and adsorption thickness increase. It is worth noting that the peak density of the adsorption layer can even be smaller than that of the liquid hydrocarbon when the relative pressure is below 0.46. The characteristics of the first adsorption layer of gaseous hydrocarbons on the silicon-oxygen and aluminum-oxygen surfaces are almost identical. As the relative pressure increases from 0.27 to 0.97, the peak adsorption density at the surface increases from 100.79 to 1128.88 kg/m³, and the adsorption thickness increases from 0.32 to 0.72 nm. When the relative pressure reaches 0.97, a second adsorption layer forms; it has the same thickness as the first adsorption layer, but its density is much lower.

Table 2. Densities of the adsorbed layers and the bulk for the gaseous and liquid n-pentane systems, and the associated relative pressures.

The partial density curve (green line) of liquid n-pentane shows the characteristics of multilayer adsorption (Figure 6). The density of the first adsorption layer in the liquid is higher than that in the gas systems, and the adsorption peaks on the silicate and aluminol surfaces differ in density. The light-red region between the adsorption density curve of liquid n-pentane (green line) and that of gaseous n-pentane at a relative pressure of 0.9732 represents the excess of liquid adsorption over gas adsorption. Table 3 summarizes the adsorption capacities calculated directly from the partial densities and derived for the relative pressures of n-pentane. The adsorption capacity of gaseous n-pentane increases with increasing relative pressure, up to a maximum of 0.305 mg/m². In comparison, the liquid reaches 0.80 mg/m² on the aluminol surface and 0.82 mg/m² on the silicate surface; the adsorption capacity of the liquid is about 2.7 times that of the gas at 313 K. It can be seen that the gaseous hydrocarbon adsorption (P/P0 ≈ 1) obtained in the vapor-method experiment is hardly representative of the liquid-state adsorption of shale oil under geological conditions (P/P0 = 1), so the final results need to be corrected if shale oil adsorption is evaluated using the vapor-method experiment.

A model for calculating the gaseous hydrocarbon adsorption over P/P0 = 0.1-1 is presented, based on the results of the molecular dynamics calculations. Monolayer adsorption occurs at relative pressures below about 0.8; in this range the adsorption thickness and the adsorption density both vary linearly with the relative pressure, so a quadratic polynomial was used to fit the relationship between relative pressure and adsorption capacity. At relative pressures around 0.97 the adsorption is double-layered; as the relative pressure increases further, the adsorption thickness no longer changes and only the adsorption density rises linearly, so a linear equation was used to fit the relationship between relative pressure and adsorption capacity in this stage. According to the fitting results in Figure 7, the adsorption amount at any relative pressure can be calculated from the model.
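The piecewise fit just described (quadratic below about 0.8 P/P0, linear above) is straightforward to reproduce; the sketch below uses NumPy polynomial fits on hypothetical (P/P0, C_Ads) points standing in for the MD-derived values of Table 3 (only the 0.305 mg/m² value at 0.97 comes from the text).

```python
import numpy as np

# Hypothetical capacity points, mg/m^2, used only to illustrate the fitting procedure.
p_low  = np.array([0.10, 0.27, 0.46, 0.62, 0.80])
c_low  = np.array([0.02, 0.05, 0.09, 0.15, 0.22])
p_high = np.array([0.90, 0.97, 1.00])
c_high = np.array([0.27, 0.305, 0.32])

quad = np.polyfit(p_low, c_low, 2)    # monolayer stage: quadratic in relative pressure
lin  = np.polyfit(p_high, c_high, 1)  # double-layer stage: linear in relative pressure

def c_ads(p_rel):
    """Piecewise adsorption-capacity model: quadratic below ~0.8 P/P0, linear above (cf. Figure 7)."""
    return np.where(p_rel < 0.8, np.polyval(quad, p_rel), np.polyval(lin, p_rel))

print(c_ads(np.array([0.50, 0.97])))
```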
Comparison of Experimental and Simulated Gaseous Hydrocarbon Adsorption Characteristics
The results of the HVA are shown in Figure 8. The HVA adsorption consists of adsorbed and condensed hydrocarbons and therefore represents the total hydrocarbon amount, Q_t. Below 0.55 P/P0, the adsorption amount measured by HVA grows slowly, varying from 3.8 to 4.2 mg/g.
Between 0.55 and 0.81 P/P0, the adsorption amount from HVA increases exponentially from 4.2 to 11.7 mg/g, which may be due to the massive formation of capillary-condensed hydrocarbons, whereas the MD adsorption shows a relatively slow, linear increase from 0.02 to 1.61 mg/g below 0.81 P/P0. In this work, the HVA interpretation model established by Li [2] was used, and the results of the gaseous hydrocarbon MD simulations were taken as critical parameters to calculate the adsorbed/free oil proportion.

Figure 9 illustrates the occurrence characteristics of n-pentane in the different pores (A, B, C) of kaolinite. First, the hydrocarbon forms adsorption layers on the pore surfaces. The pores of zone A (pore diameter r < d_a) are entirely filled with adsorbed hydrocarbons. According to the Kelvin equation, n-pentane condenses in zone B (2h < r < d_k) and becomes liquid hydrocarbon on the surface of the adsorbed hydrocarbon; eventually, the pores in zone B are entirely filled with adsorbed and condensed hydrocarbons. In contrast, n-pentane does not condense in zone C (d_k < r < d_max) and remains in the gaseous state above the adsorbed hydrocarbon surface. The relevant length scales are

d_a = 2h, (8)

r_k = -2σV_L / (RT ln(P/P0)), (10)

where h is the adsorption thickness, n is the number of adsorption layers, σ is the surface tension (24.3 dyn/cm), V_L is the molar volume (m³/mol), T is the temperature (313 K), and R is the gas constant (8.314 Pa·m³/(mol·K)).

Figure 9. Occurrence of n-pentane in the various pores of kaolinite during hydrocarbon vapor adsorption. d_a, d_k, and d_max divide the pores of kaolinite into three zones (A, B, and C). d_a is twice the thickness of the adsorption layer; d_k determines the interval in which vapor condensation occurs and can be calculated from the Kelvin radius r_k; d_max is the maximum pore size of kaolinite.
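As a quick numerical illustration of Equation (10), the sketch below evaluates the Kelvin radius for n-pentane at 313 K over a few relative pressures. The surface tension is the 24.3 dyn/cm quoted above; the molar volume is an assumed literature-style value, not taken from the paper.

```python
import numpy as np

R = 8.314         # gas constant, J/(mol K)
T = 313.0         # temperature, K
sigma = 24.3e-3   # surface tension of n-pentane, N/m (= 24.3 dyn/cm)
V_L = 1.15e-4     # assumed molar volume of liquid n-pentane, m^3/mol (illustrative)

for p_rel in (0.55, 0.67, 0.81, 0.97):
    r_k = -2.0 * sigma * V_L / (R * T * np.log(p_rel))   # Kelvin radius, m
    print(f"P/P0 = {p_rel:.2f}  ->  r_k ~ {r_k * 1e9:.1f} nm")
```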
The model for calculating the proportion of adsorbed-state oil, r, was established from the amounts of adsorbed hydrocarbons, Q_a, and condensed hydrocarbons, Q_c. Q_a can be calculated from Equation (12); owing to the influence of pore morphology, n-pentane cannot achieve the complete theoretical adsorption, so a coefficient k is introduced to correct this inaccuracy:

Q_a = k·C_ave·S. (12)

Q_c is calculated by Equation (13). The condensed hydrocarbon volume V_con. is the difference between the effective volume βV_B and the adsorbed hydrocarbon volume V_ab. (Equation (14)). Owing to the non-homogeneity of the kaolinite surface, only part of the volume in zone B effectively contributes to hydrocarbon occurrence; the effective pore volume βV_B represents this volume. V_B is the B-zone pore volume and V(r) is the pore volume distribution curve, as shown in Equation (16). The adsorbed hydrocarbon volume V_ab. can be obtained from Equation (15), where S_B is the B-zone specific surface area and S(r) is the specific surface area distribution curve, as shown in Equation (17).

Q_c = V_con.·ρ_con. (13)

The optimal k and β of the model were calculated using the MATLAB Optimization Toolbox (k = 0.9, β = 0.75). The Q_t calculated by the model is close to the experimental results for relative pressures between 0.55 and 0.80, which justifies the application of the model (Figure 10). There is, however, an error between the model and the experimental results at a relative pressure of 0.24, which implies that the model may be inapplicable at lower relative pressures.
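The authors fitted the two model parameters in MATLAB; the SciPy sketch below shows the same kind of two-parameter least-squares fit. The `qt_model` function is only a placeholder standing in for Equations (12)-(17), and the data arrays are hypothetical (only the 4.2 and 11.7 mg/g values echo the text), so this is a sketch of the fitting step rather than the paper's actual model.

```python
import numpy as np
from scipy.optimize import least_squares

def qt_model(params, p_rel):
    """Placeholder for the Q_t(k, beta) model of Eqs. (12)-(17); not the paper's actual form."""
    k, beta = params
    q_a = k * 0.3 * 12.27 * p_rel        # stand-in for the adsorbed term k * C_ave * S
    q_c = beta * 14.0 * p_rel ** 3       # stand-in for the condensed term in zone B
    return q_a + q_c

p_rel_exp = np.array([0.55, 0.67, 0.81])   # hypothetical experimental relative pressures
qt_exp = np.array([4.2, 7.0, 11.7])        # hypothetical experimental Q_t values, mg/g

fit = least_squares(lambda p: qt_model(p, p_rel_exp) - qt_exp,
                    x0=[0.8, 0.8], bounds=([0.0, 0.0], [1.0, 1.0]))
k_opt, beta_opt = fit.x
print(f"fitted k ~ {k_opt:.2f}, beta ~ {beta_opt:.2f}")
```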
The adsorption ratios derived from the HVA experiments and the MD simulations below 0.81 P/P0 are shown in Figure 11; in addition, the adsorption ratio at a relative pressure close to 1 (0.97 P/P0) was calculated with the established HVA model. Below 0.67 P/P0, the n-pentane adsorption ratio increases from 0.13 to 0.24 because adsorption grows faster than condensation of hydrocarbons. Above 0.67 P/P0, the adsorption ratio decreases rapidly from 0.24 to 0.06, as condensed hydrocarbons are now generated much faster than adsorbed hydrocarbons. The adsorption ratio at 0.97 P/P0 is much lower than the previous understanding of adsorbed hydrocarbons in shale oil reservoirs. The differences between the adsorption of gaseous and liquid hydrocarbons have already been discussed: the adsorption capacity of liquid hydrocarbons is over 2.7 times that of gaseous hydrocarbons. Therefore, the liquid hydrocarbon adsorption amount, rather than the gaseous one, should be used in the HVA model. After this correction, the adsorption ratio is 0.22, which is consistent with the understanding of in situ shale oil development.

Using molecular modeling allowed us to gain detailed insight into the mechanism of gaseous adsorption of linear alkanes on kaolinite mineral surfaces. The values derived from the simulations, nevertheless, were not in agreement with the experimentally measured ones.
The system presented here did not incorporate the complex behaviors of the geological systems studied experimentally, in particular: the inclusion of edges, preferentially exposed surfaces, the pH sensitivity of the clay, long-range surface tension effects, impurities and contamination of the geological sample, and capillary effects in the experimental system. To create a comprehensive picture, these levels of complexity should be incorporated stepwise. Therefore, to further understand the adsorption of oil components on clays and to link theory and experiment, the limitations of both methods should be addressed. We envision joint experimental-computational work to identify and quantify the influence of the ratio of edges to surfaces in natural clays, the prevalence of exposed aluminol or siloxane surfaces, and the effects of pH and impurities.

Conclusions
(1) The hydrocarbon vapor adsorption experiment (HVA) cannot directly assess the proportion of adsorbed-state oil because of the differences between the adsorption of gaseous hydrocarbons at 0.97 P/P0 and that of liquid hydrocarbons: thickness (2.1 vs. 3.9 nm), adsorption density (1125 vs. 1444 kg/m³), and adsorption capacity per unit area (0.3 vs. 0.81 mg/m²).
(2) A new method is developed to evaluate the proportion of adsorbed-state oil in shale, validated by n-pentane adsorption on kaolinite. The shale oil adsorption ratio at 0.8 P/P0 and 313 K is 0.05, which is clearly lower than that in shale oil reservoirs. After correction, the adsorption ratio is 0.22, which is consistent with the understanding of in situ shale oil development.
(3) The adsorption characteristics of unsaturated n-pentane are summarized. Below 0.67 P/P0, the n-pentane adsorption ratio increases from 0.13 to 0.24 because adsorption grows faster than condensation of hydrocarbons. Above 0.67 P/P0, the adsorption ratio decreases rapidly from 0.24 to 0.06, as condensed hydrocarbons are generated much faster than adsorbed hydrocarbons.
Sample Complexity of Nonparametric Semi-Supervised Learning

We study the sample complexity of semi-supervised learning (SSL) and introduce new assumptions based on the mismatch between a mixture model learned from unlabeled data and the true mixture model induced by the (unknown) class conditional distributions. Under these assumptions, we establish an $\Omega(K\log K)$ labeled sample complexity bound without imposing parametric assumptions, where $K$ is the number of classes. Our results suggest that even in nonparametric settings it is possible to learn a near-optimal classifier using only a few labeled samples. Unlike previous theoretical work which focuses on binary classification, we consider general multiclass classification ($K>2$), which requires solving a difficult permutation learning problem. This permutation defines a classifier whose classification error is controlled by the Wasserstein distance between mixing measures, and we provide finite-sample results characterizing the behaviour of the excess risk of this classifier. Finally, we describe three algorithms for computing these estimators based on a connection to bipartite graph matching, and perform experiments to illustrate the superiority of the MLE over the majority vote estimator.

Introduction
With the rapid growth of modern datasets and the increasingly passive collection of data, labeled data is becoming more and more expensive to obtain, while unlabeled data remains cheap and plentiful in many applications. Leveraging unlabeled data to improve the predictions of a machine learning system is the problem of semi-supervised learning (SSL), which has been the source of many empirical successes (Blum and Mitchell, 1998; Kingma et al., 2014; Dai et al., 2017) and theoretical inquiries (Azizyan et al., 2013; Castelli and Cover, 1995, 1996; Cozman et al., 2003; Kääriäinen, 2005; Niyogi, 2013; Rigollet, 2007; Singh et al., 2009; Wasserman and Lafferty, 2008; Zhu et al., 2003). Commonly studied assumptions include identifiability of the class conditional distributions (Castelli and Cover, 1995, 1996), the cluster assumption (Rigollet, 2007; Singh et al., 2009), and the manifold assumption (Zhu et al., 2003; Wasserman and Lafferty, 2008; Niyogi, 2013). In this work, we propose a new type of assumption that loosely combines ideas from both the identifiability and cluster assumption perspectives. Importantly, we consider the general multiclass (K > 2) scenario, which introduces significant complications. In this setting, we study the sample complexity and rates of convergence for SSL and propose simple algorithms to implement the proposed estimators.

The basic question behind SSL is how to connect the marginal distribution over the unlabeled data, P(X), to the regression function P(Y | X). We consider multiclass classification, so that Y ∈ Y = {α_1, ..., α_K} for some K ≥ 2. In order to motivate our perspective, let F* denote the marginal density of the unlabeled samples and suppose that F* can be written as a mixture model

F*(x) = ∑_{b=1}^K λ_b f_b(x). (1)

Crucially, we do not assume that each f_b corresponds to some f*_k, where f*_k is the density of the kth class conditional P(X | Y = α_k); nor do we assume that λ_b corresponds to some λ*_k, where λ*_k = P(Y = α_k).

Figure 1. The decision boundaries induced by the mixture model (1) are depicted by dashed black lines and the true decision boundaries by solid red lines. (a) The unlabeled data is used to learn approximate decision boundaries through the mixture model Λ; even with these decision boundaries, it is not known which class each region corresponds to.
The labeled data is used to learn this assignment. (b) Previous work assumes that the true and approximate decision boundaries are the same. (c) In the current work, we assume that the true decision boundaries are unknown, but that it is possible to learn a mixture model that approximates the true boundaries using unlabeled data.

We do assume that the number of mixture components K is the same as the number of classes. Assuming the unlabeled data can be used to learn the mixture model (1), the question becomes: when is this mixture model useful for predicting Y? Figure 1 illustrates an idealized example. In an early series of papers, Castelli and Cover (1995, 1996) considered this question under the following assumptions: (a) for each b there is some k such that f_b = f*_k and λ_b = λ*_k, (b) F* is known, and (c) K = 2. Thus, they assumed that the true components and weights were known, but it was unknown which class each mixture component represents. In Figure 1, this corresponds to case (b), where the decision boundaries are identical. Given labeled data, the special case K = 2 reduces to a simple hypothesis testing problem which can be tackled using the Neyman-Pearson lemma. In this paper, we are interested in settings where each of these three assumptions fails:

(a) What if the class conditionals f*_k are unknown? Although we can always write F*(x) = ∑_k λ*_k f*_k(x), it is generally not the case that this mixture model is learnable from unlabeled data alone. In practice, what is learned will be different from this ideal case, but the hope is that it will still be useful. In this case, the argument in Castelli and Cover (1995) breaks down. Motivated by recent work on nonparametric mixture models (Aragam et al., 2018), we study the general case where the true mixture model is not known or even learnable from unlabeled data.

(b) What if F* is unknown? In a follow-up paper, Castelli and Cover (1996) studied the case where F* is unknown by assuming that K = 2 and the class conditional densities {f*_1, f*_2} are known up to a permutation. In this setting, the unlabeled data is used to ascertain the relative mixing proportions, but estimation error in the densities is not considered. We are interested in the general case in which a finite amount of unlabeled data is used to estimate both the mixture weights and the densities.

(c) What if K > 2? If K > 2, once again the argument in Castelli and Cover (1995) no longer applies, and we are faced with a challenging permutation learning problem. Permutation learning problems have gained notoriety recently owing to their applicability to a wide variety of problems, including statistical matching and seriation (Collier and Dalalyan, 2016; Fogel et al., 2013; Lim and Wright, 2014), graphical models (van de Geer and Bühlmann, 2013; Aragam et al., 2016), and regression (Pananjady et al., 2016; Flammarion et al., 2016), so these results may be of independent interest.

With these goals in mind, we study the MLE and majority voting (MV) rules for learning the unknown class assignment introduced in the next section. Our assumptions for MV are closely related to recent work based on the so-called cluster assumption (Seeger, 2000; Singh et al., 2009; Rigollet, 2007; Azizyan et al., 2013); see Section 4.2 for more details.

Contributions
A key aspect of our analysis is to establish conditions that connect the mixture model (1) to the true mixture model.
Under these conditions, we prove nonasymptotic rates of convergence for learning the class assignment (Figure 1a) from labeled data when K > 2, establish an Ω(K log K) sample complexity for learning this assignment, and prove that the resulting classifier converges to the Bayes classifier. We then propose simple algorithms based on a connection to bipartite graph matching and illustrate their performance on real and simulated data.

SSL as permutation learning
In this section, we formalize the ideas from the introduction using the language of mixing measures. We adopt this language for several reasons: 1) it makes it easy to refer to the parameters in the mixture model (1) by wrapping everything into a single, coherent statistical parameter Λ; 2) we can talk about convergence of these parameters via the Wasserstein metric; and 3) it simplifies discussions of identifiability in mixture models. Before going into technical details, we summarize the main idea as follows (see also Figure 1):
1. Use the unlabeled data to learn a K-component mixture model that approximates F*, represented by the mixing measure Λ defined below;
2. Use the labeled data to determine the correct assignment π of classes α_k to the decision regions D_b(Λ) defined by Λ;
3. Based on the pair (Λ, π), define a classifier g_{Λ,π} : X → Y by (3) below.

Mixing measures and mixture models
For concreteness, we will work on X = R^d; however, our results generalize naturally to any space X with a dominating measure and well-defined density functions. Let P = {f ∈ L¹(R^d) : ∫ f dx = 1} be the set of probability density functions on R^d, and let M_K(P) denote the space of probability measures over P with precisely K atoms. An element Λ ∈ M_K(P) is called a (finite) mixing measure and can be thought of as a convenient mathematical device for encoding the weights {λ_k} and the densities {f_k} into a single statistical parameter. By integrating against this measure, we obtain a new probability density, denoted m(Λ) = ∑_b λ_b f_b, where f_b is a particular enumeration of the densities in the support of Λ and λ_b is the probability of the bth density. Thus, (1) can be written as F* = m(Λ). By metrizing P via the total variation distance d_TV(f, g) = (1/2) ∫ |f − g| dx, the distance between two finite K-mixtures can be computed via the Wasserstein metric W_1.

Decision regions, assignments, and classifiers
Any mixing measure Λ defines K decision regions, D_b(Λ) = {x ∈ X : λ_b f_b(x) ≥ λ_j f_j(x) for all j}, b = 1, ..., K (see Figure 1). This allows us to assign an index from 1, ..., K to any x ∈ X, and hence defines a classifier ĝ_Λ : X → [K] := {1, ..., K}. This classifier does not solve the original classification problem, however, since its output is an uninformative index b ∈ [K] rather than a proper class label α_k ∈ Y. The key point is that even if we know Λ, we still must identify each label α_k with a decision region D_b(Λ), i.e., we must learn a permutation π : Y → [K]. With some abuse of notation, we will sometimes write π(k) instead of π(α_k) for any permutation π. Together, a pair (Λ, π) defines a classifier g_{Λ,π} : X → Y by

g_{Λ,π}(x) = π^{-1}(ĝ_Λ(x)). (3)

This mixing measure perspective helps to clarify the role of the unknown permutation in semi-supervised learning: the unlabeled data is enough to learn Λ (and hence the decision regions D_b(Λ)); however, labeled data are necessary to learn an assignment π between classes and decision regions.
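The three-step recipe above can be illustrated with a short sketch; note this is only an illustration, not the authors' code, and it uses a parametric Gaussian mixture (scikit-learn's GaussianMixture) as a stand-in for the nonparametric mixing measure. The label names and the placeholder assignment `pi_inv` are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Step 1: learn an approximate mixing measure Lambda-hat from unlabeled data.
rng = np.random.default_rng(0)
K = 3
true_means = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X_unlab = np.vstack([rng.normal(m, 1.0, size=(500, 2)) for m in true_means])
gmm = GaussianMixture(n_components=K, random_state=0).fit(X_unlab)

def g_hat(x):
    """Decision-region index b = argmax_b lambda_b f_b(x); gmm.predict does exactly this."""
    return gmm.predict(np.atleast_2d(x))

def classifier(x, pi_inv, labels):
    """Step 3: g_{Lambda,pi}(x) maps the region index back to a class label via pi^{-1}."""
    return [labels[pi_inv[b]] for b in g_hat(x)]

labels = ["alpha_1", "alpha_2", "alpha_3"]
pi_inv = {0: 0, 1: 1, 2: 2}   # placeholder; Step 2 learns this from labeled data (see below)
print(classifier([[4.1, -0.2]], pi_inv, labels))
```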
This formulates SSL as a coupled mixture modeling and permutation learning problem: given unlabeled and labeled data, learn a pair (Λ̂, π̂) which yields a classifier ĝ = g_{Λ̂,π̂}. The target is the Bayes classifier, which can also be written in the form (3): let Λ* denote the mixing measure that assigns probability λ*_k to the density f*_k, and note that F* = m(Λ*), which is the true mixture model defined previously. Let π* : Y → [K] be the permutation that assigns each class α_k to the correct decision region D_{π*(α_k)}(Λ*) (Figure 1). Then it is easy to check that g_{Λ*,π*} is the Bayes classifier.

Identifiability
Although the true mixing measure Λ* may not be identifiable from F*, some other mixture model may be. In other words, although it may not be possible to learn Λ* from unlabeled data, it may be possible to learn some other mixing measure Λ ≠ Λ* such that m(Λ) = F* = m(Λ*) (Figure 1c). This essentially amounts to a violation of the cluster assumption: high-density clusters are identifiable, but in practice the true class labels may not respect the cluster boundaries. Assumptions that guarantee a mixture model is identifiable are well studied (Teicher, 1961, 1963; Yakowitz and Spragins, 1968), including both parametric (Barndorff-Nielsen, 1965) and nonparametric (Aragam et al., 2018; Teicher, 1967; Hall and Zhou, 2003) assumptions. In particular, Aragam et al. (2018) have proved general conditions under which mixture models with arbitrary, overlapping nonparametric components are identifiable and estimable, including examples where each component f_k has the same mean. Since this problem is well studied, we focus hereafter on the problem of learning the permutation π*. Thus, in the sequel we assume that we are given an arbitrary mixing measure Λ which will be used to estimate π*. We do not assume that Λ = Λ* or even that these mixing measures are close; the idea is to elicit conditions on Λ that ensure consistent estimation of π*.

Two estimators
Assume we are given a mixing measure Λ along with labeled samples (X^(i), Y^(i)) ∈ X × Y. Two natural estimators of π* are the MLE and majority vote. Although both estimators depend on Λ, this dependence will be suppressed for brevity.

Majority vote
The majority vote estimator (MV) is given by a simple majority vote over each decision region. Formally, we define a permutation π_MV through its inverse assignment: π^{-1}_MV(b) is the majority class among the labeled samples falling in the decision region D_b(Λ). If there is no majority class in a given decision region, we consider this a failure of MV and treat it as undefined. Note that when K = 2, the MV classifier defined by (3) with π = π_MV is essentially the same as the three-step procedure described in Rigollet (2007), which focuses on bounding the excess risk under the cluster assumption. In contrast, we are interested in the consistency of the unknown permutation π* when K > 2, which is a more difficult problem.

Statistical results
Our main results establish rates of convergence for both the MLE and MV introduced in the previous section. We use the notation E*h(X, Y) to denote the expectation with respect to the true distribution (X, Y) ∼ P(X, Y). Without loss of generality, we assume that π*(α_k) = k; then π = π* if and only if π(α_k) = k for all k, which helps to simplify the notation in the sequel.
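As a concrete illustration of the majority vote assignment defined above, here is a minimal Python sketch (an illustration only, not the authors' code). The array `region_idx` is assumed to hold ĝ_Λ(X^(i)) for the labeled samples, e.g. the output of a mixture model's predict step, and `y` holds their class labels.

```python
import numpy as np
from collections import Counter

def majority_vote_assignment(region_idx, y, K):
    """pi_MV^{-1}(b): majority class among labeled samples landing in decision region b.
    Returns None if some region is empty or its vote is tied (treated as an MV failure)."""
    pi_inv = {}
    for b in range(K):
        votes = Counter(y[region_idx == b]).most_common()
        if not votes or (len(votes) > 1 and votes[0][1] == votes[1][1]):
            return None
        pi_inv[b] = votes[0][0]
    return pi_inv

region_idx = np.array([0, 0, 1, 1, 1, 2, 2, 0])
y = np.array(["alpha_1", "alpha_1", "alpha_2", "alpha_2", "alpha_3",
              "alpha_3", "alpha_3", "alpha_1"])
print(majority_vote_assignment(region_idx, y, K=3))
```

Note that the resulting assignment need not be a valid permutation (two regions can vote for the same class); the paper treats such cases as failures of MV.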
Maximum likelihood
Given Λ, the notation E*(π; Λ, X, Y) = E* log λ_{π(Y)} f_{π(Y)}(X) denotes the expectation of the misspecified log-likelihood with respect to the true distribution. Define the "gap" Δ_MLE(Λ) as the minimum, over permutations π ≠ π*, of E*(π*; Λ, X, Y) − E*(π; Λ, X, Y). The condition Δ_MLE(Λ) > 0 is crucial: it ensures that π* is learnable from Λ, and the size of Δ_MLE(Λ) quantifies how easy it is to learn π* given Λ. A bigger gap implies an easier problem, so it is of interest to understand this quantity better. The following proposition shows that when Λ = Λ*, this gap is always nonnegative.

Proposition 4.2. When Λ = Λ*, we have E*(π*; Λ*, X, Y) ≥ E*(π; Λ*, X, Y) for every permutation π; in particular, Δ_MLE(Λ*) ≥ 0.

In general, assuming Δ_MLE(Λ) > 0 is a weak assumption, but bounds on Δ_MLE(Λ) are difficult to obtain without making additional assumptions on the densities f_k and f*_k. A brief discussion can be found in Appendix 4.5; we leave it to future work to study this quantity more carefully.

Majority vote
For any Λ, define m_b := |{i : X^(i) ∈ D_b(Λ)}|, where 1(·) denotes the indicator function. Similar to the MLE, our results for MV depend crucially on a "gap" quantity, Δ_MV(Λ), which essentially measures how much more likely it is to sample the bth label than any other label in the bth decision region, averaged over the entire region. Conditions on Δ_MV(Λ) are thus closely related to the well-known cluster assumption (Seeger, 2000; Singh et al., 2009; Rigollet, 2007; Azizyan et al., 2013). When Λ ≠ Λ*, Δ_MV(Λ) has the following interpretation: it measures how well the decision regions defined by Λ match up with the decision regions defined by Λ*. When Λ defines decision regions that assign high probability to one class, Δ_MV(Λ) will be large. If Λ defines decision regions in which multiple classes have approximately the same probability, however, then Δ_MV(Λ) may be small. In this case, our experiments in Section 6 indicate that the MLE performs much better by handling overlapping decision regions more gracefully.

Sample complexity
Theorems 4.1 and 4.3 imply upper bounds on the minimum number of samples required to learn the permutation π*: for any δ ∈ (0, 1), as long as each class (resp. each decision region) receives on the order of log(K/δ)/Δ² labeled samples, where Δ is the corresponding gap, we recover π* with probability at least 1 − δ. To derive the sample complexity in terms of the total number of labeled samples n, it suffices to determine the minimum number of samples per class given n draws from a multinomial random variable. For the general case with unequal probabilities, Lemma B.2 provides a precise answer. For simplicity, we summarize here the special case where each class (resp. decision region) is equally probable for the MLE (resp. MV).

Coupon collector's problem and SSL
To better understand these bounds, consider arguably the simplest possible case: suppose that each density f*_k has disjoint support, λ*_k = 1/K, and that we know Λ*. Under these very strong assumptions, an alternative way to learn π* is to simply sample from P(X) until we have visited each decision region D*_k at least once. This is the classical coupon collector's problem (CCP), which is known to require Θ(K log K) samples (Newman, 1960; Flajolet et al., 1992). Thus, under these assumptions the expected number of samples required to learn π* is Θ(K log K). By comparison, our results indicate that even if the f*_k have overlapping supports and we do not know Λ*, as long as Δ_MLE = Ω(1) (resp. Δ_MV = Ω(1)), then Ω(K log K) samples suffice to learn π*.
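The Θ(K log K) benchmark can be checked quickly by simulation in the idealized disjoint-support setting just described; the sketch below (illustrative only) compares the empirical number of draws needed to visit every decision region with K·H_K ≈ K log K.

```python
import numpy as np

def coupon_draws(K, rng):
    """Number of uniform draws over K regions until every region has been visited once."""
    seen, n = set(), 0
    while len(seen) < K:
        seen.add(int(rng.integers(K)))
        n += 1
    return n

rng = np.random.default_rng(0)
for K in (4, 16, 64):
    draws = [coupon_draws(K, rng) for _ in range(2000)]
    harmonic = sum(1.0 / i for i in range(1, K + 1))
    print(f"K={K:3d}: empirical mean {np.mean(draws):7.1f}  vs  K*H_K = {K * harmonic:7.1f}")
```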
In other words, SSL is approximately as difficult as CCP in very general settings.

Classification error
So far our results have focused on the probability of recovering the unknown permutation π*. In this section, we bound the classification error of the classifier (3) in terms of the Wasserstein distance W_1(Λ, Λ*) between Λ and Λ*. We assume the following general set-up: we are given m unlabeled samples from which we estimate Λ by Λ̂_m. Based on this mixing measure, we learn a permutation π̂_{m,n} from n labeled samples, e.g. using either the MLE (4) or MV (5). Together, the pair (Λ̂_m, π̂_{m,n}) defines a classifier ĝ_{m,n} via (3). We are interested in bounding the probability of misclassification P(ĝ_{m,n}(X) ≠ Y) in terms of the Bayes error.

Theorem 4.7 (Classification error). Suppose W_1(Λ̂_m, Λ) = O(r_m) for some r_m → 0, where m is the number of unlabeled samples, and let g* = g_{Λ*,π*} denote the Bayes classifier. Then there is a constant C > 0, depending on K and Λ*, such that if π̂_{m,n} = π*, the excess misclassification probability over the Bayes error is controlled by W_1(Λ, Λ*) and r_m.

This theorem allows for the possibility that the mixture model learned from the unlabeled data (i.e. Λ̂_m) does not converge to the true mixing measure Λ*. In this case, there is an irreducible error quantified by the Wasserstein distance W_1(Λ, Λ*). When W_1(Λ, Λ*) = 0, however, we can improve this upper bound considerably and obtain nonasymptotic rates of convergence to the Bayes error rate:

Corollary 4.8. If W_1(Λ̂_m, Λ*) = O(r_m) for some r_m → 0, then the excess risk of ĝ_{m,n} converges to zero at the same rate as W_1(Λ̂_m, Λ*).

Clairvoyant SSL
Previous work (Castelli and Cover, 1995, 1996; Singh et al., 2009) has studied the so-called clairvoyant SSL case, in which it is assumed that we know (1) perfectly. This amounts to taking Λ̂_m = Λ in the previous results, or equivalently m = ∞. Under this assumption, we have perfect knowledge of the decision regions and only need to learn the label permutation π*. Corollary 4.8 then implies that, with high probability, we can learn a Bayes classifier for the problem using finitely many labeled samples.

Convergence rates
The convergence rate r_m used here is essentially the rate of convergence in estimating an identifiable mixture model, which is well studied for parametric mixture models (Heinrich and Kahn, 2015; Ho and Nguyen, 2016a,b). In particular, for so-called strongly identifiable parametric mixture models, the minimax rate of convergence attains the optimal root-m rate r_m = m^{-1/2} (Heinrich and Kahn, 2015). Asymptotic consistency theorems for nonparametric mixtures can be found in Aragam et al. (2018).

Comparison to supervised learning (SL). Previous work (Singh et al., 2009) has compared the sample complexity of SSL to that of SL under a cluster-type assumption. While a precise characterization of these trade-offs is not the main focus of this paper, we note in passing the following: if the minimax risk of SL for a particular problem is larger than W_1(Λ, Λ*), then Theorem 4.7 implies that SSL provably outperforms SL on finite samples.

Discussion of conditions
Here we consider a simple experiment in which the underlying distribution is a mixture of two Gaussians, F* = ½N(−µ, 1) + ½N(µ, 1), where µ is a small positive number indicating the separation between the two Gaussians. We would like to compare the number of samples needed to recover the true permutation π* with probability 1 − δ for both the MLE and MV.
Our experiments show that both estimators have roughly O(µ⁻²) sample complexity as µ → 0⁺, but MV needs about four times as many samples as the MLE. In fact, our theory can verify the sample complexity of MV: the gap Δ_MV is Φ(µ) − Φ(−µ) = O(µ), and the sample complexity depends on Δ_MV through log(K/δ)/Δ²_MV, which gives exactly O(µ⁻²). Here Φ denotes the cumulative distribution function of a standard normal random variable. Unfortunately, the intractable form of the dual functions β*_b makes similar analytical comparisons difficult.

Algorithms
One of the significant appeals of MV (5) is its simplicity: it is conceptually easy to understand and trivial to implement. The MLE (4), on the other hand, is more subtle and difficult to compute in practice. In this section, we discuss two algorithms for computing the MLE: 1) an exact algorithm based on finding a maximum-weight perfect matching in a bipartite graph via the Hungarian algorithm (Kuhn, 1955), and 2) greedy optimization. Define the edge weights w(k, b) = ∑_{i∈C_k} log(λ_b f_b(X^(i))), where C_k = {i : Y^(i) = α_k}, and consider the weighted complete bipartite graph G = (V_{K,K}, w) with these edge weights. Since a permutation π defines a perfect matching on G, the log-likelihood can be rewritten as ∑_k w(k, π(α_k)), the right side of which is the total weight of the matching π. Hence, the maximizer π_MLE can be found by finding a perfect matching of maximum weight in this graph, which can be done in O(K³) using the well-known Hungarian algorithm (Kuhn, 1955). We can also approximately solve the matching problem by a greedy method: assign the kth class, in turn, to the not-yet-assigned decision region b with the largest weight w(k, b). This greedy heuristic is not guaranteed to achieve an optimal matching; however, it is simple to implement and can be viewed as a "soft interpolation" of π_MLE and π_MV, as follows. If we define w_MV(k, k') = ∑_{i∈C_k} 1(X^(i) ∈ D_{k'}(Λ)), we can see that a training example (X^(i), Y^(i) = α_k) contributes 1 to w_MV(k, k') if k' = arg max_j λ_j f_j(X^(i)), and contributes 0 otherwise. By comparison, for the greedy heuristic, a training example (X^(i), Y^(i) = α_k) contributes log(λ_{k'} f_{k'}(X^(i))) to w(k, k'). Therefore, the greedy estimator can be seen as a "soft" version of MV that also greedily optimizes the MLE objective.

Experiments
In order to evaluate the performance of the proposed estimators in practice, we implemented each of the three methods described in Section 5 on simulated and real data. Our experiments considered three settings: (i) parametric mixtures of Gaussians, (ii) a nonparametric mixture model, and (iii) real data from MNIST. All three experiments followed the same pattern: a random mixture model Λ* was generated, and then N = 99 labeled samples were drawn from this mixture model. We generated Λ* under different separation conditions, from well separated to overlapping. Then, Λ was generated in two ways: (a) Λ = Λ*, corresponding to a setting where the true decision boundaries are known, and (b) Λ ≠ Λ*, obtained by perturbing the components and weights of Λ* by a parameter η > 0 (see below for details). Then Λ was used to estimate π* using each of the three algorithms described in the previous section for the first n = 3, 6, 9, ..., 99 labeled samples. This procedure was repeated T = 50 times (holding Λ* and Λ fixed) in order to estimate P(π̂ = π*).
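To make the matching formulation of the Algorithms section concrete, the sketch below computes the exact MLE assignment with SciPy's linear_sum_assignment (which solves the same assignment problem as the Hungarian algorithm) and also implements the greedy alternative. It assumes an (n, K) array of per-sample log(λ_b f_b(X_i)) values computed from the learned mixture; the demo data at the bottom is random and purely illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mle_permutation(log_lambda_f, y_idx, K):
    """Exact MLE class-to-region assignment via maximum-weight bipartite matching.

    log_lambda_f: (n, K) array with entries log(lambda_b * f_b(X_i)).
    y_idx:        (n,)   array of class indices k for the labeled samples.
    """
    w = np.zeros((K, K))
    for k in range(K):
        w[k] = log_lambda_f[y_idx == k].sum(axis=0)   # w(k, b) = sum_{i in C_k} log(lambda_b f_b(X_i))
    rows, cols = linear_sum_assignment(w, maximize=True)  # optimal matching in O(K^3)
    return dict(zip(rows.tolist(), cols.tolist()))

def greedy_permutation(w):
    """Greedy heuristic: assign each class, in turn, to the best still-unassigned region."""
    pi, taken = {}, set()
    for k in range(w.shape[0]):
        b = max((b for b in range(w.shape[1]) if b not in taken), key=lambda b: w[k, b])
        pi[k] = b
        taken.add(b)
    return pi

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_log_lf = rng.normal(size=(30, 4))    # placeholder log(lambda_b f_b(X_i)) values
    fake_y = rng.integers(4, size=30)
    print(mle_permutation(fake_log_lf, fake_y, K=4))
```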
Mixture of Gaussians

The first experiment uses synthetic data where F = Σ_k λ*_k f*_k is a mixture of Gaussians, with the λ*_k drawn from a uniform distribution U(0, 1) (and normalized afterwards) and each f*_k a Gaussian density. The f*_k were arranged on a square grid with randomly generated positive-definite covariance matrices. To explicitly control how well-separated the Gaussians are, we shrink the expectations of the Gaussians towards the origin using a parameter η ∈ {1, 0.75, 0.5}. We design the means of the Gaussians so that they lie on a grid centered at the origin; the mean of the kth component is thus given by ηµ*_k, where µ*_k is the mean of the kth density. When η = 1, the components of the mixture are well-separated, in the sense that the {f*_k}_{k=1}^K have no or very little overlap within one standard deviation. The smaller η is, the more the components overlap. For each choice of dimension d ∈ {2, 10}, K is varied over {2, 4, 9, 16}.

Perturbed mixture of Gaussians

In this setting, we test the case where Λ* is unknown and the algorithms only have access to a perturbed version Λ. As in the setup above, we sample n labeled data points using Λ*. However, instead of feeding the algorithms the true mixture Λ*, we input a perturbed mixture Λ: each dimension of the means of the Gaussians is shifted by a random number drawn from N(0, 0.1), and the variance of each Gaussian is scaled by either 0.5 or 2 (chosen at random).

Mixture of Gaussian mixtures and its perturbation

This experiment is similar to the first experiment with a mixture of Gaussians, except that each f*_k is itself a Gaussian mixture. We again controlled the degree of separation by shrinking the expectation of each Gaussian towards the origin with η ∈ {1, 0.5}.

MNIST and corrupted MNIST

We trained 10 kernel density estimators (one for each digit) for {f_k}_{k=1}^{10}. These densities are used to define the true mixture Λ*. We then tested how the three algorithms behave under corruption of the labeled samples from the test set: with probability 0.1, the label of a sampled data point is changed to an incorrect label.

The results are depicted in Figure 3. As expected, the MLE performs by far the best, obtaining near-perfect recovery of π* with fewer than n = 20 labeled samples on synthetic data, and fewer than n = 40 on MNIST. Unsurprisingly, the most difficult case was K = 16, in which only the MLE was able to recover the true permutation more than 50% of the time. By increasing n, the MLE is eventually able to learn even this most difficult case, in accordance with our theory. Furthermore, the MLE is much more robust to misspecification (Λ ≠ Λ*) and component overlap than the other estimators. This highlights the advantage of leveraging density information in the MLE, which is ignored by the MV estimator (i.e. MV only uses decision regions).

A.3 Proof of Theorem 4.3

Proof. Write χ_bj as an average of i.i.d. random variables U^(i)_j ∈ {0, 1}; it suffices to control these averages. We are interested in the probability P(χ_bb > χ_bj for all j ≠ b). Define A_bj(t) = {|χ_bj − E*χ_bj| < t}. Then for any t < ∆/2, on the event ∩_{j=1}^K A_bj(t) we have χ_bb > χ_bj for all j ≠ b. Making the arbitrary choice t = ∆/3 and using Hoeffding's inequality to bound P(A_bj(∆/3)^c) for each j then yields the stated bound, as claimed.

B Additional lemmas

B.1 Lemma B.1

For ease of notation in the following lemma, assume without loss of generality that Y ∈ [K].

Lemma B.1. Let g_1, . . .
, g_K be functions and let ψ_k(s) = log E* exp(s g_k(X, Y)) be the log moment generating function of g_k(X, Y). Then the following deviation bound holds uniformly over all permutations π.

Proof. Define C_k := {i : Y_i = k} and n_k := |C_k|. For each π, Z_b(π) is just a sum over one of K possible subsets of [n], i.e. of sample indices. To see this, define Z_{b,k} := (1/n_k) Σ_{i∈C_k} (g_b(X_i) − E g_b(X_i)) and note that Z_b(π) = Z_{b,π^{-1}(b)} for each b. Chernoff's inequality therefore implies P(Z_{b,k} ≥ t) ≤ exp(−n_k ψ*_b(t)) for each b and k. Combining these bounds over b, and noting that Σ_b (n_{π^{-1}(b)}/n) t = t since Σ_b n_b/n = 1 and π is a bijection, the desired result follows.

B.2 Lemma B.2

The following lemma gives a precise bound on the minimum number of samples n required to ensure min_k n_k ≥ m from a generic multinomial sample with high probability (a small numerical check of this bound is sketched at the end of this appendix):

Lemma B.2. Let Y_1, . . . , Y_n be i.i.d. multinomial random variables with P(Y_i = k) = p_k, and define n_k = Σ_{i=1}^n 1(Y_i = k). Then for any m > 0,

P(min_k n_k < m) ≤ Σ_{k=1}^K exp(−2(np_k − m)²/(np_k)).

Proof. By standard tail bounds on n_k ∼ Bin(n, p_k), we have P(n_k ≤ m) ≤ exp(−2(np_k − m)²/(np_k)). Thus

P(min_k n_k < m) = P(∪_{k=1}^K {n_k < m}) ≤ Σ_{k=1}^K P(n_k < m) ≤ Σ_{k=1}^K exp(−2(np_k − m)²/(np_k)),

as claimed.

B.3 Lemma B.3

For any density f ∈ L¹, let δ_f denote the point mass concentrated at f, so that δ_f(A) = 1(f ∈ A) for any Borel subset A ⊂ P.

Lemma B.3. Let Λ = Σ_{k=1}^K λ_k δ_{f_k} and Λ' = Σ_{k=1}^K λ'_k δ_{f'_k}. Then there is a constant C = C(Λ', K) such that inequalities (13) and (14) hold.

Proof. The first inequality (13) follows from Theorem 4 in Gibbs and Su (2002), and the second inequality (14) is standard.
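As a quick sanity check of Lemma B.2 (our own illustration; the class probabilities, the threshold m and the Monte Carlo setup are arbitrary choices), the union bound can be compared against simulated multinomial counts:

```python
# Compare the Lemma B.2 union bound with the empirical probability that some
# class receives fewer than m of the n labeled samples.
import numpy as np

def lemma_b2_bound(n, p, m):
    p = np.asarray(p, dtype=float)
    return min(1.0, float(np.sum(np.exp(-2.0 * (n * p - m) ** 2 / (n * p)))))

def empirical(n, p, m, trials=20000, seed=0):
    counts = np.random.default_rng(seed).multinomial(n, p, size=trials)
    return float(np.mean(counts.min(axis=1) < m))

p = [0.5, 0.3, 0.2]   # K = 3 classes; chosen so that n * p_k > m for all k below
for n in (50, 100, 200):
    print(n, empirical(n, p, m=5), lemma_b2_bound(n, p, m=5))
```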
Use of quantitative real-time PCR to determine the local inflammatory response in the intestinal mucosa and muscularis externa of horses undergoing small intestinal resection

Background: Studies in rodents and humans have demonstrated that intestinal manipulation or surgical trauma initiates an inflammatory response in the intestine which results in leucocyte recruitment to the muscularis externa, causing smooth muscle dysfunction.

Objectives: To examine the intestinal inflammatory response in horses undergoing colic surgery by measuring relative differential gene expression in intestinal tissues harvested from surgical colic cases and control horses.

Study design: Prospective case-control study.

Methods: Mucosa and muscularis externa were harvested from horses undergoing abdominal surgery that necessitated intestinal resection. By measuring differential gene expression of a set of target genes (IDO1, IL6, IL1β, TNF, CCL2, NOS2 and PTGS2), we demonstrated the presence of intestinal inflammation in horses undergoing colic surgery. This gene list was selected on the basis of previous reports of their upregulation in the muscularis externa of murine POI models14 and in humans undergoing laparotomy.7 In rodent intestinal manipulation-induced POI models, activation of MM results in cytokine and chemokine release, followed by infiltration of leucocytes into the muscularis externa. These infiltrating leucocytes inhibit smooth muscle function via the secretion of leucocytic products, such as nitric oxide (NO) and prostaglandins. This macrophage activation, which occurs within hours of surgery, has also been demonstrated in human patients.7 As well as playing a key role in the initiation of inflammation, macrophages also have a protective function and are pivotal to the tissue repair process. At the level of the intestine, this protective function was recently demonstrated in a rodent model of infection-induced inflammation, in which muscularis externa macrophages protected the enteric neurons against post-infection neuronal death. IL1β, CCL2 and TNF were upregulated in both the mucosa and muscularis externa in colic cases, whereas PTGS2 was upregulated in the mucosa only. These findings demonstrate the presence of active inflammation within intestinal tissue visually assessed as devitalised, a finding consistent with data derived from the muscularis externa of humans undergoing laparotomy. In addition, we also identified an inflammatory cytokine and mediator response within the mucosa, a finding which contrasts with the results derived from a study of people undergoing laparotomy, which reported upregulation of IL6 and PTGS2.

| INTRODUCTION

Studies on the margins of surgically resected equine intestine from horses undergoing colic surgery identified a generalised stress response in the smooth muscle and myenteric plexus, characterised by an increase in apoptotic smooth muscle cells and apoptotic neurons and glial cells.1 Ischaemia and manipulation of the equine jejunum also induce a post-operative neutrophilic infiltration in all tissue layers of the jejunum and eosinophilic infiltration of the jejunal submucosa.2,3 In an equine model in which the jejunum was distended and then decompressed, there was also an increase in the number of neutrophils in the mucosa and the serosa.4 This inflammatory infiltrate is likely a downstream consequence of resident intestinal inflammatory cell activation in the muscularis externa that has been described in rodents and humans.
[5][6][7] Intestinal manipulation triggers the activation of muscularis externa macrophages (MM) by pathogen-associated molecular patterns such as lipopolysaccharide.8 This results in cytokine and chemokine release, followed by the infiltration of leucocytes, predominantly neutrophils, monocytes and mast cells, into the muscularis externa. These infiltrating leucocytes secrete products, such as nitric oxide and prostaglandins, which can impair smooth muscle function.9,10 Genes associated with inflammation of the muscularis externa induced by manipulation or surgery in rodent models and human studies include the pro-inflammatory cytokines interleukin (IL) 6, IL1β and tumour necrosis factor (TNF), the inducible enzyme nitric oxide synthase 2 (NOS2), the chemokine C-C motif chemokine ligand 2 (CCL2) and the immunomodulatory enzyme prostaglandin-endoperoxide synthase 2 (PTGS2).7,11 This inflammatory response within the muscularis externa is strongly implicated as a significant causative factor in the pathogenesis of post-operative ileus (POI).5 Despite the high prevalence of POI in horses following colic surgery, there are no equine studies to date evaluating the inflammatory response genes known to be associated with the development of POI in rodent and human studies. We hypothesised that increased inflammatory gene expression of IL6, IL1β, CCL2, TNF, PTGS2 and indoleamine 2,3-dioxygenase (IDO1) would be present in the resected margins of horses undergoing small intestinal resection. Furthermore, we hypothesised that inflammatory gene expression would be greater in horses that subsequently develop post-operative reflux (POR), a characteristic feature of POI, following abdominal surgery compared with those that do not.

| Study design

This was a prospective case-control two-centre study conducted between October 2014 and February 2017. Control horses were presented for elective euthanasia for a variety of reasons unrelated to diseases and disorders of the intestinal tract; these included poor temperament, chronic orthopaedic conditions, recurrent laminitis and suspected dental disease. Additionally, financial constraints of the owners contributed towards the decision for elective euthanasia in some of these cases. Control horses were excluded if any gross gastrointestinal lesions were identified at post-mortem examination. Horses were euthanased with secobarbital sodium 400 mg/mL and cinchocaine hydrochloride 25 mg/mL (Somulose; Dechra Veterinary Products), and samples were obtained approximately 30 minutes after euthanasia. The colic case group comprised horses over 1 year of age undergoing exploratory celiotomy for colic unresponsive to medical management that required resection of a segment of the small intestine. Case inclusion was also dependent on the feasibility of appropriate sample collection and processing by the surgeon on duty. One experienced surgeon from each centre performed the exploratory celiotomies, intestinal resections and sample collection. All colic cases received the same pre- and post-operative care (Data S1). Signalment was recorded for both control horses and colic cases. For each colic case, the site of lesion and resection, duration of colic prior to surgery, presence of pre-operative reflux, resection length, short-term survival (defined as surviving to hospital discharge) and the presence of POR were recorded.
Pre-operative reflux was defined as the presence of >2 L of reflux obtained on passing a nasogastric tube prior to surgery, and POR was defined as the presence of >2 L of reflux on more than two intubations in the post-operative period. Both owner consent and ethical approval were obtained for the collection of tissues; ethical approval was provided by the Royal (Dick) School of Veterinary Studies Veterinary Ethical Review Committee.

| Sample collection

Control samples were obtained from an antimesenteric location in the region of the mid jejunum. For the colic cases, the resected intestine was deemed by the surgeon to have grossly healthy, viable resection margins. Samples were obtained from both proximal and distal resection margins of either jejunum or ileum. All proximal resection margins were obtained from the jejunum, and distal resection margins were obtained from either the jejunum (n = 9) or the ileum (n = 3).

| RNA extraction and cDNA synthesis

Tissue sections were thawed, then homogenised in the presence of β-mercaptoethanol and transferred to a gDNA Eliminator column (Qiagen) before extraction of total RNA using an RNeasy Plus Mini kit (Qiagen). RNA was quantified and purity was assessed using a NanoDrop spectrophotometer (ThermoFisher Scientific), and integrity was measured using the TapeStation system (Agilent Technologies). An RNA integrity number (RINe) greater than 7 was considered sufficient for downstream analysis,12 and cDNA was synthesised from 1 µg of total RNA using the SuperScript III First-Strand Synthesis System (Invitrogen). It was not possible to extract RNA of adequate quality from all the colic case and control samples. A total of 9 muscularis externa samples (5 proximal and 4 distal) and 16 mucosal samples (8 proximal and 8 distal) from colic cases, and 4 muscularis externa and 6 mucosal samples from control horses, were used for RT-qPCR analysis. For colic cases, samples from the proximal and distal resection margins were analysed. After preliminary analysis comparing relative gene expression between proximal and distal resection margins and control tissues, the proximal mucosa (n = 8) and distal mucosa (n = 8) samples were combined, as were the proximal and distal muscularis externa sections (Figure 1). Proximal and distal margins obtained from the same case were not both used: where a proximal and a distal sample existed for a colic case, the proximal section was selected, as all proximal resection margins were from the jejunum and therefore better matched the control samples (Table S1). For RT-qPCR, cDNA was amplified with Power SYBR Green PCR Master Mix using the 7500 Fast Real-Time PCR System (ThermoFisher Scientific). Primer efficiency was validated with a standard curve of five serial dilution points (three for NOS2), with efficiency ranging between 92.68% and 100.74%. Amplification reactions were performed in triplicate, and the melting curve was analysed to check the annealing temperature and to ensure that there was no primer-dimer formation. Efficiency and reactions were analysed using 7500 Software v2.3 (ThermoFisher Scientific). Details of primer sequences and design are included in Table S2. The expression of target gene mRNA relative to the housekeeping gene glyceraldehyde 3-phosphate dehydrogenase (GAPDH) was calculated for each sample using the 2^−ΔΔCT method, where CT denotes the threshold cycle.13 Relative gene expression was assessed in the mucosa and muscularis externa of samples from colic cases and control horses.
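For illustration only, the following sketch (not the authors' code) shows one common way of computing relative expression from CT values with a delta-delta-CT calculation, normalising each target gene to GAPDH and expressing colic samples relative to the mean of the control group; the column names, pandas layout and the choice of the control mean as calibrator are our own assumptions.

```python
# Relative expression from qPCR threshold-cycle (CT) values, 2^-ΔΔCT style.
import numpy as np
import pandas as pd

def relative_expression(df: pd.DataFrame, target: str) -> pd.Series:
    """df columns: 'sample', 'group' ('colic'/'control'), 'gene', 'CT'
    (triplicate wells already averaged per sample and gene)."""
    wide = df.pivot(index=["sample", "group"], columns="gene", values="CT")
    delta_ct = wide[target] - wide["GAPDH"]          # ΔCT: target normalised to GAPDH
    calibrator = delta_ct.xs("control", level="group").mean()
    return 2.0 ** (-(delta_ct - calibrator))         # fold change relative to control mean

# Toy CT values, purely illustrative:
toy = pd.DataFrame({
    "sample": ["c1", "c1", "h1", "h1"],
    "group":  ["control", "control", "colic", "colic"],
    "gene":   ["GAPDH", "TNF", "GAPDH", "TNF"],
    "CT":     [18.0, 27.0, 18.5, 24.0],
})
print(relative_expression(toy, "TNF"))
```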
FIGURE 1 Flow chart describing the process of sample selection following quality control and preliminary analysis. *For each case, either a proximal or a distal sample was selected.

| Data analysis

Where appropriate, data are reported as median and interquartile range. All statistical analyses were performed using GraphPad Prism (GraphPad Software; GSL Biotech LLC). A non-parametric Mann-Whitney U test was applied to determine significant differences in relative gene expression between: proximal and distal margins of the mucosa and muscularis externa; the mucosa samples obtained from the colic cases and the control horses; the muscularis externa samples obtained from the colic cases and the control horses; horses that did and did not survive to discharge; and horses with and without pre- and post-operative reflux. Linear regression analysis was used to analyse the relationship of age, duration of colic and resection length with relative gene expression. Normality was assessed using D'Agostino's K2 test. Significance was assumed at P < .05.

| RESULTS

The colic cases comprised 12 horses (median age 19.5 years).

| Relative expression of target genes in the mucosa and muscularis externa of colic cases and controls

No significant differences in relative gene expression of target genes were observed between proximal and distal resection margins for either the mucosa or the muscularis externa prior to the samples being combined (Figures S1 and S2). NOS2 was removed from the set of target genes, as no mRNA was detectable in any of the mucosa and muscularis externa samples. Relative expression of the target genes was evaluated in the mucosa and muscularis externa of colic cases and compared with the mucosa and muscularis externa of control horses (Figure 2).

| Analysis of relative expression in colic cases in relation to age, duration of colic, resection length, short-term survival and the presence of pre- and post-operative reflux

There were no significant associations in either the mucosa or the muscularis externa between relative inflammatory gene expression and the following factors: presence of pre-operative reflux; length of resected intestine; survival to discharge; and duration of colic. In the mucosa, relative expression of all genes except TNF reduced with age, and the decrease in the relative expression of PTGS2 and CCL2 with increasing age was significant (P < .05) (Figure S3). In the muscularis externa, all genes, with the exception of IDO1, showed a trend of increased relative expression with increasing age; however, this apparent association was not statistically significant (Figure S4). Mucosal samples from colic cases that developed POR demonstrated a significantly greater relative expression of TNF than mucosal samples from colic cases that did not develop POR (Figure 4). There was no statistically significant difference in relative gene expression in the muscularis externa of colic cases that developed POR compared with colic cases that did not develop POR (Figure 5); however, those that developed POR showed a greater median relative expression of IL1β, IL6, PTGS2, TNF and CCL2.

| DISCUSSION

The principal aim of this study was to examine the intestinal inflammatory response in both the mucosa and muscularis externa of horses undergoing colic surgery.
FIGURE 2 Relative gene expression of IL1B, IL6, PTGS2, TNF, CCL2 and IDO1 in the mucosa and muscularis externa of horses undergoing colic surgery. Scatter plots showing mRNA expression of target genes relative to glyceraldehyde 3-phosphate dehydrogenase (GAPDH) in the distal mucosa (n = 12) and combined proximal and distal muscularis externa (n = 9) samples compared with controls (mucosa n = 6, muscularis externa n = 4). Significance of relative gene expression levels between control and surgical samples was assessed using a Mann-Whitney U test. ns = not significant, *P < .05, **P < .01, ***P < .001, ****P < .0001.

In previous work, equine macrophages stimulated with lipopolysaccharide did not upregulate any genes involved in arginine metabolism.22 These data, and the data from this study, suggest that NO production from macrophages does not contribute to the pathophysiology of equine POI. The cytokines IL1β and TNFα reduce intestinal motility via a reduction in smooth muscle function, either by direct action on the smooth muscle cell or indirect suppression of neurotransmitter release.23,24 IL6 also suppresses motility, although the exact mechanism is unclear.

FIGURE 5 Relative gene expression of IL1B, IL6, PTGS2, TNF, CCL2 and IDO1 in the muscularis externa of horses with and without post-operative reflux. Scatter plots showing mRNA expression of target genes relative to glyceraldehyde 3-phosphate dehydrogenase (GAPDH) in the muscularis externa of colic cases (n = 7) and horses that developed post-operative reflux (n = 3). Significance of relative gene expression levels between horses with and without post-operative reflux was assessed using a Mann-Whitney U test.

Several factors, such as age and the presence of pre-operative nasogastric reflux, have been associated with an increased incidence of POI in horses.32 In rodent models, a greater inflammatory response is associated with more "severe" POI.33 Analysis was performed to determine whether any association with pre-, intra- or post-operative factors was present. In the intestinal mucosa, there was a trend towards a reduction in the relative expression of all target genes, with the exception of TNF, with age; the reduction in the relative expression of PTGS2 and CCL2 was statistically significant. In contrast, the relative expression of all target genes in the muscularis externa increased with age, although 6 of the 12 colic cases were diagnosed with pedunculated lipomas, the incidence of which is significantly greater in older horses.34 Duration of colic was associated with a non-significant decrease in relative gene expression across all genes in both the mucosa and muscularis externa. The outlying data points were predominantly from four cases, and there was no evidence of any differences in disease process or signalment to account for this. The study of larger group sizes may have revealed specific factors associated with a greater inflammatory response, thus accounting for the response observed in the outlying individuals. It is also possible that host genetics play a role in the severity of inflammation and, as seen in humans, some individuals may be genetically predisposed to developing a more severe inflammatory response.35 The main limitation of the current study was the small group sizes; the inclusion of larger groups would be required to definitively explore the validity of certain association trends identified here. Based on our preliminary data, power calculations revealed that a minimum group size of 25-60 horses (depending on the gene of interest) would be required to detect a significant difference in relative gene expression between horses that do and do not develop POR.
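As a rough illustration of how such a power calculation can be approached (this sketch is ours; the assumed three-fold group difference and log-normal spread are hypothetical placeholders, not estimates from this study), candidate group sizes can be screened by simulating the Mann-Whitney U comparison:

```python
# Simulation-based power estimate for a Mann-Whitney U comparison of relative
# gene expression between horses with and without post-operative reflux (POR).
import numpy as np
from scipy.stats import mannwhitneyu

def power(n_per_group, fold_change=3.0, sigma=1.0, alpha=0.05,
          sims=2000, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(sims):
        no_por = rng.lognormal(mean=0.0, sigma=sigma, size=n_per_group)
        por = rng.lognormal(mean=np.log(fold_change), sigma=sigma, size=n_per_group)
        _, p = mannwhitneyu(por, no_por, alternative="two-sided")
        hits += (p < alpha)
    return hits / sims

for n in (10, 25, 40, 60):
    print(n, round(power(n), 2))
```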
Although the use of combined proximal and distal resected margin samples could also be considered a limitation, preliminary analysis failed to reveal any statistically significant differences in relative gene expression between these locations. In the colic case group, there was also variation in the length of resected intestine, the underlying disease process, the duration of disease prior to sample collection and the age of the horses. Although these sources of variation will inevitably affect the interpretation of the results, they are difficult to control within the constraints of a field study design. Lastly, some of the distal resection margins were obtained from the ileum (3 out of 12 cases), which may have different resident inflammatory cell numbers compared with the control jejunal samples. While we have previously shown a difference between the distal jejunum and the ileum with respect to macrophage numbers in the submucosa, this difference was not evident in other layers, including the muscularis externa.36 That said, quantification of other resident immune cells in the equine intestine has not been performed and remains a significant knowledge gap. A further limitation of the study relates to the inherently restricted inferences which can be derived solely from gene expression data, which may not fully mirror the protein expression profile at the tissue level. Expansion of this work to include proteomic profiling and pathway analyses would be warranted to more fully elucidate the factors involved in the initiation and propagation of the inflammatory response. While correlation of relative gene expression with several perioperative factors (e.g. development of POR) would have added further clinical relevance to the study, this was considered an adjunct to the primary objective, namely to extend findings from rodent and human studies to samples obtained from horses undergoing abdominal surgery. In this respect, our data confirmed the presence of an inflammatory response, characterised by differential inflammatory gene expression, in the intestine of horses undergoing colic surgery. As such, these findings justify continued research in this area and provide a methodological platform for it. The inclusion of a larger cohort of horses would greatly facilitate any future efforts to assess whether associations exist between the magnitude of the inflammatory response and clinically relevant factors such as age, resection length, duration of colic, short- and long-term survival and the development of POR. Further work to determine the roles of different cell types, such as neutrophils and macrophages, in intestinal inflammation is also warranted.

| CONCLUSIONS

These data demonstrate an inflammatory response within the intestine of horses undergoing colic surgery during which a small intestinal resection was performed. The upregulated genes encode proteins capable of inhibiting smooth muscle contractility and disrupting normal intestinal motility, ultimately resulting in functional POI. Additionally, the absence of NOS2 upregulation highlights inter-species variation in this inflammatory pathway, emphasising the importance of considering the target species when developing potential therapeutic targets. This study provides a foundation for future work to improve our understanding of the inflammatory response in horses undergoing colic surgery.
ACKNOWLEDGEMENTS

We thank Craig Pennycook and Chandra Logie at R(D)SVS Pathology for their assistance with tissue collection, colleagues at the R(D)SVS Equine Hospital and Bell Equine Veterinary Clinic for their help in obtaining samples, and Darren Shaw for his input on statistical methods. Additionally, we thank the owners of all the horses who gave permission for the collection of samples.

CONFLICT OF INTERESTS

No competing interests have been declared.

INFORMED CONSENT

Owners gave consent for their animals' inclusion in the study.

DATA ACCESSIBILITY STATEMENT

The data that support the findings of this study are available from the corresponding author upon reasonable request.