Dataset schema: uuid (int64, 0-6k) · title (string, 8-285 chars) · abstract (string, 22-4.43k chars)
900
Telemedicine Orthopedic Consultations Duration and Timing in Outpatient Clinical Practice During the COVID-19 Pandemic
Introduction: Orthopedic associations advocated telemedicine during the COVID-19 pandemic to prevent disease transmission without hindering the provision of services to orthopedic patients. The study aimed to evaluate the timing, length, and organizational issues of outpatient orthopedic teleconsultations under the circumstances of the COVID-19 pandemic, based on consecutive orthopedic teleconsultations held during the first lockdown. Methods: Orthopedic telemedical consultations (OTCs) were provided from March 23, 2020, to June 1, 2020, and analyzed retrospectively based on mobile smartphone billing and electronic health records. Teleconsultations followed the legal regulations for telemedicine services in Poland. Results: One thousand seventy-one patients (514 women and 557 men) with a mean age of 41.7 years were teleconsulted. OTCs lasted 13.36 min on average (standard deviation 8.63). Consulted patients suffered from orthopedic disorders (65.3%), musculoskeletal injuries (26.3%), and other diseases (8.4%). Most OTCs (74.22%) were delayed relative to the planned schedule, with a median delay of 12 min. Only 7.3% of teleconsultations were held precisely on time. Conclusions: Televisit length may not depend on gender, older age, or a larger number of diagnoses. Services such as e-prescriptions, e-referrals, e-orders for orthotics, and e-sick-leaves influence OTC length. Any extension of a patient's OTC may create a "snowball effect" of further delay for each subsequent OTC. Orthopedic teleconsultation requires new understanding and skills from both patients and specialist physicians. Future research should address the practical aspects of orthopedic teleconsultations, such as legal, organizational, and technological issues and their implementation.
901
Influence of green finance on carbon emission intensity: empirical evidence from China based on spatial metrology
Based on panel data for 30 provinces in China from 2011 to 2020, this paper empirically examines the direct effect, spatial spillover effect, and total effect of green finance on carbon emission intensity using a spatial econometric model that considers both spatial and temporal patterns. The results show the following: (1) The carbon emission intensity of each province in China shows a noticeable spatial spillover effect and a positive spatial correlation distribution of "high-high" and "low-low" agglomeration. (2) The development of green finance in China is spatially interrelated but uneven, presenting a gradient strengthening trend from west to east. (3) Green finance development curbs carbon emission intensity; this effect has gradually increased over time and differs by region. Specifically, green finance increases the carbon emission intensity of adjacent areas in the short term but significantly reduces the local province's carbon emission intensity to a larger extent. Finally, the paper puts forward policy recommendations to promote the coordinated development of green finance and a low-carbon economy.
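For context, the positive spatial correlation claimed in (1) is conventionally quantified with Moran's I; a minimal Python sketch on synthetic data (the contiguity weights and values below are invented placeholders, not the paper's provincial data):

import numpy as np

rng = np.random.default_rng(0)
n = 30                                              # e.g. 30 provinces
W = (rng.random((n, n)) < 0.15).astype(float)       # hypothetical contiguity matrix
np.fill_diagonal(W, 0)
W /= np.maximum(W.sum(axis=1, keepdims=True), 1)    # row-standardize weights
z = rng.normal(size=n)
z -= z.mean()                                       # demeaned carbon intensity
I = (n / W.sum()) * (z @ W @ z) / (z @ z)           # Moran's I statistic
print(I)                                            # I > 0 suggests "high-high"/"low-low" clustering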
902
Pitfalls of the MTT assay: Direct and off-target effects of inhibitors can result in over/underestimation of cell viability
The MTT assay (and, to a lesser degree, MTS, XTT or WST) is a widely used approach for measuring cell viability/drug cytotoxicity. MTT reduction occurs throughout a cell and can be significantly affected by a number of factors, including metabolic and energy perturbations, changes in the activity of oxidoreductases, endo-/exocytosis and intracellular trafficking. Over/underestimation of cell viability by the MTT assay may be due both to adaptive metabolic and mitochondrial reprogramming of cells subjected to drug treatment-mediated stress and to inhibitor off-target effects. Previously, imatinib, rottlerin, ursolic acid, verapamil, resveratrol, genistein nanoparticles and some polypeptides were shown to interfere with the MTT reduction rate, resulting in inconsistent results between the MTT assay and alternative assays. Here, to test the under/overestimation of viability by the MTT assay, we compared results derived from the MTT assay with the trypan blue exclusion assay after treatment of glioblastoma U251, T98G and C6 cells with three widely used inhibitors with known direct and side effects on energy and metabolic homeostasis: temozolomide (TMZ), a DNA-methylating agent; temsirolimus (TEM), an inhibitor of mTOR kinase; and U0126, an inhibitor of MEK1/2 kinases. Inhibitors were applied for short periods, as in IC50-evaluating studies, or for long periods, as in studies focusing on drug resistance acquisition. We showed that the over/underestimation of cell viability by the MTT assay and its significance depend on the cell line, the time point of viability measurement and other experimental parameters. Furthermore, we provide a comprehensive survey of factors that should be accounted for in the MTT assay. To avoid result misinterpretation, supplementation of the tetrazolium salt-based assays with other, non-metabolic assays is recommended.
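As a hedged illustration of the recommended cross-check (all numbers below are invented, not the paper's data), one can compare a viability estimate from MTT absorbance with one from trypan blue counts:

import numpy as np

# Hypothetical optical densities (570 nm): blank, untreated control, treated wells
od_blank, od_control = 0.05, 1.20
od_treated = np.array([0.90, 0.85, 0.95])

# MTT viability as % of control, background-subtracted
mtt_viability = (od_treated - od_blank) / (od_control - od_blank) * 100

# Trypan blue exclusion: viable fraction = unstained cells / total cells
unstained = np.array([180, 175, 190])
total = np.array([200, 205, 210])
tb_viability = unstained / total * 100

print(mtt_viability.mean(), tb_viability.mean())
# A large gap between the two estimates flags possible MTT interference.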
903
Application of ultrasound and microencapsulation on Limosilactobacillus reuteri DSM 17938 as a metabolic attenuation strategy for tomato juice probiotication
Counteracting probiotic-induced physicochemical and sensory changes is a challenge in the development of probiotic beverages. The aim of the study was to apply ultrasound and microencapsulation for the attenuation of Limosilactobacillus reuteri DSM 17938 to avoid changes in a probiotic tomato juice. Preliminarily, six ultrasound treatments were applied. Probiotic survival in an acidic environment (pH 2.5) and in bile salts (1.5 g/l) after ultrasound treatment was also studied. The probiotic was inoculated into tomato juice in four forms: free cells (PRO-TJ), sonicated free cells (US-TJ), untreated microencapsulated cells (PRO-MC-TJ) and sonicated microencapsulated cells (US-MC-TJ). Probiotic viability and pH were monitored during 28 days of storage at 4 and 20 °C. Sensory analysis was performed for the PRO-TJ and US-MC-TJ samples (4 °C). Ultrasound (57 W for 6 min) did not affect cell survival and transitorily modulated the probiotic's acidifying capacity; it reduced probiotic survival in the acidic environment but increased probiotic survival in the bile salts solution. Ultrasound was effective in maintaining the pH of the tomato juice, but only at 4 °C. Microencapsulation with sodium alginate, instead, led to a more stable probiotic juice, particularly at 20 °C. Finally, probiotication slightly modified some sensory attributes of the juice. This study shows the potential of ultrasound and microencapsulation as attenuation strategies and highlights the need for process optimization to increase ultrasound efficacy.
904
Can You Hear Me Now? Sensitive Comparisons of Human and Machine Perception
The rise of machine-learning systems that process sensory input has brought with it a rise in comparisons between human and machine perception. But such comparisons face a challenge: Whereas machine perception of some stimulus can often be probed through direct and explicit measures, much of human perceptual knowledge is latent, incomplete, or unavailable for explicit report. Here, we explore how this asymmetry can cause such comparisons to misestimate the overlap in human and machine perception. As a case study, we consider human perception of adversarial speech - synthetic audio commands that are recognized as valid messages by automated speech-recognition systems but that human listeners reportedly hear as meaningless noise. In five experiments, we adapt task designs from the human psychophysics literature to show that even when subjects cannot freely transcribe such speech commands (the previous benchmark for human understanding), they can sometimes demonstrate other forms of understanding, including discriminating adversarial speech from closely matched nonspeech (Experiments 1 and 2), finishing common phrases begun in adversarial speech (Experiments 3 and 4), and solving simple math problems posed in adversarial speech (Experiment 5) - even for stimuli previously described as unintelligible to human listeners. We recommend the adoption of such "sensitive tests" when comparing human and machine perception, and we discuss the broader consequences of such approaches for assessing the overlap between systems.
905
Comic Strip Narratives in Time Geography
On the basis of a shared emphasis on time as well as space, this paper argues for introducing elements of comic art into cartography, specifically the mapped comic, with an illustrated strip literally plotted and placed in a 3D time geographic virtual world. This approach is situated within current initiatives regarding the relationship between cartography and art, given that comics are a type of sequential art. Two examples demonstrate that the approach succeeds as a way of representing the geometry of a story without compromising emotional content. Comic conventions neatly package narrative geography for visual deployment. An example demonstrating the expressiveness of comic art principles when applied to maps (maps as comics) is discussed.
906
Bridging different perspectives for biocultural conservation: art-based participatory research on native maize conservation in two indigenous farming communities in Oaxaca, Mexico
Native maize conservation rests on the custody of traditional and indigenous small-scale farmers, but their traditional practices and way of life are challenged by multiple forces associated with globalization, international trade and neoliberal agricultural policies. Through participatory art-based research with two indigenous communities in Oaxaca, Mexico, we identified the main challenges and strategies for native maize conservation, as perceived by these farming communities. We implemented a stepwise method to elicit local strategies for biocultural conservation pertinent across gender and generations. We conclude that understanding the heterogeneity of perspectives is important for identifying root causes of agrobiodiversity decline and strategies for biocultural native maize conservation.
907
Computing Systems for Autonomous Driving: State of the Art and Challenges
The recent proliferation of computing technologies (e.g., sensors, computer vision, machine learning, and hardware acceleration) and the broad deployment of communication mechanisms (e.g., dedicated short-range communication, cellular vehicle-to-everything, 5G) have pushed the horizon of autonomous driving, which automates the decision and control of vehicles by leveraging perception results based on multiple sensors. The key to the success of these autonomous systems is making reliable decisions in real time. However, accidents and fatalities caused by early-deployed autonomous vehicles still arise from time to time. The real traffic environment is too complicated for current autonomous driving computing systems to understand and handle. In this article, we present state-of-the-art computing systems for autonomous driving, including seven performance metrics and nine key technologies, followed by 12 challenges to realizing autonomous driving. We hope this article will attract attention from both the computing and automotive communities and inspire more research in this direction.
908
Retracted publication: A comparative study of different acceleration sensors in measuring energy consumption of human martial arts (Retracted article. See vol. 2022, 2022)
At present, there are many acceleration sensors on the market for measuring human martial arts. However, because some acceleration sensors measure inaccurately, martial arts practitioners are troubled by the lack of a good acceleration sensor dedicated to measuring the energy consumption of human martial arts. Developing such a sensor is urgent and of great significance for the comparative study of energy consumption measurement in human martial arts in our country. In this study, 160 students aged 11-14 years were selected and divided into normal body mass and abnormal body mass groups. Of the 96 male adolescents, 32 were obese and formed the male abnormal body mass group, while 64 had normal body weight and formed the male normal body mass group; the 64 female adolescents all had normal body weight and formed the female normal body mass group. Using a mobile phone's built-in accelerometer and a dedicated three-dimensional accelerometer, the subjects performed a 3-8 km/h martial arts exercise load test (each speed maintained continuously for 5 min). The two acceleration sensors were jointly assessed for the accuracy of predicting energy consumption in the human martial arts experiments. The average energy consumption of the martial arts exercises was compared between the two acceleration sensors at movement frequencies of 60 times/min, 90 times/min and 120 times/min. The test results show that the data points from the mobile phone accelerometer are scattered, with varying distances between them, whereas the data points from the three-dimensional acceleration sensor are more concentrated and present a clear trend. Using the three-dimensional acceleration sensor to measure martial arts can fully reflect the energy consumption of human activities and achieves an energy consumption measurement accuracy of more than 94%.
909
Association between changes in frailty during hospitalization in older adults and 3-month mortality after discharge
Frailty is a dynamic status that can worsen or improve. However, the changes in frailty status that occur during hospitalization and their significance have not been comprehensively investigated. In this study, we explored the association between such changes and mortality 3 months after discharge in older adults hospitalized for acute care. In total, 257 participants (mean age 84.95 ± 5.88 years, 41.6% male) completed comprehensive geriatric assessments, including the Clinical Frailty Scale (CFS), at admission and discharge. The mean CFS score was 5.14 ± 1.35 at admission. CFS scores increased, indicating deteriorating frailty, in 29.2% of the participants (75/257) during hospitalization. Multiple logistic regression analysis demonstrated a positive association between an increased CFS score during hospitalization and mortality (odds ratio, 2.987), independent of potential confounding factors. This deterioration in frailty during hospitalization may be a modifiable risk factor for poor prognosis in older adults who need acute care hospitalization.
910
Desorption of positive and negative ions from activated field emitters at atmospheric pressure
Field desorption (FD) is traditionally an ionization technique in mass spectrometry (MS) that is performed in high vacuum. So far only two studies have explored FD at atmospheric or even superatmospheric pressure. This work pursues ion desorption from 13-µm activated tungsten emitters at atmospheric pressure. The emitters are positioned in front of the atmospheric pressure interface of a Fourier transform-ion cyclotron resonance (FT-ICR) mass spectrometer, and the entrance electrode of the interface is set to 3-5 kV with respect to the emitter. Under these conditions positive, and for the first time, negative ion desorption is achieved. In either polarity, atmospheric pressure field desorption (APFD) is robust and spectra are reproducible. Both singly charged positive and negative ions formed by these processes are characterized by accurate mass-based formula assignments and in part by tandem mass spectrometry. The compounds analyzed include the ionic liquids trihexyl(tetradecyl)phosphonium tris(pentafluoroethyl)trifluorophosphate and 1-butyl-1-methylpyrrolidinium bis(trifluoromethylsulfonyl)imide, the acidic compounds perfluorononanoic acid and polyethylene glycol diacid, as well as two amino-terminated polypropylene glycols. Some surface mobility on the emitter is a prerequisite for ion desorption to occur. While ionic liquids inherently provide this mobility, the desorption of ions from solid analytes requires the assistance of a liquid matrix, e.g. glycerol.
911
3-D Active Contour Segmentation Based on Sparse Linear Combination of Training Shapes (SCoTS)
SCoTS captures a sparse representation of shapes in an input image through a linear span of previously delineated shapes in a training repository. The model updates the shape prior over level set iterations and captures variability in shapes by a sparse combination of the training data. The level set evolution is therefore driven by a data term as well as a term capturing valid prior shapes. During evolution, the shape prior influence is adjusted based on shape reconstruction, with the assigned weight determined from the degree of sparsity of the representation. For the problem of lung nodule segmentation in X-ray CT, SCoTS offers a unified framework capable of segmenting nodules of all types. Experimental validations are demonstrated on 542 3-D lung nodule images from the LIDC-IDRI database. Despite its generality, SCoTS is competitive with domain-specific state-of-the-art methods for lung nodule segmentation.
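A minimal sketch of the core idea, representing a shape as a sparse linear combination of training shapes; scikit-learn's Lasso stands in here for the paper's sparse solver, and the toy vectors below are random placeholders, not real delineations:

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_pixels, n_train = 400, 50
D = rng.normal(size=(n_pixels, n_train))        # columns: vectorized training shapes
x = D[:, [3, 17]] @ np.array([0.7, 0.3])        # test shape: mix of two training shapes
x += 0.01 * rng.normal(size=n_pixels)           # small perturbation

lasso = Lasso(alpha=0.01, fit_intercept=False)
w = lasso.fit(D, x).coef_                       # sparse coefficients over the repository
prior_shape = D @ w                             # reconstructed shape prior
sparsity = np.mean(w != 0)                      # degree of sparsity -> prior weight
print(np.flatnonzero(w), sparsity)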
912
State-of-the-Art High Power Density and High Efficiency DC-DC Chopper Circuits for HEV and FCEV Applications
Recent environmental issues have accelerated the use of more efficient and energy-saving technologies in many areas of our daily life. One of the major energy consumers is the transportation sector, especially the automobile field. DC/DC chopper circuits for use in hybrid electric vehicles (HEV), fuel cell electric vehicles (FCEV) and similar applications are discussed in this paper from the viewpoint of power density and efficiency. A typical power range of such converters is on the order of kilowatts up to over 100 kW, with a short-term overload requirement of often more than 200%. Considering the state of the art, the switching frequency of these converters ranges from 50 kHz with IGBTs to 200 kHz with power MOSFETs, the power density peaks at about 25 kW/l, and the highest efficiency is close to 98%, depending on the load conditions. As this brief introduction indicates, the design of such converters presents multiple challenges from both the power density and the efficiency point of view, and these are discussed further in the paper.
913
Heavy metals accumulation in wheat (Triticum aestivum L.) roots and shoots grown in calcareous soils treated with non-spiked and spiked sewage sludge
With growing urbanization and agriculture, the quantity of sewage sludge produced increases every year. For the purpose of risk management, it is crucial to determine to what extent heavy metals are transported into different plant parts when sewage sludge is used. A greenhouse experiment was carried out to investigate the accumulation of heavy metals in wheat (Triticum aestivum L.) grown in 30 calcareous soils. The soils in this study were subjected to three different treatments: soils treated with sewage sludge at a rate of 2.5%, soils treated with sewage sludge at a rate of 2.5% and enriched with heavy metals, and control soils that received neither sewage sludge nor heavy metals. Wheat grown in sewage sludge-treated soils had the highest mean dry matter, 2.11 and 1.25 times greater than that of wheat grown in control and spiked-sewage sludge-treated soils, respectively. In all treatments, wheat roots had greater heavy metal levels than wheat shoots. Among all the heavy metals examined, Pb and Cu had the highest bioconcentration factors for roots and shoots (BCFRoots and BCFShoots) in control and sewage sludge-treated soils, followed by Cd in spiked-sewage sludge-treated soils, while Co and Ni had the lowest BCFRoots and BCFShoots across all treatments. In spiked-sewage sludge-treated soils, root restriction of heavy metal translocation was more pronounced for Co, Cu, and Ni than for Pb and Zn, indicating that wheat can be grown safely in a variety of calcareous soils amended with sewage sludge with high contents of Cd, Co, Cu, and Ni. Reducing the transfer of Pb and Zn from soils to wheat in soils treated with sewage sludge yet having high concentrations of these heavy metals should be considered a top-priority strategy for preserving wheat products. Since a wide range of calcareous soils was used in this study, and because calcareous soils make up the majority of soils in the Middle East, the findings are relevant for all of the countries in this region.
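For reference, the bioconcentration and translocation factors used in such studies follow the standard definitions BCF = C_plant / C_soil and TF = C_shoots / C_roots; a small computation with invented numbers:

# BCF_roots = C_roots / C_soil ; BCF_shoots = C_shoots / C_soil
# TF (translocation factor) = C_shoots / C_roots
c_soil, c_roots, c_shoots = 40.0, 18.0, 4.5   # e.g. mg Pb per kg dry weight (hypothetical)
bcf_roots = c_roots / c_soil                  # 0.45
bcf_shoots = c_shoots / c_soil                # 0.1125
tf = c_shoots / c_roots                       # 0.25; TF << 1 indicates root restriction
print(bcf_roots, bcf_shoots, tf)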
914
Hepatic transcriptomics and metabolomics indicated pathways associated with immune stress of broilers induced by lipopolysaccharide
Broilers with immune stress show a decline in growth performance, causing severe economic losses. However, the molecular mechanisms underlying immune stress still need to be elucidated. One hundred and twenty broiler chicks were randomly assigned to 2 groups with 6 replicates per group and 10 birds per replicate. The model broilers were intraperitoneally injected with 250 µg/kg LPS at 12, 14, 33, and 35 d of age to induce immunological stress. The control group was injected with an equivalent amount of sterile saline. Blood samples were collected by wing vein puncture at 35 d of age, and the serum was obtained for detection of CORT and ACTH. At the end of the experiment, the liver tissues were excised and collected for omics analysis. The results showed that the LPS challenge significantly inhibited growth performance, increased the relative weights of the liver and spleen, decreased the relative weight of the bursa, and enhanced the concentrations of serum ACTH and CORT compared with the control. A total of 129 DEGs and 109 differential metabolites were identified between the model and control groups. Transcriptomic profiles revealed that immune stress enhanced the expression of genes related to defense function while decreasing the expression of genes related to the oxidation-reduction process. Metabolomics further suggested that immune stress changed metabolites related to amino acid metabolism and glycerophospholipid metabolism. In addition, integrated analysis suggested that the imbalance of valine, leucine and isoleucine metabolism, glycerophospholipid metabolism, and the mTOR signaling pathway played an important role in the immune stress of broiler chicks.
915
Label Augmentation to Improve Generalization of Deep Learning Semantic Segmentation of Laparoscopic Images
Training deep neural networks to solve semantic segmentation is a challenging problem with small-size labeled datasets, leading to overfitting. This is especially problematic in medical images and, in particular, in laparoscopic surgery images. In this context, ground-truth segmentation labels are available only for a small set of images from few patients. Besides, inter-patient variability is very high in practice. Models trained for a specific setup and set of patients usually perform poorly when deployed in a new environment. This work proposes a new training strategy that improves the generalization accuracy of current state-of-the-art semantic segmentation methods applied to laparoscopic images. Our approach is based on training a discriminator network, which learns to detect segmentation errors, producing a dense segmentation error map. Unlike in adversarial networks, we train the discriminator offline by synthetically altering ground-truth segmentation labels with simple morphological and geometric operations. We then use the discriminator to train a segmentation neural network by minimizing the discriminator-predicted error jointly with a standard segmentation loss. This strategy results in segmentation models that are significantly more accurate when tested on unseen images than those relying only on data augmentation. This technique is well suited to boosting the performance of any state-of-the-art segmentation network and can be combined with other data augmentation strategies. This paper evaluates and validates our proposal by training and testing common state-of-the-art segmentation models on publicly available semantic segmentation datasets specialized in laparoscopic and endoscopic surgery. The results show that our methods are effective, obtaining a significant improvement in segmentation accuracy, especially on challenging small-size datasets.
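A minimal sketch of the offline label-corruption step described above, using simple morphological and geometric operations from scipy (the operations and magnitudes actually used in the paper may differ):

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)

def corrupt_label(mask, max_iter=5):
    """Synthetically alter a binary ground-truth mask; return the altered mask and error map."""
    it = int(rng.integers(1, max_iter + 1))
    if rng.random() < 0.5:
        altered = ndimage.binary_dilation(mask, iterations=it)
    else:
        altered = ndimage.binary_erosion(mask, iterations=it)
    shift = rng.integers(-3, 4, size=2)                  # small geometric shift
    altered = np.roll(altered, shift, axis=(0, 1))
    error_map = (altered != mask).astype(np.float32)     # dense target for the discriminator
    return altered.astype(np.float32), error_map

mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 25:45] = True
altered, err = corrupt_label(mask)
print(err.sum())                                         # number of synthetically wrong pixels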
916
Hierarchical convolutional features for visual tracking via two combined color spaces with SVM classifier
Like the majority of state-of-the-art object trackers, those based on hierarchical convolutional features (HCF) cannot recover the tracking process from drifting caused by several challenges, especially heavy occlusion, scale variation, and illumination variation. In this paper, we present a new, effective method that aims to treat these challenges robustly through two principal tasks. First, we infer the target location using multichannel correlation maps resulting from the combination of five learned correlation filters with convolutional features. To handle illumination variation and obtain richer features, we exploit an HSV energy condition to control the use of two color spaces, RGB and HSV. Second, we use histogram of gradient features to learn another correlation filter in order to estimate the scale variation. Furthermore, we exploit an online-trained SVM classifier for target re-detection in failure cases. Extensive experiments on a commonly used tracking benchmark dataset show that our tracker significantly improves HCF and outperforms state-of-the-art methods.
917
Reversible Data Hiding in Block Compressed Sensing Images
Block compressed sensing (BCS) is widely used in image sampling and is an efficient, effective technique. Through the use of BCS, an image can be simultaneously compressed and encrypted. In this paper, a novel reversible data hiding (RDH) method is proposed to embed additional data into BCS images. The proposed method is the first RDH method of its kind for BCS images. Results demonstrate that our approach performs better compared with other state-of-the-art RDH methods on encrypted images.
918
Scientific Information System for Silk Road Education Study
In this paper, we present a scientific information system for the Silk Road education study. The proposed information system includes martial arts, dance, and play of seven countries (Korea, Japan, China, Mongolia, Kazakhstan, Uzbekistan, and Iran) of the Silk Road. The purpose of the information system is to promote convergence education for university students by providing a fundamental framework of the information system and traditional cultures. The basic concept of the information system can help university students to develop information and communications technology skills and to develop their own applications by collaborating with each other as a team. In addition, while developing the information system of martial arts, dance, and play of seven countries of the Silk Road, university students will understand the connection between traditional cultures and modern cultures of the Silk Road.
919
Cancer risk assessment and source apportionment of the gas- and particulate-phase of the polycyclic aromatic hydrocarbons in a metropolitan region in Brazil
A risk assessment and a source apportionment of the particulate- and gas-phase PAHs were conducted in a high-vehicular-traffic and industrialized region in southeastern Brazil. Higher concentrations of PAHs were found during summer, likely driven by the contribution of vapor-phase PAHs caused by fire outbreaks during this period. Isomer ratio diagnostics and Principal Component Analysis (PCA) identified four potential sources in the region, which the Positive Matrix Factorization (PMF) model confirmed and apportioned as gasoline-related (31.8%), diesel-related (25.1%), biomass burning (23.4%), and mixed sources (19.6%). The overall cancer risk had a tolerable value, with ∑CR = 4.6 × 10-5, with ingestion as the major route of exposure (64% of ∑CR), followed by dermal contact (33% of ∑CR) and inhalation (3%). Mixed sources contributed up to 45% of the overall cancer risk (∑CR), followed by gasoline-related (up to 35%), diesel-related (up to 15%), and biomass burning (up to 10%). The risk assessment for individual PAH species allowed the identification of higher CR associated with BaP, DBA, BbF, BaA, and BkF, species associated with gasoline-related and industrial sources. Higher risks were associated with exposure to PM2.5-bound PAHs, mainly via ingestion and dermal contact, highlighting the need for measures to mitigate and control PM2.5 in the region.
920
Reimagining Ocean Stewardship: Arts-Based Methods to 'Hear' and 'See' Indigenous and Local Knowledge in Ocean Management
Current ocean management approaches are often characterised by economic or environmental objectives, paying limited consideration to social and cultural dimensions, as well as Indigenous and local knowledge. These approaches tend to inhibit ocean stewardship, often marginalising coastal communities or limiting people's access to spiritual, traditional and recreational uses of the ocean and coast. A pilot of arts-based participatory research methods to co-create knowledge with co-researchers in Algoa Bay, South Africa, finds that these methods can be useful in highlighting cultural connections to the ocean, and in remembering and imagining, or reimagining, ways in which people relate to and care for the ocean and coast. For example, using photography and in situ storytelling often allows people to convey memories and histories of more accessible coastlines, or to envisage a future with more inclusive and participatory ocean management. The study finds that there is a strong sense of exclusion from, and lack of access to, the coastal and ocean areas of Algoa Bay on which Indigenous and local communities have depended for spiritual, cultural and recreational purposes for several generations. Co-creation of knowledge regarding connections, values and priorities of the coast and ocean with Indigenous and local communities should therefore be planned for before the implementation of integrated ocean management approaches and intentionally designed as part of adaptive management processes. Emphasising these cultural connections, and better recognising them in ocean management, has the potential to increase people's awareness of the ocean, which could translate into an increased sense of care and stewardship towards the ocean and coast as people feel more connected to their contextual seascapes. This could in turn contribute to a more sustainable sociocultural approach to ocean management, which is necessary for equitable and sustainable future ocean social-ecological wellbeing.
921
Novel antennas for ultra-wideband communications
Two novel antennas for UWB communications, both inspired by bowtie and triangular patch structures, are presented and tested. This paper describes their study and optimization to cover the new WPAN standard and to facilitate their integration on communication devices. The work has been centered around three specific goals: wide bandwidth, minimum size, and low cost.
922
KPAC: A Kernel-Based Parametric Active Contour Method for Fast Image Segmentation
Object boundary detection has been a topic of keen interest to the signal processing and pattern recognition community. A popular approach for object boundary detection is parametric active contours. Existing parametric active contour approaches often suffer from slow convergence rates and difficulty dealing with complex high-curvature boundaries, and are prone to being trapped in local optima in the presence of noise and background clutter. To address these problems, this paper proposes a novel kernel-based parametric active contour (KPAC) approach, which replaces the conventional internal energy term used in existing approaches with an adaptive kernel derived from the underlying image characteristics. Experimental results demonstrate that the KPAC approach achieves state-of-the-art performance when compared to two other state-of-the-art parametric active contour approaches.
923
Public long-term care insurance scheme and informal care use among community-dwelling older adults in China
Public long-term care insurance (LTCI) schemes have been implemented in a few countries. Although the hypotheses of crowding-out, crowding-in and specialisation can facilitate our understanding of the relationship between LTCI and informal care use, existing studies may suffer from reverse causality. Employing a quasi-experimental design, this study examined the policy effect of LTCI on informal care use among community-dwelling older adults in China. Based on data from three waves of the Chinese Longitudinal Healthy Longevity Survey, a staggered difference-in-differences (DID) with propensity score matching (PSM) approach was used to analyse the impact of LTCI on the probability and hours of informal care use. The results showed that, for disabled older adults, LTCI reduced the probability of informal care use by 43.3% and the weekly hours by 82.4%. LTCI also exhibited a spillover effect among nondisabled older adults, reducing the probability and weekly hours of informal care by 5.2% and 12.2%, respectively. We therefore argue that policymakers can consider rolling out the scheme across the entire country. Meanwhile, measures are needed to avoid a sharp decrease in informal care provision.
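A minimal sketch of the PSM-DID idea on synthetic data; the covariates, matching rule and outcome below are placeholders rather than the authors' CLHLS specification:

import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 3))                            # placeholder covariates (e.g. age, ADL, income)
treated = rng.binomial(1, 1/(1 + np.exp(-x[:, 0])))    # LTCI pilot-city indicator
post = rng.binomial(1, 0.5, n)                         # survey wave after LTCI rollout
hours = 10 + 2*x[:, 1] - 3.0*treated*post + rng.normal(size=n)   # true DID effect = -3

# 1) Propensity scores and 1:1 nearest-neighbour matching on the score
ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]
t_idx, c_idx = np.flatnonzero(treated == 1), np.flatnonzero(treated == 0)
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]
keep = np.concatenate([t_idx, matches])

# 2) DID on the matched sample: coefficient on treated*post is the policy effect
X = sm.add_constant(np.column_stack([treated, post, treated*post])[keep])
print(sm.OLS(hours[keep], X).fit().params[-1])         # ≈ -3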
924
Metasurface-Assisted Wireless Communication with Physical Level Information Encryption
Since the discovery of wireless telegraphy in 1897, wireless communication via electromagnetic (EM) signals has become a standard solution to the increasing demand for information transfer in modern society. With the rapid growth of EM wave manipulation techniques, the programmable metasurface (PM) has emerged as a new type of wireless transmitter that directly modulates digital information without complex microwave components, thus providing an alternative that simplifies the conventional wireless communication system. However, the challenges of improving information security and spectrum utilization remain. Here, a dual-band metasurface-assisted wireless communication scheme is introduced to provide additional physical channels for the enhancement of information security. The information is divided into several parts and transmitted through different physical channels to accomplish information encryption, greatly reducing the possibility of eavesdropping. As a proof of concept, a dual-channel, high-security wireless communication system based on a 1-bit PM is established to simultaneously transmit two different parts of a picture to two receivers. Experiments show that the transmitted picture can be successfully retrieved only if the received signals of the different receivers are synthesized as predefined. The proposed scheme provides a new route for employing PMs in information encryption and spectrum utilization in wireless communication.
925
CircUBXN7 suppresses cell proliferation and facilitates cell apoptosis in lipopolysaccharide-induced cell injury by sponging miR-622 and regulating the IL6ST/JAK1/STAT3 axis
Acute respiratory distress syndrome (ARDS) is a common and serious respiratory illness with substantial morbidity and mortality. Circular RNAs have been demonstrated to participate in various disease processes. However, the biological function and mechanism of most circular RNAs have not been elucidated in ARDS. In this study, we found that circUBXN7 was significantly increased in lipopolysaccharide (LPS)-induced A549 and Beas-2B cell injury. Inhibition of circUBXN7 significantly promoted cell proliferation and reduced cell apoptosis, while overexpression of circUBXN7 suppressed cell proliferation and accelerated cell apoptosis in LPS-induced A549 and Beas-2B cells. CircUBXN7 acted as a sponge for miR-622, and miR-622 rescued the effect of circUBXN7 on cell proliferation and apoptosis. We also found that IL6ST was a target gene of miR-622, and the expression of IL6ST was indirectly regulated by circUBXN7. Furthermore, western blotting indicated that the JAK1/STAT3 signaling pathway was involved in the circUBXN7/miR-622/IL6ST axis in LPS-induced A549 and Beas-2B cell injury. Overall, our study suggested that circUBXN7 suppressed cell proliferation and facilitated cell apoptosis by sponging miR-622 and regulating IL6ST to activate the JAK1/STAT3 signaling pathway in LPS-induced A549 and Beas-2B cell injury. CircUBXN7 might therefore be a potential biomarker for ARDS, and dysregulation of circUBXN7 may be involved in the pathogenesis of ARDS.
926
A Flexible Hydrogen-Bonded Organic Framework Constructed from a Tetrabenzaldehyde with a Carbazole N-H Binding Site for the Highly Selective Recognition and Separation of Acetone
Rational design of hydrogen-bonded organic frameworks (HOFs) with multiple functionalities is highly sought after but challenging. Herein, we report a multifunctional HOF (HOF-FJU-2) built from the 4,4',4'',4'''-(9H-carbazole-1,3,6,8-tetrayl)tetrabenzaldehyde molecule, whose tetrabenzaldehyde groups provide the H-bonding interactions and whose carbazole N-H site provides specific recognition of small molecules. The Lewis acidic N-H sites allow HOF-FJU-2 to readily separate acetone from its mixture with another solvent such as methanol, which has a smaller pKa value. The donor (D)-π-acceptor (A) aromatic nature of the organic building molecule endows this HOF with solvent-dependent luminescent/chromic properties, so the column acetone/methanol separation on HOF-FJU-2 can be readily visualized.
927
State-of-the-art W-band extended interaction klystron for the CloudSat program
This paper reviews the National Aeronautics and Space Administration (NASA)/Jet Propulsion Laboratory (JPL) CloudSat Extended Interaction Klystron requirements, design, performance, and the development of a long-life model for the European Space Agency (ESA)/Japan Aerospace Exploration Agency EarthCARE program, plus Ka-band Extended Interaction Klystrons for the NASA/JPL Jupiter Icy Moons Orbiter and ESA European Global Precipitation Measurement missions.
928
Circulating cell-free DNA for target quantification in hematologic malignancies: Validation of a protocol to overcome pre-analytical biases
Circulating tumor DNA (ctDNA) has become the most investigated analyte in blood. It is shed from the tumor into the circulation and represents a subset of the total cell-free DNA (cfDNA) pool released into the peripheral blood. In order to define whether ctDNA could represent a useful tool to monitor hematologic malignancies, we analyzed 81 plasma samples from patients affected by different diseases. The results showed that: (i) the comparison between two different extraction methods, Qiagen (Hilden, Germany) and Promega (Madison, WI), showed no significant differences in cfDNA yield, though the former recovered higher amounts of larger DNA fragments; (ii) cfDNA concentrations showed notable inter-patient variability and differed among diseases: acute lymphoblastic leukemia and chronic myeloid leukemia released higher amounts of cfDNA than chronic lymphocytic leukemia, and diffuse large B-cell lymphoma released higher cfDNA quantities than localized and advanced follicular lymphoma; (iii) focusing on the tumor fraction of cfDNA, the quantity of ctDNA released was insufficient for adequate target quantification for minimal residual disease monitoring; (iv) an amplification system proved to be free of analytical biases and efficient in increasing ctDNA amounts at diagnosis and in follow-up samples, as shown by droplet digital PCR target quantification. The protocol has been validated by quality control rounds involving external laboratories. To conclusively document the feasibility of ctDNA-based monitoring of patients with hematologic malignancies, more post-treatment samples need to be evaluated. This will open new possibilities for ctDNA use in clinical practice.
929
Background subtraction in dynamic scenes using the dynamic principal component analysis
This study presents a foreground detection method capable of robustly estimating the background under the presence of dynamic effects. The key contribution of this study is the use of dynamic principal component analysis to model the serial correlation between successive frames and construct a robust pixel-based background model. The frames are normalised in hue, saturation and value colour space to reduce the effect of illumination changes. To restrict the background model, kernel density estimation is used to identify the distribution of the background time-lagged data matrix, and confidence interval limits are then used to determine the corresponding detection thresholds. The foreground is detected using background subtraction. This method is tested on several common sequences such as CDnet 2014, ETSI 2014 and MULTIVISION 2013. The authors also present comparisons based on quantitative metrics with several state-of-the-art methods. Experimental results show that their method outperforms some state-of-the-art methods and has comparable performance with some depth-based methods.
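A minimal numpy sketch of a PCA background model over a frame-by-pixel data matrix; it simplifies the dynamic PCA above by using a fixed residual quantile in place of the KDE-based confidence limits:

import numpy as np

rng = np.random.default_rng(0)
T, H, W = 120, 32, 32
frames = 0.5 + 0.05*rng.normal(size=(T, H, W))     # synthetic "dynamic" background
frames[-1, 10:16, 10:16] += 0.8                    # foreground object in the last frame

X = frames[:-1].reshape(T - 1, -1)                 # rows: frames, columns: pixels
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 5                                              # retained principal components
B = Vt[:k]                                         # background subspace

f = frames[-1].reshape(-1) - mu
residual = f - B.T @ (B @ f)                       # reconstruction error per pixel
thresh = np.quantile(np.abs(residual), 0.99)       # simple stand-in for KDE thresholds
foreground = (np.abs(residual) > thresh).reshape(H, W)
print(foreground.sum())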
930
Tert-butylhydroquinone abrogates fructose-induced insulin resistance in rats via mitigation of oxidant stress, NFkB-mediated inflammation in the liver but not the skeletal muscle of high fructose drinking rats
The effect of 21% (w/v) fructose drinking water (FDW) on some parameters of metabolic syndrome and on the hepatic and skeletal muscular histology of rats was studied using standard techniques. Twenty male albino rats were divided into four groups of 5 rats each in this in vivo study. Group 1 received distilled water, group 2 received FDW, group 3 received FDW and metformin (300 mg/kg body weight daily, orally), and group 4 received FDW and 1% tert-butylhydroquinone (tBHQ) feed. FDW changed the serum leptin, triacylglycerol, very low-density lipoprotein, and C-reactive protein levels of the rats, inducing hypertriglyceridemia, oxidative stress, and inflammation in the liver (but not the skeletal muscle) as well as insulin resistance, all of which were modulated by metformin and tBHQ, as corroborated by liver and muscle histology. The study reveals the potential of metformin and tBHQ in mitigating hepatic and skeletal muscular morphological changes arising from exposure to high-fructose drinks. PRACTICAL APPLICATIONS: There has been an increase in the global consumption of fructose (either as a sweetener in beverages or in soft and carbonated drinks) in the last few decades, and this has been positively correlated with the global increase in metabolic complications. Regular intake of fructose contributes to the pathogenesis of lipid disorders, oxidant stress, and chronic inflammation, which are linked with the metabolic syndrome (MetS) components (obesity, insulin resistance, and cardiovascular diseases) as well as increased morbidity and mortality. Given that the approaches applied to treat MetS have not been able to fully arrest it, the current study, which showed that tBHQ abrogated fructose-induced insulin resistance, dyslipidemia, and hepatic and skeletal muscular pathology in rats, places tBHQ in the spotlight as a nutraceutical that could be relevant in mitigating high dietary fructose-induced hepatic and skeletal muscular pathology.
931
Controlled Electrophoretic Deposition Strategy of Binder-Free CoFe2O4 Nanoparticles as an Enhanced Electrocatalyst for the Oxygen Evolution Reaction
The kinetically sluggish oxygen evolution reaction (OER) is the main obstacle in electrocatalytic water splitting for the sustainable production of hydrogen energy. Efficient water electrolysis can be ensured by lowering the overpotential of the OER through the development of highly active catalysts. In this study, a controlled electrophoretic deposition strategy was used to develop a binder-free spinel oxide nanoparticle-coated Ni foam as an efficient electrocatalyst for water oxidation. Oxygen evolution was successfully promoted using the CoFe2O4 catalyst and was optimized by modulating the electrophoretic parameters. When optimized, the CoFe2O4 nanoparticles presented more active catalytic sites, superior charge transfer, increased ion diffusion, and favorable reaction kinetics, which led to a small overpotential of 287 mV at a current density of 10 mA cm-2, with a small Tafel slope of 43 mV dec-1. Moreover, the CoFe2O4 nanoparticle electrode exhibited considerable long-term stability over 100 h without detectable activity loss. The results demonstrate promising potential for large-scale water splitting using Earth-abundant oxide materials via a simple and cheap fabrication process.
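As a quick consistency check, the Tafel relation η = η₀ + b·log10(j/j₀) implies that each decade of current density adds one slope's worth of overpotential; extrapolating from the reported values (a hypothetical calculation that assumes Tafel behaviour persists at higher currents):

import numpy as np

b = 43.0             # Tafel slope, mV per decade (reported)
eta_10 = 287.0       # overpotential at 10 mA cm^-2, mV (reported)
j = np.array([10.0, 100.0, 1000.0])       # current densities, mA cm^-2
eta = eta_10 + b*np.log10(j/10.0)         # 287, 330, 373 mV
print(dict(zip(j, eta)))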
932
Isotropic Reconstruction of MR Images Using 3D Patch-Based Self-Similarity Learning
Isotropic three-dimensional (3D) acquisition is a challenging task in magnetic resonance imaging (MRI). Particularly in cardiac MRI, due to hardware and time limitations, current 3D acquisitions are limited by low resolution, especially in the through-plane direction, leading to poor image quality in that dimension. To overcome this problem, super-resolution (SR) techniques have been proposed to reconstruct a single isotropic 3D volume from multiple anisotropic acquisitions. Previously, local regularization techniques such as total variation have been applied to limit noise amplification while preserving sharp edges and small features in the images. In this paper, inspired by the recent progress in patch-based reconstruction, we propose a novel isotropic 3D reconstruction scheme that integrates non-local and self-similarity information from 3D patch neighborhoods. By grouping 3D patches with similar structures, we enforce the natural sparsity of MR images, which can be expressed by a low-rank structure, leading to robust image reconstruction with high signal-to-noise ratio efficiency. An Augmented Lagrangian formulation of the problem is proposed to efficiently decompose the optimization into a low-rank volume denoising and an SR reconstruction. Experimental results on simulations, brain imaging and clinical cardiac MRI demonstrate that the proposed joint SR and self-similarity learning framework outperforms current state-of-the-art methods. The proposed reconstruction of isotropic 3D volumes may be particularly useful for cardiac applications, such as myocardial infarction scar assessment by late gadolinium enhancement MRI.
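A minimal sketch of the low-rank step: similar vectorized 3D patches are stacked into a matrix and denoised by singular-value soft-thresholding (a simple stand-in for the paper's Augmented Lagrangian solver; data below are synthetic):

import numpy as np

def denoise_patch_group(patches, tau):
    """patches: (n_patches, patch_voxels) matrix of similar vectorized 3D patches."""
    U, s, Vt = np.linalg.svd(patches, full_matrices=False)
    s = np.maximum(s - tau, 0.0)                   # soft-threshold singular values
    return (U * s) @ Vt                            # low-rank estimate of the group

rng = np.random.default_rng(0)
clean = np.outer(np.ones(20), rng.normal(size=125))   # 20 similar 5x5x5 patches (rank 1)
noisy = clean + 0.3*rng.normal(size=clean.shape)
den = denoise_patch_group(noisy, tau=2.0)
print(np.linalg.norm(den - clean) < np.linalg.norm(noisy - clean))   # True: noise reduced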
933
Multiple instance learning for malware classification
This work addresses the classification of unknown binaries executed in a sandbox by modeling their interaction with system resources (files, mutexes, registry keys and communication with servers over the network) and the error messages provided by the operating system, using a vocabulary-based method from the multiple instance learning paradigm. It introduces similarities suitable for individual resource types that, combined with an approximate clustering method, efficiently group the system resources and define features directly from data. This approach effectively removes the randomization often employed by malware authors and projects samples into a low-dimensional feature space suitable for common classifiers. An extensive comparison to the state of the art on a large corpus of binaries demonstrates that the proposed solution achieves superior results using only a fraction of the training samples. Moreover, it makes use of a source of information different from most of the prior art, which increases the diversity of tools detecting the malware, hence making detection evasion more difficult.
934
A study of artificial bee colony variants for radar waveform design
Waveform design and diversity is the technology that allows one or more sensors (e.g., radar or radar sensor networks) on board a platform to automatically change operating parameters, e.g., frequency, gain pattern, and pulse repetition frequency, to meet varying environments. Optimization problems in the design of radar waveforms, such as polyphase code design, often trouble designers. This paper proposes to hybridize the migration operator of the biogeography-based optimization (BBO) approach with the artificial bee colony (ABC) algorithm. The migration operator promotes information exchange among solutions in the bee colony, which is useful for exploiting good information from searched solutions. Moreover, three state-of-the-art ABC variants are taken for study. A spread spectrum polyphase code design problem is chosen for the experiments. The proposed ABCBBO algorithm, as well as the three state-of-the-art ABC algorithms, is applied to solve polyphase code design. Results show that ABCBBO presents the overall best performance among the tested algorithms and is also the most reliable one.
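A compact sketch of the hybridization idea: a simplified ABC loop (onlooker phase omitted) whose solution update occasionally applies a BBO-style migration, copying a dimension from one of the fitter solutions; a toy sphere function replaces the polyphase-code objective:

import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sum(x**2)                       # toy objective (sphere), to be minimized
n, dim, limit, iters = 20, 10, 30, 200
X = rng.uniform(-5, 5, size=(n, dim))
fit = np.array([f(x) for x in X])
trials = np.zeros(n)

for _ in range(iters):
    order = np.argsort(fit)                      # ranking used to pick migration sources
    for i in range(n):
        v = X[i].copy()
        j = rng.integers(dim)
        if rng.random() < 0.3:                   # BBO migration: import a dimension
            src = X[order[rng.integers(n // 2)]] # ...from one of the fitter half
            v[j] = src[j]
        else:                                    # standard ABC perturbation
            k = rng.choice([p for p in range(n) if p != i])
            v[j] = X[i, j] + rng.uniform(-1, 1)*(X[i, j] - X[k, j])
        fv = f(v)
        if fv < fit[i]: X[i], fit[i], trials[i] = v, fv, 0
        else: trials[i] += 1
    worst = np.argmax(trials)                    # scout: reinitialize an exhausted source
    if trials[worst] > limit:
        X[worst] = rng.uniform(-5, 5, dim); fit[worst] = f(X[worst]); trials[worst] = 0

print(fit.min())                                 # near 0 after optimization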
935
Techniques and applications for soccer video analysis: A survey
Nowadays, soccer is the most popular sport in our society, followed by millions of people. Consequently, many video analysis applications have been developed in recent years to provide information that can be useful for viewers, referees, coaches and players. Some of these applications are focused on specific tasks, such as detecting players, segmenting the field of play, or registering the broadcast images. On the other hand, there are applications aimed at performing higher-level tasks, such as event detection or game analysis. Here, the most meaningful techniques and applications proposed throughout the last two decades to analyze soccer video sequences are surveyed. The aim of the paper is not to compare the existing techniques, but to present a comprehensive and organized showcase of the state of the art in the field: as such, it provides a thorough review of the existing types of soccer analysis applications and the techniques used in each of them, along with the recent technical trends identified from the most recent works, and discusses the challenges in soccer analysis that still remain unsolved.
936
Levodopa alters resting-state functional connectivity more selectively in Parkinson's disease with freezing of gait
Freezing of gait (FOG) is a debilitating motor symptom of Parkinson's disease (PD). Although PD dopaminergic medication (L-DOPA) seems to generally reduce FOG severity, its effect on the neural mechanisms of FOG remains to be determined. The purpose of this study was to quantify the effect of L-DOPA on brain resting-state functional connectivity in individuals with FOG. Functional magnetic resonance imaging was acquired at rest in 30 individuals living with PD (15 freezers) in the ON- and OFF-medication states. A seed-to-voxel analysis was performed with seeds in the bilateral basal ganglia nuclei, the thalamus and the mesencephalic locomotor region. In freezers, medication-state contrasts revealed numerous changes in resting-state functional connectivity that were not modulated by L-DOPA in non-freezers. In freezers, L-DOPA increased the functional connectivity between the seeds and regions including the posterior parietal, posterior cingulate, motor and medial prefrontal cortices. Comparisons with non-freezers revealed that L-DOPA generally normalizes brain functional connectivity to non-freezer levels but can also increase functional connectivity, possibly compensating for dysfunctional networks in freezers. Our findings suggest that L-DOPA could contribute to better sensorimotor, attentional, response inhibition and limbic processing to prevent FOG when triggers are encountered, but could also contribute to FOG by interfering with the processing capacity of the striatum. In summary, levodopa taken to control PD symptoms induces changes in functional connectivity at rest in freezers only: increases in the functional connectivity of the GPe, GPi, putamen and thalamus with cognitive, sensorimotor and limbic cortical regions of the Interference model were observed, suggesting that levodopa can normalize connections toward non-freezer levels or increase connectivity to compensate for dysfunctional networks.
937
DIRECT Mode Early Decision Optimization Based on Rate Distortion Cost Property and Inter-view Correlation
In this paper, an Efficient DIRECT Mode Early Decision (EDMED) algorithm is proposed for low-complexity multiview video coding. Two phases are included in the proposed EDMED: 1) an early decision on DIRECT mode is made before performing time-consuming motion estimation/disparity estimation, jointly utilizing an adaptive rate-distortion (RD) cost threshold, inter-view DIRECT mode correlation and the coded block pattern; and 2) DIRECT-mode macroblocks falsely rejected in the first phase are then successfully terminated based on a weighted RD cost comparison between the 16x16 and DIRECT modes for further complexity reduction. Experimental results show that the proposed EDMED algorithm achieves 11.76% more complexity reduction than the state-of-the-art SDMET for the temporal views. It also achieves a 50.98% to 81.13% (69.15% on average) reduction in encoding time for inter-view coding, which is 29.31% and 15.03% more than the encoding time reduction achieved by the state-of-the-art schemes. Meanwhile, the average Peak Signal-to-Noise Ratio (PSNR) degrades by only 0.05 dB and the average bit rate decreases by 0.37%, which is negligible.
938
An Improved Decoding Algorithm to Decode Quadratic Residue Codes Based on the Difference of Syndromes
The paper proposes an improved difference-of-syndromes decoding algorithm for decoding quadratic residue codes up to half the minimum distance. Therein, soft-decision information is not essential, but it can be used to speed up the computations. The improvement over the prior art is demonstrated by simulations. Moreover, the proposed algorithm is also compared with the classic Berlekamp-Massey algorithm by taking two BCH codes as examples.
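For readers unfamiliar with syndrome decoding, a generic single-error example with the Hamming(7,4) code illustrates the concept (this is only an illustration of syndromes, not the paper's quadratic residue code algorithm):

import numpy as np

# Parity-check matrix of the Hamming(7,4) code; column j is the binary representation of j
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

r = np.array([1, 0, 1, 1, 0, 1, 1])          # received word with one flipped bit
s = H @ r % 2                                 # syndrome
if s.any():
    pos = int(''.join(map(str, s)), 2) - 1    # syndrome reads off the error position
    r[pos] ^= 1                               # correct the single error
print(r, H @ r % 2)                           # corrected codeword, zero syndrome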
939
Factors Associated with patient satisfaction towards a prison detention Clinic Care among male drug-using inmates
This study assessed patient satisfaction and its associated factors among male drug-using inmates utilizing a prison detention clinic in Taiwan. A cross-sectional design and a structured questionnaire were employed to recruit 580 drug-using inmates into the study. The Patient Satisfaction Questionnaire Short Form (PSQ-18), developed by the RAND Corporation, was used as the basis for the short scale of patient satisfaction, and the research data were analyzed using the SPSS for Windows 20.0 statistical software package. The results showed that the research subjects had low patient satisfaction on all the factors assessed compared with the scale's general norms. Among the original seven satisfaction subscales in this study, the highest score was for financial aspects, and the lowest was for the amount of time spent with doctors. This study also investigated satisfaction with the medical lab exams and the pharmacy at the prison's clinic, and these satisfaction scores were higher than those of the original seven subscales. In multiple logistic regression analyses, the final model indicated that inmates undergoing observed rehabilitation (OR = 13.837, 95% CI = 2.736-69.983) were more likely to be satisfied with prison detention clinic care than those serving prison sentences. Inmates with higher custodial deposits (high vs. low; OR = 1.813, 95% CI = 1.038-3.168) and those whose physical health needs were met (met vs. unmet; OR = 4.872, 95% CI = 2.054-11.560) also showed significantly higher detention clinic care satisfaction. Although the single study setting limits generalizability to all people incarcerated in Taiwan, this study highlights that prison authorities should scrutinize factors associated with detention clinic care satisfaction, such as the type of inmate, economic status in the prison, self-reported health status, and physical health needs, to increase the level of patient satisfaction.
940
MultiCycleNet: Multiple Cycles Self-Boosted Neural Network for Short-term Electric Household Load Forecasting
Household load forecasting plays an important role in future grid planning and operation. However, compared with aggregated load forecasting, household load forecasting faces the challenge of the uncertainty of prolific load profiles. This paper presents a novel multiple cycles self-boosted neural network (MultiCycleNet) framework for household load forecasting, which aims to solve the uncertainty problem of household load profiles through the correlation analysis of electricity consumption patterns in multiple cycles. The basic idea of the proposed framework is that the predictor can learn customers' power consumption patterns better by learning the features and contextual information of relevant load profiles in multiple historical cycles. We use two real-life datasets to evaluate the effectiveness of the proposed framework: 1) the household load consumption dataset from the Low Carbon London project led by United Kingdom (UK) Power Networks, and 2) the UK Domestic Appliance-Level Electricity (UK-DALE) dataset. Experimental results show that the proposed framework is effective and outperforms the state-of-the-art methods by 19.83%, 10.46%, 11.14% and 9.02% in terms of mean squared error, root mean squared error, mean absolute error and mean absolute percentage error, respectively.
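A minimal sketch of the multiple-cycle idea: for each forecast window, aligned load segments from previous cycles (daily and weekly lags assumed here) are assembled as predictor context; the paper's exact cycle set and network are not reproduced:

import numpy as np

def multicycle_features(load, t, horizon=48, cycles=(48, 336)):
    """load: 1-D half-hourly series; t: index of the forecast start.
    Returns aligned windows from previous cycles (1 day = 48, 1 week = 336 steps)."""
    return np.stack([load[t - c : t - c + horizon] for c in cycles])

rng = np.random.default_rng(0)
daily = np.sin(np.linspace(0, 2*np.pi, 48))
load = np.tile(daily, 30) + 0.1*rng.normal(size=48*30)   # 30 days of synthetic load

X = multicycle_features(load, t=48*20)     # shape (2, 48): yesterday + same day last week
naive_forecast = X.mean(axis=0)            # simple cycle-average baseline
actual = load[48*20 : 48*21]
print(np.sqrt(np.mean((naive_forecast - actual)**2)))    # RMSE of the baseline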
941
Breakage Assessment of Lath-Like Crystals in a Novel Laboratory-Scale Agitated Filter Bed Dryer
The agitated filter bed dryer is often the equipment of choice in the pharmaceutical industry for the isolation of potent active pharmaceutical ingredients (APIs) from the mother liquor and subsequent drying through intermittent agitation. The use of an impeller to promote homogeneous drying can lead to undesirable size reduction of the crystal product due to shear deformation induced by the impeller blades during agitation, potentially causing off-specification product and further downstream processing issues. An evaluation of the breakage propensity of crystals during the initial development stage is therefore critical. A new versatile scale-down agitated filter bed dryer (AFBD) has been developed for this purpose. Carbamazepine dihydrate crystals that are prone to breakage have been used as model particles. The extent of particle breakage as a function of impeller rotational speed, size of clearance between the impeller and containing walls and base, and solvent content has been evaluated. A transition of breakage behaviour is observed, where carbamazepine dihydrate crystals first undergo fragmentation along the crystallographic plane [00l]. As the crystals become smaller and more equant, the breakage pattern switches to chipping. Unbound solvent content has a strong influence on breakage, as particles break more readily at high solvent contents. The laboratory-scale instrument developed here provides a tool for comparative assessment of the propensity of particle attrition under agitated filter bed drying conditions.
942
A deep learning based unified framework to detect, segment and recognize irises using spatially corresponding features
This paper proposes a deep learning based unified and generalizable framework for accurate iris detection, segmentation and recognition. The proposed framework firstly exploits state-of-the-art and iris specific Mask R-CNN, which performs highly reliable iris detection and primary segmentation i.e., identifying iris/non-iris pixels, followed by adopting an optimized fully convolutional network (FCN), which generates spatially corresponding iris feature descriptors. A specially designed Extended Triplet Loss (ETL) function is presented to incorporate the bit-shifting and non-iris masking, which are found necessary for learning meaningful and discriminative spatial iris features. Thorough experiments on four publicly available databases suggest that the proposed framework consistently outperforms several classic and state-of-the-art iris recognition approaches. More importantly, our model exhibits superior generalization capability as, unlike popular methods in the literature, it does not essentially require database-specific parameter tuning, which is another key advantage.
943
Fixed Point Iteration Based Algorithm for Asynchronous TOA-Based Source Localization
This paper investigates the problem of source localization using signal time-of-arrival (TOA) measurements in the presence of an unknown start transmission time. Most state-of-the-art methods are based on convex relaxation technologies, which possess a global solution for the relaxed optimization problem. However, the computational complexity of convex optimization-based algorithms is usually large, and a toolbox such as CVX is needed to solve them. Although the two stage weighted least squares (2SWLS) algorithm has very low computational complexity, its estimation performance is susceptible to sensor geometry and the threshold phenomenon. A new algorithm that is directly derived from the maximum likelihood estimator (MLE) is developed. The newly proposed algorithm, named fixed point iteration (FPI), involves only simple calculations, such as addition, multiplication, division, and square roots. Unlike state-of-the-art methods, it requires no matrix inversion operation and can avoid the unstable performance incurred by a singular matrix. The FPI algorithm can be easily extended to the scenario with sensor position errors. Finally, simulation results demonstrate that the proposed algorithm reaches a good balance between computational complexity and localization accuracy.
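The abstract does not spell out the exact recursion. As a hedged illustration, one plausible fixed-point update can be derived from the MLE stationarity conditions for TOA ranges sharing an unknown common offset; the sketch below uses only additions, multiplications, divisions, and square roots, in the spirit described, but it is an assumption rather than the authors' algorithm, and convergence depends on geometry and initialization:

```python
import numpy as np

def fpi_toa(sensors, ranges, x0, iters=200, tol=1e-9):
    """Fixed-point iteration for a source position x with an unknown
    common range offset b (from the unknown start-transmission time).
    Stationarity of sum_i (r_i - ||x - s_i|| - b)^2 gives:
        b = mean_i(r_i - ||x - s_i||)
        x = mean_i(s_i + (r_i - b) * (x - s_i) / ||x - s_i||)"""
    x = np.asarray(x0, float)
    s = np.asarray(sensors, float)
    r = np.asarray(ranges, float)
    for _ in range(iters):
        diff = x - s                            # (N, dim)
        d = np.sqrt((diff ** 2).sum(axis=1))    # distances ||x - s_i||
        b = np.mean(r - d)                      # offset estimate
        u = diff / d[:, None]                   # unit vectors
        x_new = np.mean(s + (r - b)[:, None] * u, axis=0)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x, b

# Illustrative 2-D setup: 5 sensors, true source (2, 3), offset 0.7
rng = np.random.default_rng(0)
s = rng.uniform(-5, 5, size=(5, 2))
x_true, b_true = np.array([2.0, 3.0]), 0.7
r = np.linalg.norm(x_true - s, axis=1) + b_true + 0.01 * rng.standard_normal(5)
print(fpi_toa(s, r, x0=np.zeros(2)))
```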
944
Localized bioelectrical conduction block from radiofrequency gastric ablation persists after healing: safety and feasibility in a recovery model
Gastric ablation has demonstrated potential to induce conduction blocks and correct abnormal electrical activity (i.e., ectopic slow-wave propagation) in acute, intraoperative in vivo studies. This study aimed to evaluate the safety and feasibility of gastric ablation to modulate slow-wave conduction after 2 wk of healing. Chronic in vivo experiments were performed in weaner pigs (n = 6). Animals were randomly divided into two groups: sham-ablation (n = 3, control group; no power delivery, room temperature, 5 s/point) and radiofrequency (RF) ablation (n = 3; temperature-control mode, 65°C, 5 s/point). In the initial surgery, high-resolution serosal electrical mapping (16 × 16 electrodes; 6 × 6 cm) was performed to define the baseline slow-wave activation profile. Ablation (sham/RF) was then performed in the mid-corpus, in a line around the circumferential axis of the stomach, followed by acute postablation mapping. All animals recovered from the procedure, with no sign of perforation or other complications. Two weeks later, intraoperative high-resolution mapping was repeated. High-resolution mapping showed that ablation successfully induced sustained conduction blocks in all cases in the RF-ablation group at both the acute and 2 wk time points, whereas all sham-controls had no conduction block. Histological and immunohistochemical evaluation showed that after 2 wk of healing, the lesions were in the inflammation and early proliferation phase, and interstitial cells of Cajal (ICC) were depleted and/or deformed within the ablation lesions. This safety and feasibility study demonstrates that gastric ablation can safely and effectively induce a sustained localized conduction block in the stomach without disrupting the surrounding slow-wave conduction capability.NEW & NOTEWORTHY Ablation has recently emerged as a tool for modulating gastric electrical activation and may hold interventional potential for disorders of gastric function. However, previous studies have been limited to the acute intraoperative setting. This study now presents the safety of gastric ablation after postsurgical recovery and healing. Localized electrical conduction blocks created by ablation remained after 2 wk of healing, and no perforation or other complications were observed over the postsurgical period.
945
Exploiting stereoscopic disparity for augmenting human activity recognition performance
This work investigates several ways to exploit scene depth information, implicitly available through the modality of stereoscopic disparity in 3D videos, with the purpose of augmenting performance in the problem of recognizing complex human activities in natural settings. The standard state-of-the-art activity recognition algorithmic pipeline consists of the consecutive stages of video description, video representation and video classification. Multimodal, depth-aware modifications to standard methods are proposed and studied, both for video description and for video representation, that indirectly incorporate scene geometry information derived from stereo disparity. At the description level, this is made possible by suitably manipulating video interest points based on disparity data. At the representation level, the followed approach represents each video by multiple vectors corresponding to different disparity zones, resulting in multiple activity descriptions defined by disparity characteristics. In both cases, a scene segmentation is thus implicitly implemented, based on the distance of each imaged object from the camera during video acquisition. The investigated approaches are flexible and able to cooperate with any monocular low-level feature descriptor. They are evaluated using a publicly available activity recognition dataset of unconstrained stereoscopic 3D videos, consisting of extracts from Hollywood movies, and compared both against competing depth-aware approaches and a state-of-the-art monocular algorithm. Quantitative evaluation reveals that some of the examined approaches achieve state-of-the-art performance.
946
Remote Sensing Image Scene Classification Based on an Enhanced Attention Module
Classifying different satellite remote sensing scenes is a very important subtask in the field of remote sensing image interpretation. With the recent development of convolutional neural networks (CNNs), remote sensing scene classification methods have continued to improve. However, the use of recognition methods based on CNNs is challenging because the background of remote sensing image scenes is complex and many small objects often appear in these scenes. In this letter, to improve the feature extraction and generalization abilities of deep neural networks so that they can learn more discriminative features, an enhanced attention module (EAM) was designed. Our proposed method achieved very competitive performance: 94.29% accuracy on NWPU-RESISC45 and state-of-the-art performance on different remote sensing scene recognition data sets. The experimental results show that the proposed method can learn more discriminative features than state-of-the-art methods, and it can effectively improve the accuracy of scene classification for remote sensing images. Our code is available at https://github.com/williamzhao95/Pay-More-Attention.
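The EAM itself is specified in the letter, not here. As a generic, hedged illustration of the channel-attention family to which such modules belong, the following squeeze-and-excitation style sketch rescales feature channels by learned gates (all weights and shapes are made-up placeholders, not the authors' EAM):

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Generic squeeze-and-excitation style channel attention:
    squeeze the spatial dims, excite through a bottleneck MLP,
    then rescale each channel. feature_map has shape (C, H, W)."""
    squeezed = feature_map.mean(axis=(1, 2))        # (C,) global pooling
    hidden = np.maximum(w1 @ squeezed, 0.0)         # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # sigmoid gates, (C,)
    return feature_map * gates[:, None, None]       # channel-wise rescale

rng = np.random.default_rng(0)
C, r = 16, 4                                        # channels, reduction
fmap = rng.standard_normal((C, 8, 8))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
print(channel_attention(fmap, w1, w2).shape)        # (16, 8, 8)
```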
947
The magic of split augmented Lagrangians applied to K-frame-based l0-l2 minimization image restoration
We propose a simple, yet efficient image deconvolution approach, which is formulated as a complementary K-frame-based l0-l2 minimization problem, aiming at benefiting from the advantages of each frame. The problem is solved by borrowing the idea of alternating split augmented Lagrangians. The experimental results demonstrate that our approach achieves competitive performance among state-of-the-art methods.
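No algorithmic detail is given in the abstract. As a minimal illustration of the l0 building block that such splitting schemes alternate with, the sketch below applies the hard-thresholding proximal map in a single orthonormal transform (a DCT), which solves min_x 0.5*||y - x||^2 + lam*||Wx||_0 exactly when W is orthonormal; the paper's complementary K-frame splitting is more involved:

```python
import numpy as np
from scipy.fft import dct, idct

def hard_threshold(c, lam):
    """Proximal map of lam * ||.||_0: keep a coefficient only if
    c**2 / 2 > lam, i.e. |c| > sqrt(2 * lam); zero the rest."""
    return np.where(np.abs(c) > np.sqrt(2.0 * lam), c, 0.0)

def denoise_l0_orthonormal(y, lam):
    """Exact minimizer of 0.5 * ||y - x||^2 + lam * ||W x||_0
    for an orthonormal W (here a type-II orthonormal DCT)."""
    c = dct(y, norm="ortho")                            # analysis: c = W y
    return idct(hard_threshold(c, lam), norm="ortho")   # synthesis

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 256)
clean = np.cos(2 * np.pi * 4 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
print(np.linalg.norm(denoise_l0_orthonormal(noisy, lam=0.05) - clean))
```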
948
Unsupervised Two-Path Neural Network for Cell Event Detection and Classification Using Spatiotemporal Patterns
Automatic event detection in cell videos is essential for monitoring cell populations in biomedicine. Deep learning methods have advantages over traditional approaches for cell event detection due to their ability to capture more discriminative features of cellular processes. Supervised deep learning methods, however, are inherently limited due to the scarcity of annotated data. Unsupervised deep learning methods have shown promise in general (non-cell) videos because they can learn the visual appearance and motion of regularly occurring events. Cell videos, however, can have rapid, irregular changes in cell appearance and motion, such as during cell division and death, which are often the events of most interest. We propose a novel unsupervised two-path input neural network architecture to capture these irregular events with three key elements: 1) a visual encoding path to capture regular spatiotemporal patterns of observed objects with convolutional long short-term memory units; 2) an event detection path to extract information related to irregular events with max-pooling layers; and 3) integration of the hidden states of the two paths to provide a comprehensive representation of the video that is used to simultaneously locate and classify cell events. We evaluated our network in detecting cell division in densely packed stem cells in phase-contrast microscopy videos. Our unsupervised method achieved higher or comparable accuracy to standard and state-of-the-art supervised methods.
949
Using species sensitivity distribution approach to assess the risks of commonly detected agricultural pesticides to Australia's tropical freshwater ecosystems
To assess the potential impacts of agricultural pesticides on tropical freshwater ecosystems, the present study developed temperature-specific, freshwater species protection concentrations (i.e., ecotoxicity threshold values) for 8 pesticides commonly detected in Australia's tropical freshwaters. Because relevant toxicity data for native tropical freshwater species to assess the ecological risks were mostly absent, scientifically robust toxicity data obtained at ≥20 °C were used for ecologically relevant taxonomic groups representing primary producers and consumers. Species sensitivity distribution (SSD) curves were subsequently generated for predicted chronic exposure using Burrlioz 2.0 software with mixed chronic and converted acute data relevant to exposure conditions at ≥20 °C. Ecotoxicity threshold values for tropical freshwater ecosystem protection were generated for ametryn, atrazine, diuron, metolachlor, and imidacloprid (all moderate reliability), as well as simazine, hexazinone, and tebuthiuron (all low reliability). Using these SSD curves, the retrospective risk assessments for recently reported pesticide concentrations highlight that the herbicides ametryn, atrazine, and diuron are of major concern for ecological health in Australia's tropical freshwater ecosystems. The insecticide imidacloprid also appears to pose an emerging threat to the most sensitive species in tropical freshwater ecosystems. The presented temperature-specific approach may be applied to develop water quality guideline values for other environmental contaminants detected in tropical freshwater ecosystems until reliable and relevant toxicity data are generated using representative native species.
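An SSD is fitted to species-level toxicity values, and a protective concentration such as the HC5 (hazardous to 5% of species, protecting 95%) is read from its 5th percentile. A minimal sketch under a lognormal SSD assumption (Burrlioz fits Burr-family distributions, and the toxicity values below are illustrative only):

```python
import numpy as np
from scipy import stats

# Illustrative chronic toxicity endpoints (ug/L) for different species
toxicity = np.array([1.2, 3.5, 5.0, 8.1, 12.0, 20.5, 33.0, 55.0])

# Fit a lognormal SSD: log10 toxicity assumed normal
log_vals = np.log10(toxicity)
mu, sigma = log_vals.mean(), log_vals.std(ddof=1)

# HC5: the concentration below which 95% of species are protected
hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)
print(f"HC5 = {hc5:.2f} ug/L")

# Retrospective risk check: fraction of species potentially affected
measured = 4.0  # an illustrative measured exposure concentration
frac = stats.norm.cdf(np.log10(measured), loc=mu, scale=sigma)
print(f"Potentially affected fraction at {measured} ug/L: {frac:.1%}")
```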
950
Single-walled carbon nanotube conjugated cytochrome c as exogenous nano catalytic medicine to combat intracellular oxidative stress
Mitochondrial dysfunction has been reported to be one of the main causes of many diseases, including cancer, type 2 diabetes, neurodegenerative disorders, cardiac ischemia, sepsis, and muscular dystrophy. Under in vitro conditions, Cytochrome C (Cyt C) maintains mitochondrial homeostasis and stimulates apoptosis, along with being a key participant in the life-supporting function of ATP synthesis. Hence, the medicinal importance of Cyt C as a catalytic defense is immense in various mitochondrial disorders. Here, we have developed a nanomaterial for the exogenous delivery of Cyt C by electrostatically conjugating oxidized single-walled carbon nanotubes with Cyt C (Cyt C@cSWCNT). The chemical and morphological characterization of the developed Cyt C@cSWCNT was done using UV-vis, FTIR, XPS, powder XRD, TGA/DSC, TEM, etc. The developed Cyt C@cSWCNT exhibited bifunctional catalase and peroxidase activity with Km (∼642.7 μM and 351.6 μM) and Vmax (∼0.33 μM/s and 2.62 μM/s) values, respectively. Through this conjugation, Cyt C was also found to retain its catalytic activity even at 60 °C, to show excellent catalytic recyclability (at least up to 3 cycles), and to remain active over a wider pH range (pH 3 to 9). Cyt C@cSWCNT was found to promote intracellular ROS quenching and maintain mitochondrial membrane potential and cellular membrane integrity via Na+/K+ ion homeostasis during H2O2 stress. Overall, the present strategy provides an alternative approach for the exogenous delivery of Cyt C, which can be used as a nano catalytic medicine.
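The Km and Vmax values quoted above are Michaelis-Menten parameters, v = Vmax*[S]/(Km + [S]). A minimal sketch of estimating such parameters by nonlinear least squares (the substrate/rate data are illustrative, not the study's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Initial reaction rate v as a function of substrate concentration s."""
    return vmax * s / (km + s)

# Illustrative substrate concentrations (uM) and measured rates (uM/s)
s = np.array([50, 100, 200, 400, 800, 1600], float)
v = np.array([0.04, 0.08, 0.13, 0.18, 0.23, 0.27])

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=(0.3, 300.0))
print(f"Vmax = {vmax:.3f} uM/s, Km = {km:.1f} uM")
```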
951
Deep representation for partially occluded face verification
By using a deep learning-based strategy, the performance of face recognition tasks has been significantly enhanced. However, the verification and discrimination of faces with occlusions still remain a challenge for most state-of-the-art approaches. Bearing this in mind, we propose a novel convolutional neural network designed specifically for verification between occluded and non-occluded faces of the same identity. It learns both shared and unique features based on a multi-network convolutional neural network architecture. A newly presented joint loss function and the corresponding alternating minimization approach were integrated to implement the training and testing of the presented convolutional neural network. Experimental results on the publicly available datasets (LFW 99.73%, YTF 97.30%, CACD 99.12%) show that the proposed deep representation approach outperforms state-of-the-art face verification techniques.
952
A Cross-Modality Learning Approach for Vessel Segmentation in Retinal Images
This paper presents a new supervised method for vessel segmentation in retinal images. The method recasts the task of segmentation as a problem of cross-modality data transformation from retinal image to vessel map. A wide and deep neural network with strong induction ability is proposed to model the transformation, and an efficient training strategy is presented. Instead of a single label for the center pixel, the network can output the label map of all pixels for a given image patch. Our approach outperforms reported state-of-the-art methods in terms of sensitivity, specificity and accuracy. The result of cross-training evaluation indicates its robustness to the training set. The approach needs no artificially designed features and no preprocessing step, reducing the impact of subjective factors. The proposed method has the potential for application in image diagnosis of ophthalmologic diseases, and it may provide a new, general, high-performance computing framework for image segmentation.
953
Investigation of EEG-Based Biometric Identification Using State-of-the-Art Neural Architectures on a Real-Time Raspberry Pi-Based System
Despite the growing interest in the use of electroencephalogram (EEG) signals as a potential biometric for subject identification and the recent advances in the use of deep learning (DL) models to study neurological signals, such as electrocardiogram (ECG), electroencephalogram (EEG), electroretinogram (ERG), and electromyogram (EMG), there has been a lack of exploration in the use of state-of-the-art DL models for EEG-based subject identification tasks owing to the high variability in EEG features across sessions for an individual subject. In this paper, we explore the use of state-of-the-art DL models such as ResNet, Inception, and EEGNet to realize EEG-based biometrics on the BED dataset, which contains EEG recordings from 21 individuals. We obtain promising results with an accuracy of 63.21%, 70.18%, and 86.74% for Resnet, Inception, and EEGNet, respectively, while the previous best effort reported accuracy of 83.51%. We also demonstrate the capabilities of these models to perform EEG biometric tasks in real-time by developing a portable, low-cost, real-time Raspberry Pi-based system that integrates all the necessary steps of subject identification from the acquisition of the EEG signals to the prediction of identity while other existing systems incorporate only parts of the whole system.
954
Functionalized graphene quantum dots obtained from graphene foams used for highly selective detection of Hg2+ in real samples
Here we report the use of graphene quantum dots (GQDs), obtained from 3D graphene foam and functionalized with 8-hydroxyquinoline (8-HQ), for the sensitive and selective detection of Hg2+ via front-face fluorescence. The large surface area and active groups within the GQDs permitted functionalization with 8-HQ to increase their selectivity toward the analyte of interest. The fluorescence probe follows the Stern-Volmer model, yielding a direct relationship between the degree of quenching and the concentration of the analyte. Diverse parameters, including the pH and the use of masking agents, were optimized in order to improve the selectivity toward Hg2+ down to a limit of detection of 2.4 nmol L-1. It is hereby demonstrated that the functionalized GQDs perform reliably under adverse conditions, such as acidic pH and the presence of a large number of cationic and anionic interferences, for the detection of Hg2+ in real samples. Parallel measurements using cold vapor atomic fluorescence spectrometry also demonstrated an excellent correlation with the front-face fluorescence method applied here for real samples including tap, river, underground, and dam waters.
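The Stern-Volmer model relates the unquenched and quenched intensities to the quencher concentration as F0/F = 1 + K_SV[Q], so a linear fit of F0/F against [Q] yields K_SV and an inverse calibration for unknowns. A minimal sketch (the intensities below are illustrative, not the reported data):

```python
import numpy as np

# Illustrative Hg2+ concentrations (nmol/L) and fluorescence intensities
q = np.array([0.0, 5.0, 10.0, 20.0, 40.0])   # quencher concentration [Q]
f = np.array([1000.0, 952.0, 909.0, 833.0, 714.0])

# Stern-Volmer: F0/F = 1 + Ksv * [Q]  ->  slope of F0/F vs [Q] is Ksv
f0 = f[0]
ksv, intercept = np.polyfit(q, f0 / f, 1)
print(f"Ksv = {ksv:.4f} L/nmol, intercept = {intercept:.3f} (ideally 1)")

# Invert the calibration for an unknown sample
f_unknown = 870.0
q_est = (f0 / f_unknown - 1.0) / ksv
print(f"Estimated [Hg2+] = {q_est:.1f} nmol/L")
```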
955
Relief of CoA sequestration and restoration of mitochondrial function in a mouse model of propionic acidemia
Propionic acidemia (PA, OMIM 606054) is a devastating inborn error of metabolism arising from mutations that reduce the activity of the mitochondrial enzyme propionyl-CoA carboxylase (PCC). The defects in PCC reduce the concentrations of nonesterified coenzyme A (CoASH), thus compromising mitochondrial function and disrupting intermediary metabolism. Here, we use a hypomorphic PA mouse model to test the effectiveness of BBP-671 in correcting the metabolic imbalances in PA. BBP-671 is a high-affinity allosteric pantothenate kinase activator that counteracts feedback inhibition of the enzyme to increase the intracellular concentration of CoA. Liver CoASH and acetyl-CoA are depressed in PA mice and BBP-671 treatment normalizes the cellular concentrations of these two key cofactors. Hepatic propionyl-CoA is also reduced by BBP-671 leading to an improved intracellular C3:C2-CoA ratio. Elevated plasma C3:C2-carnitine ratio and methylcitrate, hallmark biomarkers of PA, are significantly reduced by BBP-671. The large elevations of malate and α-ketoglutarate in the urine of PA mice are biomarkers for compromised tricarboxylic acid cycle activity and BBP-671 therapy reduces the amounts of both metabolites. Furthermore, the low survival of PA mice is restored to normal by BBP-671. These data show that BBP-671 relieves CoA sequestration, improves mitochondrial function, reduces plasma PA biomarkers, and extends the lifespan of PA mice, providing the preclinical foundation for the therapeutic potential of BBP-671.
956
A robust model for salient text detection in natural scene images using MSER feature detector and Grabcut
Visual attention models have been used to recognize the most prominent regions in a natural scene. These regions draw human attention. State-of-the-art models tend to under-predict the significant image regions containing text. These are specifically the regions with the most noteworthy semantic significance in a natural scene and turn out to be useful for saliency-based applications like image classification and captioning. Detecting text or characters as salient regions in an image remains a challenging research problem. Text content within the scene conveys vital information about the image. For example, signboard content conveys important information for visually impaired persons. In this paper, we propose a new model for salient text detection in natural scenes. In the proposed model, we integrate a saliency model with segmentation and text detection approaches in a natural scene to generate text saliency. Experimental outcomes on ROC and DET curves illustrate that the proposed model outperforms state-of-the-art methods for detecting salient text content in natural scenes.
957
Dissected 3D CNNs: Temporal skip connections for efficient online video processing
Convolutional Neural Networks with 3D kernels (3D CNNs) currently achieve state-of-the-art results in video recognition tasks due to their supremacy in extracting spatiotemporal features within video frames. Many successful 3D CNN architectures have successively surpassed state-of-the-art results. However, nearly all of them are designed to operate offline, creating several serious handicaps during online operation. Firstly, conventional 3D CNNs are not dynamic, since their output features represent the complete input clip instead of the most recent frame in the clip. Secondly, they are not temporal resolution-preserving due to their inherent temporal downsampling. Lastly, 3D CNNs are constrained to be used with a fixed temporal input size, limiting their flexibility. In order to address these drawbacks, we propose dissected 3D CNNs, where the intermediate volumes of the network are dissected and propagated over the depth (time) dimension for future calculations, substantially reducing the number of computations at online operation. For action classification, the dissected version of ResNet models performs 77%-90% fewer computations at online operation while achieving ~5% better classification accuracy on the Kinetics-600 dataset than conventional 3D-ResNet models. Moreover, the advantages of dissected 3D CNNs are demonstrated by deploying our approach onto several vision tasks, which consistently improved the performance.
958
AADG: Automatic Augmentation for Domain Generalization on Retinal Image Segmentation
Convolutional neural networks have been widely applied to medical image segmentation and have achieved considerable performance. However, the performance may be significantly affected by the domain gap between training data (source domain) and testing data (target domain). To address this issue, we propose a data manipulation based domain generalization method, called Automated Augmentation for Domain Generalization (AADG). Our AADG framework can effectively sample data augmentation policies that generate novel domains and diversify the training set from an appropriate search space. Specifically, we introduce a novel proxy task maximizing the diversity among multiple augmented novel domains as measured by the Sinkhorn distance in a unit sphere space, making automated augmentation tractable. Adversarial training and deep reinforcement learning are employed to efficiently search the objectives. Quantitative and qualitative experiments on 11 publicly-accessible fundus image datasets (four for retinal vessel segmentation, four for optic disc and cup (OD/OC) segmentation and three for retinal lesion segmentation) are comprehensively performed. Two OCTA datasets for retinal vasculature segmentation are further involved to validate cross-modality generalization. Our proposed AADG exhibits state-of-the-art generalization performance and outperforms existing approaches by considerable margins on retinal vessel, OD/OC and lesion segmentation tasks. The learned policies are empirically validated to be model-agnostic and can transfer well to other models. The source code is available at https://github.com/CRazorback/AADG.
959
Medical Assistance in Dying, Data and Casting Assertions Aside
Various assertions have been made regarding why eligibility for medical assistance in dying (MAiD) should be expanded. Examining these and the studies used to support them should clear the way for thoughtful data monitoring and research into why some patients make death-hastening requests. This will not only improve MAiD practices in Canada, but will also lead to better, more effective palliative care for patients whose suffering leads them to covet death.
960
Dietary Intervention for Preventing Colorectal Cancer: A Practical Guide for Physicians
Colorectal cancer (CRC) is a disease with high prevalence and mortality. Estimated preventability for CRC is approximately 50%, indicating that altering modifiable factors, including diet and body weight, can reduce CRC risk. There is strong evidence that dietary factors including whole grains, high fiber, red and processed meat, and alcohol can affect the risk of CRC. An alternative strategy for preventing CRC is the use of a chemopreventive supplement that provides higher individual exposure to nutrients than what can be obtained from the diet. These include calcium, vitamin D, folate, n-3 polyunsaturated fatty acids, and phytochemicals. Several intervention trials have shown that these dietary chemopreventives have positive protective effects on the development and progression of CRC. Research on chemoprevention with phytochemicals that possess anti-inflammatory and/or anti-oxidative properties is still in the preclinical phase. Intentional weight loss by bariatric surgery has not been effective in decreasing long-term CRC risk. Physicians should provide dietary education to patients at high risk of cancer to help them change their dietary habits and behaviour. An increased understanding of the role of individual nutrients linked to the intestinal micro-environment and stages of carcinogenesis would facilitate the development of the best nutritional formulations for preventing CRC.
961
Impact of Economic Policy Uncertainty and Pandemic Uncertainty on International Tourism: What do We Learn From COVID-19?
Uncertainty is an overarching aspect of life that is particularly pertinent to the COVID-19 pandemic crisis; as seen in the pandemic's rapid worldwide spread, the nature and level of uncertainty have possibly increased due to disconnects across national borders. The entire economy, especially the tourism industry, has been dramatically impacted by COVID-19. In the current study, we explore the impact of economic policy uncertainty (EPU) and pandemic uncertainty (PU) on inbound international tourism by using data gathered from Italy, Spain, and the United States for the years 1995-2021. Using the Quantile on Quantile (QQ) approach, the study confirms that EPU and PU negatively affected inbound tourism in all three countries. Wavelet-based Granger causality further reveals bi-directional causality running from EPU to inbound tourism and unidirectional causality from PU to inbound tourism in the long run. The overall findings show that COVID-19 has had a strong negative effect on tourism. Resilience skills are therefore required to restore a sustainable tourism industry.
962
Inspection and maintenance optimization for heterogeneity units in redundant structure with Non-dominated Sorting Genetic Algorithm III
Redundant structures have been widely deployed to improve system reliability, as when one unit fails, the system can continue to function by using another one. Most existing studies rely on the common assumption that the heterogeneous units are subject to periodic inspections and identical in terms of their aging situations and the numbers of resisted shocks. In practice, a single shock often triggers a unit individually, which intensifies the degradation of that unit and accordingly requires an earlier inspection to ensure its safety. In this study, the stochastic dependency among units is first addressed by introducing a novel activation sequence. Secondly, an adaptive system-level inspection policy is proposed by prioritizing the unit with a worse state. Finally, we take advantage of Monte Carlo methods to simulate the whole process and estimate two objectives, namely the average system unavailability and the maintenance cost, over a designed service time. Numerical examples show that the two objectives are conflicting. The Non-dominated Sorting Genetic Algorithm III (NSGA-III), therefore, has been employed to find solutions that optimally trade off system unavailability and cost, which provides clues for practitioners in decision-making.
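The paper's simulation model is not detailed in the abstract. As a loose, hedged illustration of estimating the two conflicting objectives by Monte Carlo, the toy sketch below evaluates average unavailability and cost for a 1-out-of-2 standby system under a candidate inspection interval; all parameters and modeling choices are made up for illustration:

```python
import numpy as np

def evaluate_policy(interval, horizon=1000.0, n_runs=2000, seed=0):
    """Toy Monte Carlo estimate of (average unavailability, cost)
    for a 1-out-of-2 system inspected every `interval` time units.
    Failure times are exponential; failed units are only found
    and instantly replaced at inspections."""
    rng = np.random.default_rng(seed)
    rate, c_insp, c_repl = 1.0 / 150.0, 10.0, 200.0
    downtime = np.zeros(n_runs)
    cost = np.zeros(n_runs)
    for k in range(n_runs):
        fail = rng.exponential(1.0 / rate, size=2)  # absolute failure times
        t = 0.0
        while t < horizon:
            t_next = min(t + interval, horizon)
            # system is down once BOTH units have failed in this window
            down_from = np.sort(np.minimum(fail, t_next))[1]
            if down_from < t_next:
                downtime[k] += t_next - down_from
            t = t_next
            if t < horizon:                          # inspection epoch
                cost[k] += c_insp
                failed = fail <= t
                cost[k] += c_repl * failed.sum()
                fail[failed] = t + rng.exponential(1.0 / rate,
                                                   size=failed.sum())
    return downtime.mean() / horizon, cost.mean()

# Longer intervals cut inspection cost but raise unavailability
for tau in (25, 50, 100):
    u, c = evaluate_policy(tau)
    print(f"interval={tau:3d}: unavailability={u:.4f}, cost={c:8.1f}")
```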
963
Data-Parallel Hashing Techniques for GPU Architectures
Hash tables are a fundamental data structure for effectively storing and accessing sparse data, with widespread usage in domains ranging from computer graphics to machine learning. This study surveys the state-of-the-art research on data-parallel hashing techniques for emerging massively-parallel, many-core GPU architectures. This survey identifies key factors affecting the performance of different techniques and suggests directions for further research.
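As a flavor of the techniques such surveys cover, many GPU hash tables use open addressing, where each inserting thread claims a slot with an atomic compare-and-swap and linearly probes on conflict. A minimal serial Python sketch of the per-thread probe-and-CAS loop (on a GPU the cas would be a hardware atomic):

```python
EMPTY = None

class OpenAddressingTable:
    """Linear-probing hash table; insert mimics the per-thread
    GPU loop where `_cas` would be an atomic compare-and-swap."""
    def __init__(self, capacity):
        self.keys = [EMPTY] * capacity
        self.vals = [EMPTY] * capacity
        self.capacity = capacity

    def _cas(self, slot, expected, new):
        # Stand-in for an atomic compare-and-swap on the key array.
        if self.keys[slot] is expected:
            self.keys[slot] = new
            return True
        return False

    def insert(self, key, val):
        slot = hash(key) % self.capacity
        for _ in range(self.capacity):          # probe sequence
            if self._cas(slot, EMPTY, key) or self.keys[slot] == key:
                self.vals[slot] = val           # claimed or existing slot
                return True
            slot = (slot + 1) % self.capacity   # linear probing
        return False                            # table full

    def lookup(self, key):
        slot = hash(key) % self.capacity
        for _ in range(self.capacity):
            if self.keys[slot] == key:
                return self.vals[slot]
            if self.keys[slot] is EMPTY:
                return None
            slot = (slot + 1) % self.capacity
        return None

t = OpenAddressingTable(8)
t.insert("gpu", 1); t.insert("hash", 2)
print(t.lookup("gpu"), t.lookup("hash"), t.lookup("missing"))
```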
964
Widespread Gene Expression Divergence in Butterfly Sensory Tissues Plays a Fundamental Role During Reproductive Isolation and Speciation
Neotropical Heliconius butterflies are well known for their intricate behaviors and multiple instances of incipient speciation. Chemosensing plays a fundamental role in the life history of these groups of butterflies and in the establishment of reproductive isolation. However, chemical communication involves synergistic sensory and accessory functions, and it remains challenging to investigate the molecular mechanisms underlying behavioral differences. Here, we examine the gene expression profiles and genomic divergence of three sensory tissues (antennae, legs, and mouthparts) between sexes (females and males) and life stages (different adult stages) in two hybridizing butterflies, Heliconius melpomene and Heliconius cydno. By integrating comparative transcriptomic and population genomic approaches, we found evidence of widespread gene expression divergence, supporting a crucial role of sensory tissues in the establishment of species barriers. We also show that sensory diversification increases in a manner consistent with evolutionary divergence based on comparison with the more distantly related species Heliconius charithonia. The findings of our study strongly support the unique chemosensory function of antennae in all three species, the importance of the Z chromosome in interspecific divergence, and the nonnegligible role of nonchemosensory genes in the divergence of chemosensory tissues. Collectively, our results provide a genome-wide illustration of diversification in the chemosensory system under incomplete reproductive isolation, revealing strong molecular separation in the early stage of speciation. Here, we provide a unique perspective and relevant view of the genetic architecture (sensory and accessory functions) of chemosensing beyond the classic chemosensory gene families, leading to a better understanding of the magnitude and complexity of molecular changes in sensory tissues that contribute to the establishment of reproductive isolation and speciation.
965
Optics and Photonics: Key Enabling Technologies
The fields of optics and photonics have experienced dramatic technical advances over the past several decades and have cemented themselves as key enabling technologies across many different industries. This paper explores past milestones, present state of the art, and future perspectives of several different topics, including: lasers, materials, devices, communications, bioimaging, displays, manufacturing, and industry evolution.
966
Blockchain in global supply chains and cross border trade: a critical synthesis of the state-of-the-art, challenges and opportunities
Blockchain possesses the potential to transform global supply chain management. Gartner predicts that blockchain could track $2 trillion of goods and services in their movement across the globe by 2023, and that blockchain will be a more than $3 trillion business by 2030. Nowadays, a growing number of blockchain initiatives are disrupting traditional business models in every sector. In this paper, we provide a timely and holistic overview of the state-of-the-art, challenges, gaps and opportunities in global supply chain and trade operations for both the private sector and governmental agencies, by synthesising a wide range of resources from business leaders, global international organisations, leading supply chain consulting firms, research articles, trade magazines and conferences. We then identify collaborative schemas and future research directions for industry, government, and academia to jointly work together in ensuring that the full potential of blockchain is unleashed amidst the socioeconomic, geopolitical and technological disruptions that global supply chains and trade are facing.
967
Periodicity-Oriented Data Analytics on Time-Series Data for Intelligence System
Periodic pattern mining models analyze patterns which occur periodically in a time-series database, such as sensor readings of smartphones and/or Internet of Things devices. The extracted patterns can be utilized for risk prediction, system management, and decision-making. In this article, we propose an efficient periodicity-oriented data analytics approach. It deliberately ignores intermediate events by adopting the concept of flexible periodic patterns, so it can be applied to more diverse real-life scenarios and systems. Moreover, the proposed approach adopts a novel symbol-centered data structure instead of the data structures used by state-of-the-art approaches to periodic pattern mining. Performance evaluations on real-life datasets (Diabetes, Oil Prices, and Bike Sharing) under various requirements show that our approach achieves better runtime, memory usage, number of visited patterns, and sensitivity than efficient periodic pattern mining (EPPM) and flexible periodic pattern mining (FPPM), the state-of-the-art approaches in the same field. The experimental results show that the proposed algorithm requires less runtime and less memory than the existing algorithms on most real-life data and requirements.
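To illustrate the notion of a flexible periodic pattern, one that tolerates arbitrary intermediate events between occurrences, the simplified sketch below histograms the gaps between occurrences of a symbol; a gap that recurs often suggests a periodic pattern. This is an illustrative simplification, not the EPPM, FPPM, or proposed algorithm:

```python
from collections import defaultdict

def periodicity_profile(sequence, symbol):
    """Histogram of gaps between consecutive occurrences of `symbol`,
    deliberately ignoring whatever events occur in between."""
    positions = [i for i, s in enumerate(sequence) if s == symbol]
    support = defaultdict(int)
    for a, b in zip(positions, positions[1:]):
        support[b - a] += 1
    return dict(support)

events = list("a..a.ba..a..ba.a")   # '.' and 'b' are intermediate events
print(periodicity_profile(events, "a"))  # {3: 3, 4: 1, 2: 1} -> period 3 dominates
```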
968
MPC-CSAS: Multi-Party Computation for Real-Time Privacy-Preserving Speed Advisory Systems
As a part of Advanced Driver Assistance Systems (ADASs), Consensus-based Speed Advisory Systems (CSAS) have been proposed to recommend a common speed to a group of vehicles for specific application purposes, such as emission control and energy management. With Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) technologies and advanced control theories in place, state-of-the-art CSAS can be designed to obtain an optimal speed in a privacy-preserving and decentralized manner. However, the current method only works for specific cost functions of vehicles, and its execution usually involves many algorithm iterations, leading to long convergence times. Therefore, the state-of-the-art design method is not applicable to a CSAS design which requires real-time decision making. In this article, we address the problem by introducing MPC-CSAS, a Multi-Party Computation (MPC) based design approach for privacy-preserving CSAS. Our proposed method is simple to implement and applicable to all types of cost functions of vehicles. Moreover, our simulation results show that the proposed MPC-CSAS can achieve very promising system performance in just one algorithm iteration without using extra infrastructure for a typical CSAS.
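The core MPC primitive behind such one-shot protocols is typically a privacy-preserving aggregate: each vehicle splits its private value into additive shares so the sum, and hence the mean, can be computed without any party seeing another's raw input. A minimal additive-secret-sharing sketch of that generic primitive (not the authors' exact protocol):

```python
import secrets

PRIME = 2**61 - 1  # arithmetic is done modulo a large prime

def share(value, n_parties):
    """Split `value` into n additive shares that sum to it mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def secure_mean(private_values):
    """Each party shares its value; each party sums the shares it holds.
    No single party ever observes another party's raw input."""
    n = len(private_values)
    all_shares = [share(v, n) for v in private_values]
    # party j locally adds up the j-th share of every input ...
    partial_sums = [sum(s[j] for s in all_shares) % PRIME for j in range(n)]
    # ... and only these partial sums are revealed and combined
    total = sum(partial_sums) % PRIME
    return total / n

speeds = [63, 58, 61, 60]   # private preferred speeds (km/h)
print(secure_mean(speeds))  # 60.5, computed without revealing inputs
```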
969
Camouflaged Instance Segmentation In-the-Wild: Dataset, Method, and Benchmark Suite
This paper pushes the envelope on decomposing camouflaged regions in an image into meaningful components, namely, camouflaged instances. To promote the new task of camouflaged instance segmentation of in-the-wild images, we introduce a dataset, dubbed CAMO++, that extends our preliminary CAMO dataset (camouflaged object segmentation) in terms of quantity and diversity. The new dataset substantially increases the number of images with hierarchical pixel-wise ground truths. We also provide a benchmark suite for the task of camouflaged instance segmentation. In particular, we present an extensive evaluation of state-of-the-art instance segmentation methods on our newly constructed CAMO++ dataset in various scenarios. We also present a camouflage fusion learning (CFL) framework for camouflaged instance segmentation to further improve the performance of state-of-the-art methods. The dataset, model, evaluation suite, and benchmark will be made publicly available on our project page.
970
Fluorescent melamine-formaldehyde/polyamine coatings for microcapsules enabling their tracking in composites
This study aimed to develop fluorescent melamine-formaldehyde (MF)/polyamine coatings for the labelling of prefabricated microcapsules and their tracking in composites. The composition of the fluorescent MF coatings was studied by FTIR spectroscopy, thermogravimetric analysis, and elemental analysis. The characteristics of the coatings and their deposition on different surfaces were investigated using optical and fluorescence microscopy and fluorescence spectroscopy. MF prepolymers were polymerised with tri- and polyamines, yielding fluorescent coatings without the addition of fluorescent dyes. Both MF/poly(ethylene imine) and MF/poly(vinyl amine) (PVAm) coated glass beads showed maximum fluorescence at an excitation wavelength of λmax = 360 nm, with emission maxima at λmax = 490 nm and λmax = 410 nm, respectively. The MF/PVAm polymer was coated on diuron-poly(methyl methacrylate) microcapsules and tracked in highly filled composites (water-based plaster/paint) to show its applicability. MF/polyamine coatings were identified as promising materials for the fluorescent labelling of prefabricated microcapsules.
971
Role of Vitamin D Deficiency in Increased Susceptibility to Respiratory Infections Among Children: A Systematic Review
Vitamin D has several roles in the immune system besides its effects on bone metabolism. Acute respiratory infections are common infections in children. Severe lower respiratory tract infections (LRTIs) even cause death in children, especially in those less than five years of age. Our study aims to examine whether children with vitamin D deficiency are susceptible to respiratory infections and to study the association between vitamin D deficiency and the severity of respiratory infections. We comprehensively searched research articles in PubMed, ScienceDirect, and Cochrane library databases. The main keywords were vitamin D deficiency, respiratory infections, and children. We used Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines to conduct this systematic review. The initial search showed 16,120 papers. A meticulous screening of research articles using the eligibility criteria and quality appraisal tools was done. Finally, 10 research articles qualified for this systematic review, including eight case-control studies, one randomized controlled trial (RCT), and one cohort study. Seven of 10 research studies reviewed found that children with low vitamin D levels are susceptible to respiratory infections. Five studies discussed the severity of respiratory infections and low vitamin D levels. This systematic review concluded that children with low vitamin D levels are prone to developing respiratory infections. But we could not find a conclusive association between the severity of respiratory infections and low vitamin D levels.
972
High-Performance 1-D and 2-D Inverse DWT 5/3 Filter Architectures for Efficient Hardware Implementation
This paper presents high-performance and memory-efficient hardware architectures for one-dimensional (1-D) and two-dimensional (2-D) inverse discrete wavelet transform (DWT) 5/3 filters. The proposed 1-D filter architecture requires 33% less memory resources and 17% less logic resources than the best state-of-the-art solutions. The proposed 1-D filter architecture has 100% hardware utilization, which is defined as the ratio of the actual computation time to the total processing time, both expressed in numbers of clock cycles. It allows a 7% higher operational frequency and simultaneously has the lowest total power dissipation in comparison with the best state-of-the-art solutions. The proposed 2-D inverse DWT 5/3 architecture, based on the proposed 1-D inverse DWT filter design, provides medium total computing time and output latency, but outperforms the best state-of-the-art solutions by at least 20% in terms of required memory capacity.
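For reference, the reversible 5/3 (LeGall) inverse transform is usually expressed as two integer lifting steps: an "undo update" on the even samples followed by an "undo predict" on the odd samples. A minimal 1-D software sketch with a simple symmetric boundary convention (this illustrates the standard lifting equations, not the paper's hardware datapath):

```python
import numpy as np

def inverse_dwt53_1d(approx, detail):
    """Inverse 1-D reversible 5/3 DWT via lifting (even-length signal).
    s = approx (low-pass), d = detail (high-pass), both length N/2.
        even[n] = s[n] - floor((d[n-1] + d[n] + 2) / 4)    # undo update
        odd[n]  = d[n] + floor((even[n] + even[n+1]) / 2)  # undo predict
    Symmetric extension supplies d[-1] and even[N/2]."""
    s = np.asarray(approx, np.int64)
    d = np.asarray(detail, np.int64)
    d_ext = np.concatenate(([d[0]], d))            # d[-1] := d[0]
    even = s - ((d_ext[:-1] + d_ext[1:] + 2) >> 2)
    e_ext = np.concatenate((even, [even[-1]]))     # even[N/2] := even[N/2-1]
    odd = d + ((e_ext[:-1] + e_ext[1:]) >> 1)
    x = np.empty(2 * len(s), np.int64)
    x[0::2], x[1::2] = even, odd
    return x

def forward_dwt53_1d(x):
    """Forward transform, used here only to check perfect reconstruction."""
    x = np.asarray(x, np.int64)
    even, odd = x[0::2], x[1::2]
    e_ext = np.concatenate((even, [even[-1]]))
    d = odd - ((e_ext[:-1] + e_ext[1:]) >> 1)       # predict
    d_ext = np.concatenate(([d[0]], d))
    s = even + ((d_ext[:-1] + d_ext[1:] + 2) >> 2)  # update
    return s, d

sig = np.array([3, 7, 1, 8, 2, 9, 4, 6])
print(np.array_equal(inverse_dwt53_1d(*forward_dwt53_1d(sig)), sig))  # True
```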
973
Simultaneously Tolerate Thermal and Process Variations Through Indirect Feedback Tuning for Silicon Photonic Networks
Silicon photonics is the leading candidate technology for high-speed and low-energy-consumption networks. Thermal and process variations are the two main challenges to achieving high-reliability photonic networks. Thermal variation is due to heat issues created by the application, floorplan, and environment, while process variation is caused by fabrication variability in the deposition, masking, exposition, etching, and doping. Tuning techniques are then required to overcome the impact of the variations and efficiently stabilize the performance of silicon photonic networks. We extend our previous optical switch integration model, BOSIM, to support the variation and thermal analyses. Based on device properties, we propose indirect feedback tuning (IFT) to simultaneously alleviate thermal and process variations. IFT can improve the BER of silicon photonic networks to 10^-9 under different variation situations. Compared to state-of-the-art techniques, IFT can achieve an up to 1.52 × 10^8 times bit-error-rate improvement and 4.11X better heater energy efficiency. Indirect feedback does not require high-speed optical signal detection, and thus the circuit design of IFT saves up to 61.4% of the power and 51.2% of the area compared to state-of-the-art designs.
974
4D Printing: dawn of an emerging technology cycle
Purpose - The purpose of this article is to review state-of-the-art developments in four-dimensional (4D) printing, discuss what it is, investigate new applications that have been discovered and suggest its future impact. Design/methodology/approach - The article clarifies the definition of 4D printing and describes notable examples covering material science, equipment and applications. Findings - This article highlights an emerging technology cycle where 4D printing research has gained traction within additive manufacturing. Stimuli-responsive materials can be programmed and printed to enable pre-determined reactions when subjected to external stimuli. Originality/value - This article reviews state-of-the-art developments in 4D printing, discusses what it is, investigates new applications that have been discovered and suggests its future impact.
975
Reversible Transformation between Isolated Skyrmions and Bimerons
Skyrmions and bimerons are versatile topological spin textures that can be used as information bits for both classical and quantum computing. The transformation between isolated skyrmions and bimerons is an essential operation for computing architecture based on multiple different topological bits. Here we report the creation of isolated skyrmions and their subsequent transformation to bimerons by harnessing the electric current-induced Oersted field and temperature-induced perpendicular magnetic anisotropy variation. The transformation between skyrmions and bimerons is reversible, which is controlled by the current amplitude and scanning direction. Both skyrmions and bimerons can be created in the same system through the skyrmion-bimeron transformation and magnetization switching. Deformed skyrmion bubbles and chiral labyrinth domains are found as nontrivial intermediate transition states. Our results may provide a unique way for building advanced information-processing devices using different types of topological spin textures in the same system.
976
A CT-Based Automated Algorithm for Airway Segmentation Using Freeze-and-Grow Propagation and Deep Learning
Chronic obstructive pulmonary disease (COPD) is a common lung disease, and quantitative CT-based bronchial phenotypes are of increasing interest as a means of exploring COPD sub-phenotypes, establishing disease progression, and evaluating intervention outcomes. Reliable, fully automated, and accurate segmentation of pulmonary airway trees is critical to such exploration. We present a novel approach of multi-parametric freeze-and-grow (FG) propagation which starts with a conservative segmentation parameter and captures finer details through iterative parameter relaxation. First, a CT intensity-based FG algorithm is developed and applied for airway tree segmentation. A more efficient version is produced using deep learning methods generating airway lumen likelihood maps from CT images, which are input to the FG algorithm. Both CT intensity- and deep learning-based algorithms are fully automated, and their performance, in terms of repeat scan reproducibility, accuracy, and leakages, is evaluated and compared with results from several state-of-the-art methods including an industry-standard one, where segmentation results were manually reviewed and corrected. Both new algorithms show a reproducibility of 95% or higher for total lung capacity (TLC) repeat CT scans. Experiments on TLC CT scans from different imaging sites at standard and low radiation dosages show that both new algorithms outperform the other methods in terms of leakages and branch-level accuracy. Considering the performance and execution times, the deep learning-based FG algorithm is a fully automated option for large multi-site studies.
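The FG algorithm itself is specified in the paper; as a loose, hedged illustration of the general idea of iterative parameter relaxation, the sketch below starts from a conservative likelihood threshold, freezes what is already segmented, and grows only into connected regions that pass progressively relaxed thresholds (the function and parameters are illustrative, not the authors' implementation):

```python
import numpy as np
from scipy import ndimage

def freeze_and_grow_like(likelihood, seed_mask, thresholds):
    """Conceptual sketch of iterative parameter relaxation: keep
    ('freeze') the current segmentation and grow only into voxels
    connected to it that pass each progressively relaxed threshold."""
    segmented = seed_mask.copy()
    for thr in thresholds:                     # e.g. conservative -> relaxed
        candidate = (likelihood >= thr) | segmented
        labels, _ = ndimage.label(candidate)
        keep = np.unique(labels[segmented])    # components touching the
        keep = keep[keep != 0]                 # frozen segmentation
        segmented = np.isin(labels, keep)
    return segmented

rng = np.random.default_rng(0)
vol = ndimage.gaussian_filter(rng.random((32, 32, 32)), 2)
seed = np.zeros_like(vol, bool)
seed[16, 16, 16] = True
seg = freeze_and_grow_like(vol, seed, thresholds=[0.55, 0.52, 0.50])
print(seg.sum(), "voxels segmented")
```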
977
Sketched Subspace Clustering
The immense amount of daily generated and communicated data presents unique challenges in their processing. Clustering, the grouping of data without the presence of ground-truth labels, is an important tool for drawing inferences from data. Subspace clustering (SC) is a relatively recent method that is able to successfully classify nonlinearly separable data in a multitude of settings. In spite of their high clustering accuracy, SC methods incur prohibitively high computational complexity when processing large volumes of high-dimensional data. Inspired by random sketching approaches for dimensionality reduction, the present paper introduces a randomized scheme for SC, termed Sketch-SC, tailored for large volumes of high-dimensional data. Sketch-SC accelerates the computationally heavy parts of state-of-the-art SC approaches by compressing the data matrix across both dimensions using random projections, thus enabling fast and accurate large-scale SC. Performance analysis as well as extensive numerical tests on real data corroborate the potential of Sketch-SC and its competitive performance relative to state-of-the-art scalable SC approaches.
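The "random sketching" ingredient is a Johnson-Lindenstrauss style projection: multiplying the data matrix by a random matrix shrinks a dimension while approximately preserving pairwise geometry. A minimal sketch of compressing a data matrix before any clustering step (the subsequent SC solver is omitted):

```python
import numpy as np

def gaussian_sketch(X, k, seed=0):
    """Compress the ambient dimension of X (rows) from D to k
    with a scaled Gaussian random projection."""
    rng = np.random.default_rng(seed)
    D = X.shape[0]
    S = rng.standard_normal((k, D)) / np.sqrt(k)  # JL sketching matrix
    return S @ X

# 500 points in dimension 2000, lying near a 5-dim subspace
rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 5)) @ rng.standard_normal((5, 500))
Xs = gaussian_sketch(X, k=50)

# Pairwise distances are roughly preserved after sketching
i, j = 0, 1
d_orig = np.linalg.norm(X[:, i] - X[:, j])
d_sk = np.linalg.norm(Xs[:, i] - Xs[:, j])
print(f"original {d_orig:.2f} vs sketched {d_sk:.2f}")
```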
978
Factor structure of the 10-item CES-D Scale among patients with persistent COVID-19
The presence of persistent coronavirus disease 2019 (COVID-19) might be associated with significant levels of psychological distress that would meet the threshold for clinical relevance. The Center for Epidemiologic Studies Depression Scale (CES-D) version 10 has been widely used in assessing psychological distress among general and clinical populations from different cultural backgrounds. To our knowledge, however, researchers have not yet validated these findings among patients with persistent COVID-19. A cross-sectional validation study was conducted with 100 patients from the EXER-COVID project (69.8% women; mean (±standard deviation) age: 47.4 ± 9.5 years). Confirmatory factor analyses (CFAs) were performed on the 10-item CES-D to test four model fits: (a) unidimensional model, (b) two-factor correlated model, (c) three-factor correlated model, and (d) second-order factor model. The diagonal-weighted least-squares estimator was used, as it is commonly applied to latent variable models with ordered categorical variables. The reliability indices of the 10-item CES-D in patients with persistent COVID-19 were as follows: depressive affect factor (α_Ord = 0.82; ω_u-cat = 0.78), somatic retardation factor (α_Ord = 0.78; ω_u-cat = 0.56), and positive affect factor (α_Ord = 0.56; ω_u-cat = 0.55). The second-order model fit showed good Omega reliability (ω_ho = 0.87). Regarding the CFAs, the unidimensional-factor model showed poor goodness of fit, especially in the residuals analysis (root mean square error of approximation [RMSEA] = 0.081 [95% confidence interval, CI = 0.040-0.119]; standardized root mean square residual [SRMR] = 0.101). The two-factor correlated model, three-factor correlated model, and second-order factor model showed adequate goodness of fit, and the χ² difference test (ΔX²) did not show significant differences between the goodness of fit for these models (ΔX² = 4.1128; p = 0.127). Several indices showed a good fit for the three-factor correlated model: goodness-of-fit index = 0.974, comparative fit index = 0.990, relative noncentrality index = 0.990, and incremental fit index = 0.990, all above 0.95, the traditional cut-off establishing adequate fit. In addition, RMSEA = 0.049 (95% CI = 0.000-0.095), where an RMSEA < 0.06-0.08 indicates an adequate fit. Item loadings on the factors were statistically significant (λ_j ≥ 0.449; p's < 0.001), indicating that the items loaded correctly on the corresponding factors, as were the relationships between factors (φ ≥ 0.382; p's ≤ 0.001). To our knowledge, this is the first study to provide validity and reliability evidence for the 10-item CES-D in a sample of Spanish patients with persistent COVID-19. The validation and reliability of this short screening tool increase the chance of obtaining complete data in a particular patient profile with increased fatigue and brain fog that limit patients' capacity to complete questionnaires.
979
3D Object Detection for Autonomous Driving: A Survey
Autonomous driving is regarded as one of the most promising remedies to shield human beings from severe crashes. To this end, 3D object detection serves as the core basis of the perception stack, especially for the sake of path planning, motion prediction, and collision avoidance. Taking a quick glance at the progress we have made, we attribute challenges to visual appearance recovery in the absence of depth information from images, representation learning from partially occluded unstructured point clouds, and semantic alignments over heterogeneous features from cross modalities. Despite existing efforts, 3D object detection for autonomous driving is still in its infancy. Recently, a large body of literature has emerged to address this 3D vision task. Nevertheless, few investigations have looked into collecting and structuring this growing knowledge. We therefore aim to fill this gap with a comprehensive survey, encompassing all the main concerns including sensors, datasets, performance metrics and the recent state-of-the-art detection methods, together with their pros and cons. Furthermore, we provide quantitative comparisons with the state of the art. A case study on fifteen selected representative methods is presented, involving runtime analysis, error analysis, and robustness analysis. Finally, we provide concluding remarks after an in-depth analysis of the surveyed works and identify promising directions for future work.
980
SCGAN: Saliency Map-Guided Colorization With Generative Adversarial Network
Given a grayscale photograph, the colorization system estimates a visually plausible colorful image. Conventional methods often use semantics to colorize grayscale images. However, in these methods, only classification semantic information is embedded, resulting in semantic confusion and color bleeding in the final colorized image. To address these issues, we propose a fully automatic Saliency Map-guided Colorization with Generative Adversarial Network (SCGAN) framework. It jointly predicts the colorization and saliency map to minimize semantic confusion and color bleeding in the colorized image. Since the global features from pre-trained VGG-16-Gray network are embedded to the colorization encoder, the proposed SCGAN can be trained with much less data than state-of-the-art methods to achieve perceptually reasonable colorization. In addition, we propose a novel saliency map-based guidance method. Branches of the colorization decoder are used to predict the saliency map as a proxy target. Moreover, two hierarchical discriminators are utilized for the generated colorization and saliency map, respectively, in order to strengthen visual perception performance. The proposed system is evaluated on ImageNet validation set. Experimental results show that SCGAN can generate more reasonable colorized images than state-of-the-art techniques.
981
A multi-strategy approach to biological named entity recognition
Recognizing and disambiguating bio-entity (gene, protein, cell, etc.) names are very challenging tasks, as some biological databases can be outdated, names may not be normalized, abbreviations are used, syntax and word order are modified, etc. Thus, the same bio-entity might be written in different ways, making search a key obstacle, as much candidate relevant literature containing those entities might not be found. As a consequence, the same protein may have to be searched for under different names, or an established protein name may be reused to name a new protein with completely different features; hence, named-entity recognition methods are required. In this paper, we present a bio-entity recognition model that combines different classification methods and incorporates simple pre-processing tasks for bio-entity (gene and protein) recognition. Linguistic pre-processing and feature representation for training and testing are observed to positively affect the overall performance of the method, showing promising results. Unlike some state-of-the-art methods, the approach does not require additional knowledge bases or specific-purpose tasks for post-processing, which makes it more appealing. Experiments showing the promise of the model compared to other state-of-the-art methods are discussed.
982
Non-Local Low-Rank Cube-Based Tensor Factorization for Spectral CT Reconstruction
Spectral computed tomography (CT) reconstructs material-dependent attenuation images from the projections of multiple narrow energy windows, which is meaningful for material identification and decomposition. Unfortunately, the multi-energy projection datasets usually have low signal-to-noise ratios (SNR). Very recently, a spatial-spectral cube matching frame (SSCMF) was proposed to exploit the non-local spatial-spectral similarities for spectral CT. This method constructs a group by clustering a series of non-local spatial-spectral cubes. The small size of the spatial patches in such a group prevents the SSCMF from fully encoding the sparsity and low-rank properties. The hard-thresholding and collaborative filtering in the SSCMF also make it difficult to recover image features and spatial edges. Moreover, since all the steps operate on a 4-D group, the huge computational cost and memory load might not be affordable in practice. To avoid these limitations and further improve image quality, we first formulate a non-local cube-based tensor, instead of a group, to encode the sparsity and low-rank properties. Then, the Kronecker-basis-representation tensor factorization is employed as a new regularizer in a basic spectral CT reconstruction model to enhance image feature extraction and spatial edge preservation, yielding a non-local low-rank cube-based tensor factorization (NLCTF) method. Finally, the split-Bregman method is adopted to solve the NLCTF model. Both numerical simulations and preclinical mouse studies are performed to validate and evaluate the NLCTF algorithm. The results show that the NLCTF method outperforms the other state-of-the-art competing algorithms.
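The NumPy sketch below illustrates only the first step, building a non-local cube-based tensor by similarity matching, and then applies a generic truncated higher-order SVD as a stand-in low-rank step; the paper's actual regularizer is a Kronecker-basis-representation factorization solved inside a split-Bregman loop, which is not reproduced here:

```python
# Illustrative sketch: gather non-local similar cubes from a multi-energy
# volume into a 3rd-order tensor, then denoise with a truncated HOSVD.
import numpy as np

def gather_similar_cubes(vol, ref, size=8, k=16):
    """vol: (H, W, E) multi-energy volume; ref: top-left corner of ref cube."""
    H, W, E = vol.shape
    r0, c0 = ref
    ref_cube = vol[r0:r0 + size, c0:c0 + size, :]
    scored = []
    for r in range(0, H - size + 1, size):
        for c in range(0, W - size + 1, size):
            cube = vol[r:r + size, c:c + size, :]
            scored.append((np.linalg.norm(cube - ref_cube), cube))
    scored.sort(key=lambda t: t[0])
    # Stack the k most similar cubes: tensor of shape (size*size, E, k)
    return np.stack([c.reshape(-1, E) for _, c in scored[:k]], axis=2)

def truncated_hosvd(T, ranks):
    """Project each tensor mode onto its leading singular vectors."""
    core = T
    for mode, r in enumerate(ranks):
        mat = np.moveaxis(core, mode, 0).reshape(core.shape[mode], -1)
        U, _, _ = np.linalg.svd(mat, full_matrices=False)
        proj = U[:, :r] @ U[:, :r].T
        core = np.moveaxis((proj @ mat).reshape(
            (core.shape[mode],) + tuple(np.delete(core.shape, mode))), 0, mode)
    return core

vol = np.random.rand(64, 64, 8)
T = gather_similar_cubes(vol, ref=(0, 0))
T_lowrank = truncated_hosvd(T, ranks=(16, 4, 4))
```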
983
DCBT-Net: Training Deep Convolutional Neural Networks With Extremely Noisy Labels
Obtaining data with correct labels is crucial for attaining state-of-the-art performance with Convolutional Neural Network (CNN) models. However, labeling datasets is a significantly time-consuming and expensive process because it requires expert knowledge in a particular domain. Therefore, real-life datasets often contain incorrect labels due to the involvement of non-experts in the data-labeling process; consequently, incorrectly labeled data are common in the wild. Although the issue of poorly labeled datasets has been studied, the existing methods are complex and difficult to reproduce. Thus, in this study, we propose a simpler algorithm called "Deep Clean Before Training Net" (DCBT-Net) that cleans wrongly labeled data points using the eigenvalues of the Laplacian matrix obtained from similarities between the data samples. The cleaned data are then trained with a deep CNN (DCNN) to attain state-of-the-art results. This system achieved better performance than existing approaches. In the conducted experiments, the performance of the DCBT-Net was tested on three publicly available datasets: the Modified National Institute of Standards and Technology (MNIST) database of handwritten digits, the Canadian Institute for Advanced Research (CIFAR) dataset, and the WebVision1000 dataset. The proposed method achieved better results on several evaluation metrics than existing state-of-the-art methods. Specifically, the DCBT-Net attained average increases in accuracy of 15%, 20%, and 3% on the MNIST database, the CIFAR-10 dataset, and the WebVision dataset, respectively. The proposed approach also demonstrated better results on the specificity, sensitivity, positive predictive value, and negative predictive value metrics.
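A hedged sketch of the spectral cleaning idea (our reading of the abstract, not the published algorithm): embed samples via the low eigenvectors of a similarity-graph Laplacian, cluster the embedding, and flag labels that disagree with their cluster's majority:

```python
# Spectral label-cleaning sketch: similarity graph -> Laplacian eigenvectors
# -> clustering -> flag samples whose label disagrees with their cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def flag_noisy_labels(X, y, n_classes):
    W = rbf_kernel(X)                      # pairwise sample similarities
    D = np.diag(W.sum(axis=1))
    L = D - W                              # unnormalized graph Laplacian
    _, eigvecs = np.linalg.eigh(L)
    embedding = eigvecs[:, :n_classes]     # smallest-eigenvalue eigenvectors
    clusters = KMeans(n_clusters=n_classes, n_init=10).fit_predict(embedding)
    suspect = np.zeros(len(y), dtype=bool)
    for c in range(n_classes):
        members = clusters == c
        majority = np.bincount(y[members]).argmax()
        suspect |= members & (y != majority)
    return suspect  # True where the given label looks wrong

X = np.random.rand(100, 16)
y = np.random.randint(0, 3, size=100)
print(flag_noisy_labels(X, y, n_classes=3).sum(), "suspect labels")
```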
984
Impact of Government Support on Performing Artists' Job and Life Satisfaction: Findings from The National Survey in Korea
In this study, we propose motives that can help increase the creative activities of Korean performing artists and discuss the policy implications for the sustainable management of the Korean performing arts. First, we investigate the characteristics of Korean artists who receive subsidies as a form of government support for artistic activities. Second, we examine whether receipt of such grants influences the artists' job and life satisfaction. Using a logistic model, we reconstructed the "2015 Survey Report on Artists & Activities" and tested the research hypotheses. We first considered subsidies that could directly impact artists' income and activities and then verified whether subsidies influence artists' job and life satisfaction. The findings suggest, first, that art grants should be provided to help artists produce creative and experimental works, and second, that artists' subsidies should be expanded to enhance artists' quality of life and the sustainability of artistic activities. Above all, subsidy support for artists showed that art can be regarded as a public good, a common asset of society.
985
Augmented reality application assessment for disseminating rock art
Currently, marker-based tracking is the most widely used method for developing augmented reality (AR) applications (apps). However, this method cannot be applied in some complex and outdoor settings, such as prehistoric rock art sites, where the placement of markers is restricted on site. Thus, natural feature tracking methods have to be used. There is a wide range of libraries for developing AR apps based on natural feature tracking. In this paper, a comparative study of the Vuforia and ARToolKit libraries is carried out, analysing factors such as distance, occlusion and lighting conditions that affect the user experience in both indoor and outdoor environments, and ultimately the app developer. Our analysis confirms that Vuforia's indoor user experience is better, faster and flicker-free when the images are properly enhanced, but it does not work properly on site. Therefore, the development of AR apps for complex outdoor environments such as rock art sites should be carried out with ARToolKit.
986
A novel C-type lectin from Trichinella spiralis mediates larval invasion of host intestinal epithelial cells
The aim of this study was to investigate the characteristics of a novel C-type lectin from Trichinella spiralis (TsCTL) and its role in larval invasion of intestinal epithelial cells (IECs). TsCTL has a carbohydrate recognition domain (CRD) typical of C-type lectins. The full-length TsCTL cDNA sequence was cloned and expressed in Escherichia coli BL21. The results of qPCR, Western blotting and immunofluorescence assays (IFAs) showed that TsCTL is a surface and secretory protein that is highly expressed at the T. spiralis intestinal infective larva (IIL) stage and is primarily located at the cuticle, stichosome and embryos of the parasite. rTsCTL specifically bound to IECs, and the binding site was localized in the IEC nucleus and cytoplasm. The IFA results showed that natural TsCTL is secreted and binds to the enteral epithelium during the intestinal stage of T. spiralis infection. rTsCTL had a haemagglutinating effect on murine erythrocytes, while mannose inhibited this agglutinating effect. rTsCTL accelerated larval intrusion into the IECs, whereas anti-rTsCTL antibodies and mannose significantly impeded larval intrusion in a dose-dependent manner. These results indicate that TsCTL specifically binds to IECs and promotes larval invasion of the intestinal epithelium, and that it might be a potential target of vaccines against T. spiralis enteral stages.
987
A 128-Channel Extreme Learning Machine-Based Neural Decoder for Brain Machine Interfaces
Currently, state-of-the-art motor intention decoding algorithms in brain-machine interfaces are mostly implemented on a PC and consume a significant amount of power. This paper presents a machine learning coprocessor in 0.35-μm CMOS for motor intention decoding in brain-machine interfaces. Using the Extreme Learning Machine algorithm and low-power analog processing, it achieves an energy efficiency of 3.45 pJ/MAC at a classification rate of 50 Hz. Learning in the second stage and the corresponding digitally stored coefficients are used to increase the robustness of the core analog processor. The chip is verified with neural data recorded in a monkey finger-movement experiment, achieving a decoding accuracy of 99.3% for movement type. The same coprocessor is also used to decode time of movement from asynchronous neural spikes. With time-delayed feature dimension enhancement, the classification accuracy can be increased by 5% with a limited number of input channels. Furthermore, a sparsity-promoting training scheme enables a reduction of the number of programmable weights by approximately 2X.
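For readers unfamiliar with the algorithm, a minimal NumPy Extreme Learning Machine looks as follows; the random hidden layer is fixed and only the output weights are solved in closed form, which is what makes a low-power analog implementation of the first stage attractive. The toy dimensions below are illustrative, not the chip's configuration:

```python
# Minimal Extreme Learning Machine: fixed random hidden layer, output
# weights solved by least squares. Algorithmic skeleton only.
import numpy as np

class ELM:
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_in, n_hidden))   # fixed random input weights
        self.b = rng.normal(size=n_hidden)
        self.beta = None                             # trained output weights

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, Y):
        H = self._hidden(X)
        self.beta, *_ = np.linalg.lstsq(H, Y, rcond=None)

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)

# Toy usage: 128-channel firing-rate features, 4 movement classes
X = np.random.rand(200, 128)
labels = np.random.randint(0, 4, size=200)
Y = np.eye(4)[labels]                                # one-hot targets
elm = ELM(n_in=128, n_hidden=256)
elm.fit(X, Y)
print((elm.predict(X) == labels).mean())
```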
988
Precise A∙T to G∙C base editing in the allotetraploid rapeseed (Brassica napus L.) genome
Rapeseed is an important oilseed crop worldwide, and genetic improvement has always been a major goal in rapeseed production. Single-nucleotide substitutions underlie most of the genetic variation underpinning important agronomic traits, and Cas-based base editing is an efficient tool for mediating single-base substitutions at target sites. In this study, four adenine base editors (ABEs) were modified to achieve adenine base editing at different genomic sites in allotetraploid Brassica napus. We designed 18 single guide RNAs targeting the phytoene desaturase (PDS), acetolactate synthase (ALS), CLAVATA3 (CLV3), CLV2, TRANSPARENT TESTA12 (TT12), carotenoid isomerase (CRTISO), de-etiolated-2 (DET2), BRANCHED1 (BRC1) and zeaxanthin epoxidase (ZEP) genes. Among the four ABE systems, pBGE17 had the highest base-editing efficiency, with an average of 3.51%. Targeted sequencing revealed that the editing window ranged from A5 to A8 relative to the protospacer-adjacent motif (PAM) sequence. Moreover, an ABEmax-nCas9NG system recognizing NG PAMs was developed, with a base-editing efficiency of 1.22%. These results show that the ABE systems developed in this study can efficiently induce A-to-G substitutions and that the ABE-nCas9NG system can broaden the editing window in oilseed rape.
989
Learning Geodesic Active Contours for Embedding Object Global Information in Segmentation CNNs
Most existing CNN-based segmentation methods rely on local appearances learned on the regular image grid, without considering global object information. This article aims to embed global geometric information about the object into a learning framework via classical geodesic active contours (GAC). We propose a level set function (LSF) regression network, supervised by the segmentation ground truth, the LSF ground truth and geodesic active contours, which not only generates the segmentation probability map but also directly minimizes the GAC energy functional in an end-to-end manner. With the help of geodesic active contours, the segmentation contour, embedded in the level set function, can be globally driven towards the image boundary to reach lower energy, and the geodesic constraint leads the segmentation result to have fewer outliers. Extensive experiments on four public datasets show that (1) compared with state-of-the-art (SOTA) learning-based active contour methods, our method achieves significantly better performance; (2) compared with recent SOTA methods designed to reduce boundary errors, our method also outperforms them with more accurate boundaries; and (3) compared with SOTA methods on two popular multi-class segmentation challenge datasets, our method still obtains superior or competitive results in both organ and tumor segmentation tasks. Our study demonstrates that introducing global information via GAC can significantly improve segmentation performance, especially by reducing boundary errors and outliers, which is very useful in applications such as organ transplantation surgical planning and multi-modality image registration, where boundary errors can be very harmful.
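Below is a sketch of how a GAC energy can appear as a differentiable loss on a predicted LSF; the smoothed Heaviside, the edge indicator, and the forward-difference gradients are standard choices assumed by us rather than taken from the paper:

```python
# Differentiable geodesic-active-contour energy on a predicted level set
# function (LSF): edge-indicator-weighted length of the implicit contour.
import torch

def _grad(t):
    # Forward differences with wrap-around border (acceptable for a sketch)
    gx = t - torch.roll(t, shifts=1, dims=3)
    gy = t - torch.roll(t, shifts=1, dims=2)
    return gx, gy

def gac_energy(phi, image, eps=1.0):
    """phi: (B,1,H,W) predicted LSF; image: (B,1,H,W) grayscale input."""
    gx, gy = _grad(image)
    g = 1.0 / (1.0 + gx**2 + gy**2)          # edge indicator: small on edges
    H = 0.5 * (1 + (2 / torch.pi) * torch.atan(phi / eps))  # smoothed Heaviside
    Hx, Hy = _grad(H)
    grad_mag = torch.sqrt(Hx**2 + Hy**2 + 1e-8)
    # g-weighted contour length: minimizing it drags the zero level set of
    # phi toward image boundaries and penalizes outlier blobs
    return (g * grad_mag).sum(dim=(1, 2, 3)).mean()

phi = torch.randn(2, 1, 64, 64, requires_grad=True)
img = torch.rand(2, 1, 64, 64)
gac_energy(phi, img).backward()   # usable as an extra loss term
```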
990
Effects of Nature-Based Group Art Therapy Programs on Stress, Self-Esteem and Changes in Electroencephalogram (EEG) in Non-Disabled Siblings of Children with Disabilities
The purpose of the present study was to examine changes in brain waves, stress, and self-esteem after a continuous eight-week nature-based art therapy program in the forest for non-disabled siblings of children with disabilities. A total of 29 participants were enrolled in this study (art therapy program group, n = 18; control group, n = 11). The art therapy program group received eight weekly 60-minute sessions of art therapy. Pre- and post-test results showed positive changes in the brain function index and stress levels of the participants in the art therapy program group. On the self-esteem scale, overall and social self-esteem increased significantly. In conclusion, creative activities in the forest can increase resistance to disease through mechanisms that relieve stress and increase self-esteem. If art therapy that emphasizes somatosensory experience, creative expression, and self-motivation is accompanied by forest activities, this combined intervention can elicit positive physical and psychological changes.
991
Evaluating and Improving Unified Debugging
Automated debugging techniques, including fault localization and program repair, have been studied for over a decade. However, the only existing connection between fault localization and program repair has been that fault localization computes the potential buggy elements for program repair to patch. Recently, a pioneering work, ProFL, explored the idea of unified debugging, unifying fault localization and program repair in the other direction for the first time to boost both areas. More specifically, ProFL utilizes the patch execution results from one state-of-the-art repair system, PraPR, to improve state-of-the-art fault localization. In this way, ProFL not only improves fault localization for manual repair, but also extends the application scope of automated repair to all possible bugs (not only the small ratio of bugs that repair systems can automatically fix). However, ProFL considers only one program repair system (i.e., PraPR), and it is not clear how other existing program repair systems based on different designs contribute to unified debugging. In this work, we perform the first extensive study of the unified debugging approach on 16 state-of-the-art program repair systems. Our initial experimental results on the widely studied Defects4J benchmark suite reveal various practical guidelines for unified debugging, such as: (1) nearly all 16 studied repair systems positively contribute to unified debugging despite their varying repair capabilities; (2) repair systems targeting multi-edit patches can bring extraneous noise into unified debugging; (3) repair systems with more executed/plausible patches tend to perform better for unified debugging; (4) unified debugging effectiveness does not rely on the availability of correct patches from automated repair; and (5) we propose a new unified debugging technique, UniDebug++, which localizes over 20% more bugs within Top-1 than the state-of-the-art unified debugging technique ProFL (evaluated against four Defects4J subjects). Furthermore, we conduct more comprehensive studies to extend the above experiments and make the following additional contributions: we (6) perform an extensive study on 76.3% additional buggy versions from Defects4J (for Closure and Mockito) and confirm that UniDebug++ again outperforms ProFL, localizing 185 (out of 395 in total) bugs within Top-1, 14% more than ProFL; (7) investigate the impact of 33 SBFL formulae on unified debugging and observe that UniDebug++ consistently improves upon all formulae, e.g., 61% and 53% average improvement on MFR/MAR; (8) demonstrate that UniDebug++ can substantially boost state-of-the-art learning-based method-level fault localization techniques; (9) extend unified debugging to the statement level for the first time and observe that UniDebug++ localizes 78 (out of 395 in total) bugs within Top-1 (22% more bugs than ProFL) and outperforms state-of-the-art learning-based fault localization techniques by 30%; and finally (10) propose a new technique, UniDebug+*, based on detailed patch statistics, which improves upon UniDebug++, e.g., further localizing up to 9% more bugs within Top-1.
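To make the unified-debugging signal concrete, here is a toy scoring scheme (not ProFL or UniDebug++ themselves): patch execution outcomes raise or lower the suspiciousness of the patched element, and elements whose patches make failing tests pass float to the top. The weights and element names are arbitrary illustrations:

```python
# Toy unified-debugging signal: aggregate patch execution outcomes into a
# per-element suspiciousness score, then rank elements for fault localization.
from collections import defaultdict

# (patched_element, n_failing_tests_fixed, n_passing_tests_broken)
patch_results = [
    ("Foo.parse", 2, 0),
    ("Foo.parse", 1, 0),
    ("Bar.render", 0, 3),
    ("Baz.init", 0, 0),
]

scores = defaultdict(float)
for element, fixed, broken in patch_results:
    scores[element] += 2.0 * fixed - 0.5 * broken  # heuristic weights

ranking = sorted(scores.items(), key=lambda kv: -kv[1])
print(ranking)  # [('Foo.parse', 6.0), ('Baz.init', 0.0), ('Bar.render', -1.5)]
```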
992
Sketch-and-Fill Network for Semantic Segmentation
Recent efforts in semantic segmentation using deep learning frameworks have made notable advances. While achieving high performance, however, they often require heavy computation, making them impractical for real-world applications. Two factors produce this prohibitive computational cost: 1) a heavy backbone CNN to create high-resolution contextual information and 2) complex modules to aggregate multi-level features. To address these issues, we propose a computationally efficient architecture called the "Sketch-and-Fill Network (SFNet)" with a three-stage Coarse-to-Fine Aggregation (CFA) module for semantic segmentation. In the proposed network, lower-resolution contextual information is produced first, so that the overall computation in the backbone CNN is largely reduced. Then, to alleviate the detail loss in the lower-resolution contextual information, the CFA module forms global structures and fills in fine details in a coarse-to-fine manner. To preserve global structures, the contextual information is passed to the CFA module without any reduction. Experimental results show that the proposed SFNet achieves significantly lower computational loads while delivering segmentation performance comparable to or better than state-of-the-art methods. Qualitative results show that our method is superior to state-of-the-art methods in capturing fine detail while maintaining global structures on the Cityscapes, ADE20K and RUGD benchmarks.
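One plausible reading of a single coarse-to-fine stage, sketched in PyTorch (the module name, channel sizes, and concatenation-based fusion are our assumptions, not the paper's code): the coarse contextual map supplies the "sketch", and the finer feature map "fills" in detail:

```python
# One stage of a coarse-to-fine aggregation: upsample the coarse contextual
# features and fuse them with a higher-resolution feature map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CFAStage(nn.Module):
    def __init__(self, coarse_ch, fine_ch, out_ch):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(coarse_ch + fine_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, coarse, fine):
        # "Sketch": upsample the global structure; "fill": add fine detail
        up = F.interpolate(coarse, size=fine.shape[2:], mode="bilinear",
                           align_corners=False)
        return self.fuse(torch.cat([up, fine], dim=1))

stage = CFAStage(coarse_ch=128, fine_ch=64, out_ch=64)
coarse = torch.randn(1, 128, 16, 16)   # low-resolution context
fine = torch.randn(1, 64, 64, 64)      # high-resolution detail
print(stage(coarse, fine).shape)       # torch.Size([1, 64, 64, 64])
```

Three such stages chained from the coarsest to the finest resolution would form the three-stage CFA module described above.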
993
Growth of desert varnish on petroglyphs from Jubbah and Shuwaymis, Ha'il region, Saudi Arabia
Petroglyphs, engraved throughout the Holocene into rock varnish coatings on sandstone, were investigated in the Ha'il region of northwestern Saudi Arabia, at Jabal Yatib, Jubbah, and Shuwaymis. The rock art was created by removing the black varnish coating and thereby exposing the light sandstone underneath. With time, the varnish, a natural manganese (Mn)-rich coating, grows back. To study the rate of regrowth, we made 234 measurements by portable X-ray fluorescence (pXRF) on intact varnish and engraved petroglyphs. Since many petroglyphs can be assigned to a specific time period, a relationship between their ages and the Mn surface densities (D_Mn) of the regrown material could be derived. This relationship was improved by normalizing the D_Mn of the petroglyphs by the D_Mn of adjacent intact varnish. In turn, we used this relationship to assign a chronologic context to petroglyphs of unknown ages. Following the removal of the varnish by the artist and prior to the beginning of Mn oxyhydroxide regrowth, a thin Fe-rich film forms on the underlying rock. This initial Fe oxyhydroxide deposit may act as a catalyst for subsequent fast Mn oxidation. After a few decades of relatively rapid growth, the regrowth of the Mn-rich varnish slows down to about 0.017 μg cm⁻² a⁻¹ Mn, corresponding to about 0.012% a⁻¹ of the intact varnish Mn density, or about 1.2 nm a⁻¹, presumably due to a change in the catalytic process. Our results suggest that petroglyphs were engraved almost continuously since the pre-Neolithic period, and that rock varnish growth proceeds roughly linearly, without detectable influence of regional Holocene climatic changes.
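As a back-of-the-envelope illustration of the dating logic, using the ~0.012% a⁻¹ linear rate quoted above (the real calibration against dated petroglyphs, and the initial decades of fast growth, are simplified away here):

```python
# Toy inversion of the linear varnish-regrowth model: normalized Mn surface
# density -> approximate engraving age. Illustration only, not the paper's
# calibrated relationship.
RATE_FRACTION_PER_YEAR = 0.00012  # ~0.012% of intact varnish density per year

def estimate_age_years(d_mn_petroglyph, d_mn_intact_adjacent):
    """Normalize regrown Mn density by adjacent intact varnish, then invert
    the linear growth model. Ignores the early rapid-growth phase."""
    d_norm = d_mn_petroglyph / d_mn_intact_adjacent
    return d_norm / RATE_FRACTION_PER_YEAR

# Example: regrown varnish at 30% of the adjacent intact density
print(round(estimate_age_years(0.3, 1.0)))  # ~2500 years
```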
994
Model-assisted content adaptive detail enhancement and quadtree decomposition for image visibility enhancement
In this work, a simple yet effective framework for visibility enhancement is proposed, using image decomposition, adaptive boundary constraints and quadtree-based dehazing, detail enhancement and fusion. The input image first undergoes image decomposition, splitting it into its ambient illumination and reflex lightness components. To increase brightness and contrast, a contrast-based enhancement algorithm is applied to the reflex lightness component, and the ambient illumination component is dehazed through the atmospheric scattering model. To estimate the airlight, instead of using the whole image, a simple and efficient method based on quadtree decomposition is used. The transmission map is computed through contextual regularization with adaptive boundary constraints. The dehazed ambient illumination component then undergoes detail enhancement to sharpen edges. The resulting image and the enhanced reflex lightness component are finally combined through fusion to obtain the final, artifact-free, enhanced image with preserved colors and details. The proposed methodology is evaluated on numerous images and compared with 8 different state-of-the-art techniques. Visual and quantitative comparisons with these techniques demonstrate the effectiveness of the proposed approach.
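A minimal version of quadtree airlight estimation reads as follows; the brightest-minus-variance score and the stopping block size are common choices in the dehazing literature and not necessarily those of the paper:

```python
# Quadtree airlight search: repeatedly recurse into the brightest, most
# uniform quadrant; when the block is small, its mean color is the airlight.
import numpy as np

def estimate_airlight(img, min_size=16):
    """img: (H, W, 3) float array in [0, 1]."""
    h, w = img.shape[:2]
    if min(h, w) <= min_size:
        return img.reshape(-1, 3).mean(axis=0)
    quads = [img[:h // 2, :w // 2], img[:h // 2, w // 2:],
             img[h // 2:, :w // 2], img[h // 2:, w // 2:]]
    # Score = brightness minus spread, favoring bright, uniform (hazy) regions
    scores = [q.mean() - q.std() for q in quads]
    return estimate_airlight(quads[int(np.argmax(scores))], min_size)

hazy = np.random.rand(256, 256, 3)
print(estimate_airlight(hazy))  # per-channel airlight estimate
```

Restricting the search to a small uniform block avoids picking bright non-sky objects, which is the usual failure mode of whole-image maximum-intensity estimates.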
995
DTS-SNN: Spiking Neural Networks With Dynamic Time-Surfaces
Convolution helps spiking neural networks (SNNs) capture the spatio-temporal structure of neuromorphic (event) data, as evident in convolution-based SNNs (C-SNNs) with state-of-the-art classification accuracies on various datasets. However, efficacy aside, the efficiency of C-SNNs is questionable. In this regard, we propose SNNs with novel trainable dynamic time-surfaces (DTS-SNNs) as efficient alternatives to convolution. The dynamic time-surface proposed in this work features high responsiveness to moving objects, given its use of a zero-sum temporal kernel motivated by the receptive fields of simple cells in the early visual pathway. We evaluated the performance and computational complexity of our DTS-SNNs on three real-world event-based datasets (DVS128 Gesture, Spiking Heidelberg Dataset, N-Cars). The results highlight high classification accuracies and significant improvements in computational efficiency, e.g., merely 1.51% behind the state-of-the-art result on DVS128 Gesture but an 18× improvement in efficiency. The code is available online (https://github.com/dooseokjeong/DTS-SNN).
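For intuition, here is a sketch of a (non-trainable) time-surface with a zero-sum temporal kernel; the exponential decay constant and the mean-subtraction form of "zero-sum" are our illustrative choices, whereas the paper learns its kernels:

```python
# Time-surface sketch: each pixel stores its last event time; the surface
# is an exponential decay of event recency. Subtracting the mean makes the
# kernel zero-sum, so static activity cancels and moving edges stand out.
import numpy as np

def time_surface(last_event_t, t_now, tau=50e-3):
    """last_event_t: (H, W) timestamps (np.inf where no event has occurred)."""
    surface = np.zeros_like(last_event_t, dtype=float)
    mask = np.isfinite(last_event_t)
    surface[mask] = np.exp(-(t_now - last_event_t[mask]) / tau)
    return surface - surface.mean()  # zero-sum: responds to change, not level

H, W = 8, 8
last_t = np.full((H, W), np.inf)
for t, x, y in [(0.01, 2, 3), (0.02, 3, 3), (0.03, 4, 3)]:  # a moving edge
    last_t[y, x] = t
print(time_surface(last_t, t_now=0.035).round(3))
```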
996
Guided Zoom: Zooming into Network Evidence to Refine Fine-Grained Model Decisions
In state-of-the-art deep single-label classification models, the top-k (k = 2, 3, 4, ...) accuracy is usually significantly higher than the top-1 accuracy. This is more evident on fine-grained datasets, where differences between classes are quite subtle. Exploiting the information provided by the top-k predicted classes boosts the final prediction of a model. We propose Guided Zoom, a novel way in which explainability can be used to improve model performance, by making sure the model has "the right reasons" for a prediction. The reason, or evidence, upon which a deep neural network makes a prediction is defined as the grounding, in pixel space, for a specific class-conditional probability in the model output. Guided Zoom examines how reasonable the evidence used to make each of the top-k predictions is. Test-time evidence is deemed reasonable if it is coherent with the evidence used to make similar correct decisions at training time. This leads to better-informed predictions. We explore a variety of grounding techniques and study their complementarity for computing evidence. We show that Guided Zoom improves a model's classification accuracy and achieves state-of-the-art classification performance on four fine-grained classification datasets.
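Schematically, the refinement can be thought of as re-ranking the top-k classes by the coherence between test-time evidence and stored training-time evidence. Everything below (the pooled evidence vectors, per-class prototypes, and the mixing weight alpha) is a hypothetical simplification of Guided Zoom, not its published pipeline:

```python
# Toy evidence-based re-ranking: each top-k class is rescored by the cosine
# similarity of its test-time evidence to that class's training-time
# evidence prototype.
import numpy as np

def rerank_topk(logits, test_evidence, class_prototypes, k=3, alpha=0.5):
    """test_evidence: per-class evidence vectors (e.g., pooled saliency maps);
    class_prototypes: mean training-time evidence per class (hypothetical)."""
    topk = np.argsort(logits)[::-1][:k]
    scores = {}
    for c in topk:
        e, p = test_evidence[c], class_prototypes[c]
        coherence = e @ p / (np.linalg.norm(e) * np.linalg.norm(p) + 1e-8)
        scores[c] = logits[c] + alpha * coherence
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
logits = rng.normal(size=10)
evidence = rng.random((10, 64))
prototypes = rng.random((10, 64))
print(rerank_topk(logits, evidence, prototypes))  # refined class index
```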
997
GII.17[P17] and GII.8[P8] noroviruses showed different RdRp activities associated with their epidemic characteristics
Norovirus is the primary foodborne pathogen causing viral acute gastroenteritis. It possesses broad genetic diversity, and the prevalence of different genotypes varies substantially. However, the differences in RNA-dependent RNA polymerase (RdRp) activity among different norovirus genotypes remain unclear. In this study, the molecular mechanism behind the difference in RdRp activity between the epidemic strain GII.17[P17] and the non-epidemic strain GII.8[P8] was characterized. By evaluating the evolutionary history of RdRp sequences with the Markov Chain Monte Carlo method, the evolution rate of GII.17[P17] variants was found to be higher than that of GII.8[P8] variants (1.22 × 10⁻³ vs. 9.31 × 10⁻⁴ nucleotide substitutions/site/year, respectively). Enzyme catalysis assays demonstrated that the Vmax value of GII.17[P17] RdRp was 2.5 times that of GII.8[P8] RdRp, and the Km values of GII.17[P17] and GII.8[P8] RdRp were 0.01 and 0.15 mmol/L, respectively. Then, GII.8[P8] RdRp fragment mutants (A-F) were designed, among which GII.8[P8]-A/B, containing the conserved motifs G/F, were found to significantly improve RdRp activity. The Km values of GII.8[P8]-A/B reached 0.07 and 0.06 mmol/L, respectively, and their Vmax values were 1.34 times that of GII.8[P8] RdRp. In summary, our results suggest that RdRp activities are correlated with epidemic characteristics. These findings ultimately provide a better understanding of the replication mechanism of noroviruses and support the development of antiviral drugs.
998
High-Rate Maximum Runlength Constrained Coding Schemes Using Nibble Replacement
In this paper, we present coding techniques for the character-constrained channel, where information is conveyed using q-bit characters (nibbles) and where w prescribed characters are disallowed. Using codes for the character-constrained channel, we present simple and systematic constructions of high-rate binary maximum-runlength constrained codes. The new constructions have the virtue that large lookup tables for encoding and decoding are not required. We compare the error propagation performance of codes based on the new construction with that of prior-art codes.
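Two small helpers illustrate the constraints at play (an illustration only, not the paper's construction). With q = 4, forbidding the all-zeros nibble, for example, already bounds the length of any zero run across character boundaries to 2(q - 1) = 6, which is the kind of link between character constraints and runlength constraints that a nibble-based construction can exploit:

```python
# Helpers for inspecting runlength and character constraints on a bitstream.
def max_zero_run(bits):
    """Longest run of consecutive zeros in the bit sequence."""
    run = best = 0
    for b in bits:
        run = run + 1 if b == 0 else 0
        best = max(best, run)
    return best

def nibbles(bits, q=4):
    """Split the bit sequence into q-bit characters."""
    return [tuple(bits[i:i + q]) for i in range(0, len(bits), q)]

FORBIDDEN = {(0, 0, 0, 0)}  # w = 1 disallowed character (assumed example)

word = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1]
print(max_zero_run(word))                          # 2
print(any(n in FORBIDDEN for n in nibbles(word)))  # False: word is admissible
```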
999
Modified U-Net (mU-Net) With Incorporation of Object-Dependent High Level Features for Improved Liver and Liver-Tumor Segmentation in CT Images
Segmentation of livers and liver tumors is one of the most important steps in radiation therapy of hepatocellular carcinoma. The segmentation task is often done manually, making it tedious, labor-intensive, and subject to intra-/inter-operator variations. While various algorithms for delineating organs-at-risk (OARs) and tumor targets have been proposed, automatic segmentation of livers and liver tumors remains intractable due to their low tissue contrast with respect to the surrounding organs and their deformable shapes in CT images. The U-Net has recently gained popularity for image analysis tasks and has shown promising results. Conventional U-Net architectures, however, suffer from three major drawbacks. First, skip connections allow the duplicated transfer of low-resolution information in feature maps to improve learning efficiency, but this often leads to blurring of extracted image features. Second, the high-level features extracted by the network often do not contain enough high-resolution edge information of the input, leading to greater uncertainty in tasks where high-resolution edges dominantly affect the network's decisions, such as liver and liver-tumor segmentation. Third, it is generally difficult to optimize the number of pooling operations for extracting high-level global features, since the appropriate number of pooling operations depends on the object size. To cope with these problems, we added a residual path with deconvolution and activation operations to the skip connection of the U-Net to avoid duplication of low-resolution feature information. For small object inputs, features in the skip connection are not incorporated with features in the residual path. Furthermore, the proposed architecture has additional convolution layers in the skip connection in order to extract high-level global features of small object inputs as well as high-level features of the high-resolution edge information of large object inputs. The efficacy of the modified U-Net (mU-Net) was demonstrated using the public dataset of the Liver Tumor Segmentation (LiTS) challenge 2017. For liver-tumor segmentation, a Dice similarity coefficient (DSC) of 89.72%, a volumetric overlap error (VOE) of 21.93%, and a relative volume difference (RVD) of -0.49% were obtained. For liver segmentation, a DSC of 98.51%, a VOE of 3.07%, and an RVD of 0.26% were calculated. On the public 3D Image Reconstruction for Comparison of Algorithm Database (3Dircadb), DSCs were 96.01% for liver and 68.14% for liver-tumor segmentation, respectively. The proposed mU-Net outperformed existing state-of-the-art networks.
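A hedged PyTorch sketch of the modified skip connection described above, with a residual deconvolution path plus extra convolutions; the channel counts and exact layer placement are our guesses for illustration, not the released architecture:

```python
# Modified skip connection sketch: a residual path (deconvolution +
# activation) keeps low-resolution skip features from being merely
# duplicated, while extra convolutions extract higher-level features.
import torch
import torch.nn as nn

class ResidualSkip(nn.Module):
    def __init__(self, ch):
        super().__init__()
        # Residual path: deconvolution + activation on the skip features
        self.res_path = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True))
        # Extra convolution in the skip connection for higher-level features
        self.extra_conv = nn.Sequential(
            nn.Conv2d(ch, ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True))

    def forward(self, skip_feat):
        return self.extra_conv(skip_feat) + self.res_path(skip_feat)

skip = ResidualSkip(ch=64)
x = torch.randn(1, 64, 128, 128)   # an encoder feature map
print(skip(x).shape)               # torch.Size([1, 64, 128, 128])
```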